2304.06836 | Neural Network Architectures for Optical Channel Nonlinear Compensation in Digital Subcarrier Multiplexing Systems | Ali Bakhshali, Hossein Najafi, Behnam Behinaein Hamgini, Zhuhong Zhang | 2023-04-13T21:58:23Z | http://arxiv.org/abs/2304.06836v1

# Neural Network Architectures for Optical Channel Nonlinear Compensation in Digital Subcarrier Multiplexing Systems
###### Abstract
In this work, we propose to use various artificial neural network (ANN) structures for modeling and compensation of intra- and inter-subcarrier fiber nonlinear interference in digital subcarrier multiplexing (DSCM) optical transmission systems. We perform nonlinear channel equalization by employing different ANN cores, including convolutional neural network (CNN) and long short-term memory (LSTM) layers. We start by compensating the fiber nonlinearity distortion in DSCM systems with a fully-connected network across all subcarriers. In subsequent steps, and borrowing from fiber nonlinearity analysis, we gradually upgrade the designs towards modular structures with better performance-complexity advantages. Our study shows that adopting proper macro structures in the design of ANN nonlinear equalizers for DSCM systems can be crucial for practical solutions in future generations of coherent optical transceivers.
Ottawa Optical Competency Center, Huawei Technologies Canada, 303 Terry Fox Dr, Kanata K2K 3J1
\({}^{*}\)ali.bakhshali@huawei.com
## 1 Introduction
For high-speed long-haul fiber-optic transmission, the nonlinear interference arising from the Kerr effect is a major bottleneck that limits the achievable transmission rates. This interference can be equalized by approximating and inverting the nonlinear Schrödinger equation through digital back-propagation (DBP) [1, 2, 3] or perturbation-based nonlinear compensation (PNLC) [4, 5]. However, the high computational complexity and the need for accurate data about the propagation link have limited their application in real-time processing with agile and flexible requirements.
Alternatively, a variety of ANN solutions have recently been proposed for fiber nonlinearity compensation. Early works tried to squeeze out additional performance by feeding triplets inspired by the perturbation analysis of fiber nonlinearity to a feed-forward neural network [6]. Later works drew inspiration from DBP and aimed to incorporate deep convolutional neural networks (CNNs) for this task [7, 8]. The use of advanced recurrent neural networks (RNNs), such as long short-term memory (LSTM) modules, which are better suited to the equalization of time-series processes, has also attracted great interest [9, 10]. In fact, the pattern- and medium-dependent characteristics of nonlinear propagation make it a suitable problem to be tackled by a variety of solutions from the artificial neural network domain. In general, an ANN-based nonlinear equalizer is more flexible than conventional methods in the sense that it can be updated for different transmission scenarios without the need for accurate feedback of the channel parameters. Also, ANN nonlinear equalizers can be extended to include the functionalities of traditional DSP modules to create a more general equalizer. Furthermore, an ANN design in which the compensation process is learned from data can potentially lead to a large reduction in computational complexity [11].
In this work, we consider an application of ANNs in coherent optical communications. We focus on advanced ANN structures with the ability to generate appropriate features without any reliance on an external module for feature generation. We particularly study digital subcarrier multiplexing (DSCM) systems since their design flexibility makes them a promising solution for coherent optical modems. Simplified DSP with lower-speed per-subcarrier processing, flexible channel-matched transmission, robust clock recovery, and an easy transition to a point-to-multi-point (P2MP) architecture are some of the advantages of DSCM systems.
Here, we develop _macro_ ANN structures, inspired by the fiber nonlinearity distortion mechanism that governs the nonlinear interaction across different subcarriers, which have been shown to be more efficient in terms of inference complexity, model representation, and trainability. We propose various ANN structures for modeling and compensation of intra- and inter-subcarrier fiber nonlinearities in DSCM systems, and explore the scalability and the performance-versus-complexity tradeoffs of the presented solutions. The models differ in how received symbols across digital subcarriers are employed to train ANN cores for intra-subcarrier self-phase modulation (iSPM) and inter-subcarrier cross-phase modulation (iXPM) nonlinear impairments. Starting with a fully-connected network across all subcarriers, we move toward upgrading the design with modular ANN cores and sequential training stages. In other words, we start with black-box ANN models and then propose more efficient and flexible modular designs inspired by nonlinear perturbation analysis. All models here are universal from the ANN-core choice perspective. Specifically, we choose the building block for all the proposed structures in this work to be an ANN core with combinations of CNN and LSTM layers. One important aspect of this work is to generalize the neural network designs such that a block of data is produced per equalization pass, since parallelization is an essential feature of coherent modems. We explore the parallelization of these designs and the impact of block-processing on the performance-versus-complexity tradeoffs of these models. We show that one can obtain orders-of-magnitude reduction in computational complexity by moving towards block equalization in this fashion.
The remainder of this paper is organized as follows. In Section 2, the basis of nonlinear compensation for the fiber channel is briefly discussed. In Section 3, the multi-purpose ANN-core structure serving as the building block of the proposed models is explained. The details of various ANN structures for NLC in DSCM are presented in Section 4, while Section 5 is devoted to the numerical setup and comparison of results. Next, the impact of the dispersion map on the nonlinear equalizer designs is discussed in Section 6. Finally, we conclude the paper in Section 7.
## 2 Nonlinear Compensation for Optical Fiber Channel
The dual-polarization evolution of the optical field over a fiber link is described by the Manakov equation [12], where the linear and nonlinear propagation effects are expressed as follows:
\[\frac{\partial u_{x/y}}{\partial z}+\frac{\alpha}{2}u_{x/y}+j\frac{\beta}{2}\frac{\partial^{2}u_{x/y}}{\partial t^{2}}=j\frac{8}{9}\gamma\left[\left|u_{x}\right|^{2}+\left|u_{y}\right|^{2}\right]u_{x/y}, \tag{1}\]
where \(u_{x/y}=u_{x/y}(t,z)\) represents the optical field of polarization \(x\) and \(y\), respectively, \(\alpha\) is the attenuation coefficient, \(\beta\) is the group velocity dispersion (GVD), and \(\gamma\) is the nonlinear coefficient. The nonlinear interference can be equalized by approximating and inverting the above equation through DBP [1, 2, 3], where the fiber is modeled as a series of linear and nonlinear sections through a first-order approximation of the Manakov equation. On the other hand, by employing the perturbation analysis [4], one can represent the optical field as the solution of linear propagation plus a perturbation term from the nonlinear impact, in the symbol domain and in one step for the accumulated nonlinearities. It has been shown that the first-order perturbation term can be modeled by a weighted sum of triplets of transmitted symbols plus a constant phase rotation [5, 13].
A wide variety of machine learning solutions for fiber nonlinearity compensation in optical communications have been proposed in the literature ([6, 7, 8, 10] among others). These solutions are generally benchmarked against conventional solutions such as DBP and PNLC. Considering the lumped nonlinear compensation methods, a block diagram for the equalization module is presented in Fig. 1, where the pre-processing buffer generates appropriate inputs for a given method. Specifically, it includes a module that calculates the appropriate PNLC triplets for the regular perturbation-based method or for an artificial neural network nonlinear compensation (ANN-NLC) approach that operates on externally generated triplet features [6]. In ANN-NLC solutions that directly operate on Rx-DSP outputs [14, 15, 8, 10], this pre-processing buffer is tasked with providing an extended block of soft symbols needed to efficiently equalize the nonlinear interference.
Considering the first-order perturbation as the dominant nonlinear term, an appropriate scaling can be employed to adapt the nonlinear error estimates when the training and inference stages are performed at different optical launch powers:
\[\alpha=10^{\left(P_{\text{inference}}(\text{dB})-P_{\text{train}}(\text{dB}) \right)/10}. \tag{2}\]
where \(P_{\text{train}}\) is the optical launch power of the data used in training, while \(P_{\text{inference}}\) is the respective optical launch power of the data in the inference (equalization) stage.
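As a small illustration, the scaling of Eq. (2) maps directly to code; the function name and the example powers below are hypothetical:

```python
def nlc_scaling(p_inference_dbm: float, p_train_dbm: float) -> float:
    """Eq. (2): scale factor applied to the ANN's nonlinear error estimates
    when inference runs at a launch power different from training."""
    return 10 ** ((p_inference_dbm - p_train_dbm) / 10)

# e.g., a model trained at 2 dBm and deployed at 1 dBm:
alpha = nlc_scaling(1.0, 2.0)  # ~0.794, scales down the NL error estimates
```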
In this work, we consider various lumped ANN structures for modeling and compensation of fiber nonlinearity in DSCM systems. We follow an evolutionary approach for designing advanced ANN models that do not rely on external features (such as triplets) as input. Hence, by using symbols in a delay-line format as the input, the model learns the relevant features according to its structure through additional layers. The proposed ANN-NLC equalizers estimate the nonlinear distortions of each subcarrier in one polarization of a DSCM signal given the relevant information from all digital subcarriers across both polarizations. Due to the nature of signal propagation in fiber and symmetries in the medium, it has been shown that the same model can be used to generate nonlinear error estimates for the other polarization by simply swapping the input signals with their respective counterparts from the first polarization. This alleviates the need to train separate models for the X and Y polarizations and enables efficient learning of a generalized model.
## 3 Multi-Purpose ANN-Core Structure
The ANN networks presented here mainly explore different higher-level structures that aim to exploit the interaction between each target digital subcarrier and its neighbors in search of more powerful and efficient models. Hence, the models are universal from the ANN-core choice perspective. Specifically, we choose the building block for all the proposed models in this work to be an ANN core comprising a combination of CNN and LSTM layers. In particular, we employ LSTM units, which have been shown to have promising capability in efficiently learning complicated dynamic nonlinear systems. The first layer is a 1-dimensional CNN followed by a Leaky ReLU activation function, tasked with helping in feature generation. The CNN features are fed into an LSTM module with a bi-directional structure to extract the time-dependency of the input features.
To save computational resources in equalization, one can share the computational overhead corresponding to the initialization of each LSTM chain by expanding the input and output sequences and providing estimates for multiple time instances. LSTMs are highly suitable for reducing the processing overhead of a sequential input stream since they aim to capture the most relevant representations of the past observed inputs in the form of hidden states. These hidden state variables are updated as new inputs are processed sequentially. However, the output remains an explicit function of the inputs and hidden state variables at every time instance. Consequently,
Figure 1: Block diagram for lumped perturbation-based nonlinear compensation.
equalization of any extra input only increases the total computations by one extra RNN processing step. To leverage this capability, simplify training, and avoid the challenges of long back-propagation in LSTMs, these neural networks are trained with regular symbol-based processing, while block-processing is employed during the deployment and evaluation stage. Note that by using block-processing in the equalization path, we introduce an approximation into the network, which was trained with different initial hidden states. However, with a long enough training block size and LSTM filter-tap, one can show that the change in the states is minimal [15]. This is reflected in the complexity figures as we deploy trained models with different block sizes \(N\) in the numerical results.
A block diagram of the proposed ANN equalization core is depicted in Fig. 2. The LSTM network has been trained using a fixed sequence of features corresponding to \(2k+1\) time instances, where \(k\) is the filter-tap size on each side of the target symbol. In the equalization path, we deploy the same network over input feature sequences corresponding to \(2k+N\) time instances to obtain output features associated with the symbols in the middle \(N\) time instances. In this case, the input features corresponding to the first \(k+N\) time instances \(i\in\{-k+1,\ldots,N\}\) are sequentially fed into a forward LSTM unit initialized with zero memory, producing output features and evolving the internal memory states. A similar, backward LSTM unit starts with zero memory and evolves using the CNN features corresponding to the last \(k+N\) time instances of the \(2k+N\) window \(i\in\{1,\ldots,N+k\}\) in the opposite direction. The outputs of the forward and backward LSTM modules for the middle \(N\) time instances are concatenated to form the LSTM block outputs. Finally, the LSTM block outputs may pass through a linear or a multi-layer perceptron (MLP) stage with Leaky ReLU activation functions (for all but the last layer) that ultimately provides estimates of the real and imaginary parts of the nonlinear interference per output. Note that, as we discuss further in Section 4, the final MLP layer can be separated from the ANN core and trained individually in some architectures.
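To make the structure concrete, the following is a minimal PyTorch sketch of such a core (1-D CNN, bidirectional LSTM, MLP head); PyTorch itself and all layer sizes are our assumptions for illustration, not the configurations used in the paper:

```python
import torch.nn as nn

class ANNCore(nn.Module):
    """CNN + bidirectional LSTM + MLP core in the spirit of Fig. 2."""
    def __init__(self, in_ch=4, n_filters=32, kernel=11, hidden=64, mlp_hidden=32):
        super().__init__()
        # 1-D CNN over the symbol delay-line; 4 channels = I/Q of X and Y pol.
        self.cnn = nn.Conv1d(in_ch, n_filters, kernel_size=kernel)
        self.act = nn.LeakyReLU(0.1)
        # bidirectional LSTM extracting time dependencies of the CNN features
        self.lstm = nn.LSTM(n_filters, hidden, batch_first=True, bidirectional=True)
        # MLP head: 2 outputs = real/imag parts of the NL interference estimate
        self.mlp = nn.Sequential(nn.Linear(2 * hidden, mlp_hidden),
                                 nn.LeakyReLU(0.1),
                                 nn.Linear(mlp_hidden, 2))

    def forward(self, x):          # x: (batch, 4, 2k + N) window of soft symbols
        f = self.act(self.cnn(x))  # (batch, n_filters, L')
        h, _ = self.lstm(f.transpose(1, 2))  # (batch, L', 2 * hidden)
        # in block-processing mode, only the middle N positions (which have
        # full two-sided context) would be kept downstream
        return self.mlp(h)         # (batch, L', 2) per-position NL estimates
```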
In order to get a measure of complexity for the multi-purpose ANN core, we consider a CNN-LSTM network with an MLP output layer. Let us consider equalization of \(N\) symbols with a processing window of \(N_{w}=2t+N\). The number of real multiplications per symbol (RM) for the CNN is equal
Figure 2: Multi-purpose ANN-Core structure for the equalization path.
to:
\[CNN_{RM}=\frac{4N_{f}\,N_{ke}(N_{w}-N_{ke}+1)}{N}, \tag{3}\]
where \(N_{f}\) is the number of filters and \(N_{ke}\) is the kernel size, for four input channels corresponding to the in-phase and quadrature symbols of the X and Y polarizations. In case information from multiple subcarriers is fed as input to the ANN core, \(N_{f}\) should be scaled accordingly. The convolutional layer is assumed to have zero padding with single stride and dilation. For the LSTM network, consider the input sequence length in each direction as \(N_{s}=k+N\), where \(k=t-(N_{ke}-1)/2\) is the extra symbol length on each side of the LSTM input. In this case, the combined RM for the forward and backward LSTMs is given by:
\[LSTM_{RM}=\frac{2N_{s}N_{h}(4(N_{f}+N_{h})+3)}{N}, \tag{4}\]
where \(N_{h}\) is the hidden size. Finally, for an MLP with single hidden layer at the output of LSTM network, the RM is described by:
\[MLP_{RM}=n_{m}\cdot 2N_{h}+2n_{m}, \tag{5}\]
where \(n_{m}\) is the hidden layer size. In case the MLP contains more than one hidden layer, extra multiplications should be added accordingly. Furthermore, in the absence of any hidden layer, \(MLP_{RM}=4N_{h}\), where the factor 4 accounts for the two directions of the LSTM and the two outputs (I and Q) per output symbol.
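As a sketch, Eqs. (3)-(5) can be collected into a small helper that returns the per-symbol RM of one core; the function itself is ours, the formulas are those above:

```python
def ann_core_rm_per_symbol(N, t, N_f, N_ke, N_h, n_m):
    """Per-symbol real multiplications of a CNN-LSTM-MLP core, Eqs. (3)-(5).
    N: block size, t: one-sided window so that N_w = 2t + N,
    N_f: CNN filters, N_ke: kernel size, N_h: LSTM hidden size,
    n_m: MLP hidden-layer size (single hidden layer assumed)."""
    N_w = 2 * t + N
    cnn = 4 * N_f * N_ke * (N_w - N_ke + 1) / N            # Eq. (3)
    k = t - (N_ke - 1) // 2                                # extra symbols per side
    N_s = k + N                                            # LSTM sequence length
    lstm = 2 * N_s * N_h * (4 * (N_f + N_h) + 3) / N       # Eq. (4)
    mlp = n_m * 2 * N_h + 2 * n_m                          # Eq. (5)
    return cnn + lstm + mlp
```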
Note that to obtain the complexity of each structure in Section 4, we need to calculate and accumulate the RMs associated with the ANN cores in the equalization path of all subcarriers. Thus, we mainly use the number of real multiplications per super-symbol (RMpS) as the complexity metric for each realization of an architecture. Note that a super-symbol denotes the combined output symbols of all digital subcarriers across one polarization at each time instance. While we limit the scope of this paper to a DSCM system with four subcarriers, this metric enables us to further compare the results with other single-carrier and DSCM transmission systems that operate at a similar baudrate and are tailored for the same throughput in future studies.
## 4 ANN Structures for NLC in DSCM
### Common-Core (CC)
The first structure for joint NLC in DSCM is a fully-connected black-box approach that contains only one ANN core. This single ANN core is tasked with providing nonlinear distortion estimates for all subcarriers of one polarization using a window of received symbols from all subcarriers in both polarizations (as depicted in Fig. 3a). Note that employing the CC model, which lacks any enforced structure separating the iSPM and iXPM nonlinear contributions, can be seen as a double-edged sword. On the one hand, it increases the number of training parameters compared to a specialized physics-informed ANN where a predetermined structure is enforced on the ANN architecture. On the other hand, by not imposing any structure on the network, we allow maximum entanglement of iSPM and iXPM features through the different layers of the ANN core. This can potentially lead to higher efficiency by allowing the network to avoid duplicating terms that could be shared in the absence of a single, fully-connected structure. However, there is always the possibility that the ANN core structure may not be inherently powerful enough for the underlying nonlinear mechanism to learn all the appropriate features, even when higher-complexity realizations are allowed. This could severely limit the performance, especially in the absence of adequate training data, and defeat this purpose.
### Separate-Core-per-Band (SC)
In order to obtain a subcarrier-based structure and parallelize the model, we allocate a separate ANN core to estimate the nonlinear distortion for each subcarrier output. Note that, similar to CC, the ANN cores in SC still operate on input information from all subcarriers. This design is illustrated in Fig. 3(b). The motivation here is to employ separate and smaller cores per subcarrier in order to be more effective in fine-tuning the model parameters. This is important since inner and outer subcarriers may experience different balances of iSPM and iXPM nonlinear distortions. Also, in terms of flexibility, in case there are inactive subcarriers due to network throughput demands, such as in hitless capacity upgrades or in P2MP scenarios, the parallel design of SC can be more efficiently deployed compared to the single connected-core architecture of CC. However, one potential drawback of this structure is that utility sharing between the equalization paths of different subcarriers is prevented.
### Modular-I (M1)
We move on from the black-box approach to design more efficient and flexible ANN-NLC models by using insights from the perturbation analysis of fiber nonlinear propagation. Specifically, the underlying mathematics behind the iSPM triplet coefficients in perturbation analysis only weakly relies on the absolute position of a subcarrier in the spectrum [4]1. Also, the iXPM nonlinearity mechanism relies on the relative position of the target and interfering subcarriers. Hence, only a small set of iSPM and iXPM cores needs to be trained, and multiple instances of these trained cores are deployed as needed in the equalization path. Furthermore, since iXPM contributions are more pronounced among neighboring subcarriers, smaller and more efficient networks can be deployed by involving only the iXPM contributions of the immediate neighboring subcarriers. Fig. 4(a) illustrates a set of ANN cores that can be trained for the M1 design, where one iSPM and four iXPM cores are trained to model the intra- and inter-subcarrier nonlinearities for up to two neighboring subcarriers on each side. Note that the input to an iSPM core is a window of the target subcarrier symbols, while the iXPM cores employ symbols from both the target and interfering subcarriers.
Footnote 1: This statement is technically accurate in the absence of higher-order linear distortion terms such as dispersion slope, but can be sufficiently accurate even in the presence of such terms in practical systems.
Let us look at an implementation of the M1 model that considers iXPM nonlinearities of up to two neighboring interfering subcarriers on each side of every output subcarrier. The block diagram
Figure 3: ANN-NLC structures: (a) Common-Core, and (b) Separate-Core-per-Band
for this modular NL equalizer is depicted in Fig. 4(b), where four iSPM cores compensate the self-nonlinearities originating from each subcarrier. Moreover, the two inner and the two outer subcarrier pairs additionally employ three and two iXPM cores, respectively. Note that ANN cores with similar color share the same layouts and weights, leading to more efficient training, specifically with limited data. Provided that the channel parameters and the subcarrier bandwidth and spacing remain the same, additional cores with the learned weights and biases from this example can be deployed for systems with a higher number of subcarriers. With a proper training strategy, the proposed structure allows us to separate iSPM and iXPM contributions and informedly direct computational resources to the best route. This is evident in the numerical results, where we explore moving beyond iSPM compensation for various modular designs.
Another advantage of this modular design can be seen in certain scenarios, such as hitless capacity upgrades or P2MP operation, wherein certain subcarriers may be turned off. In this case, SC and especially CC models trained with all subcarriers may not be efficiently utilized, as the statistics of the inputs to the ANN core(s) for the deactivated subcarrier(s) would be vastly different from training. Additionally, it would be almost impossible to effectively identify and disable routes within the ANN that correspond to the absent subcarriers in order to save power or reduce penalty. However, a modular design can be readily reconfigured to accommodate such scenarios
Figure 4: Structural design of ANN-NLC using Modular-I. (a) illustrates the trained cores and (b) illustrates the implementation for a 4-subcarrier system.
by deactivating the equalization paths corresponding to absent subcarriers, leading to a flexible and power-efficient deployment.
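For illustration, the weight sharing of Fig. 4 can be expressed as a sketch in which one core per relative offset is instantiated once and reused across subcarriers; the factory `make_core` and all shapes below are assumptions:

```python
import torch
import torch.nn as nn

class ModularM1(nn.Module):
    """M1-style equalizer for 4 subcarriers, iXPM up to two neighbors per side.
    `make_core(n_subc_in)` is an assumed factory returning an ANN core that
    accepts symbols of `n_subc_in` subcarriers (e.g., the CNN-LSTM core above)."""
    def __init__(self, make_core, n_subc=4):
        super().__init__()
        self.n_subc = n_subc
        self.ispm = make_core(n_subc_in=1)                    # shared iSPM core
        self.ixpm = nn.ModuleDict({str(l): make_core(n_subc_in=2)
                                   for l in (-2, -1, 1, 2)})  # shared per offset

    def forward(self, subc):       # subc: list of per-subcarrier input tensors
        out = []
        for i in range(self.n_subc):
            est = self.ispm(subc[i])
            for l in (-2, -1, 1, 2):
                j = i + l
                if 0 <= j < self.n_subc:   # neighbor exists and is active
                    pair = torch.cat([subc[i], subc[j]], dim=1)
                    est = est + self.ixpm[str(l)](pair)
            out.append(est)
        return out
```

Deactivating a subcarrier then amounts to skipping the corresponding equalization paths, mirroring the reconfigurability discussed above.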
### Modular-II (M2)
The next step in the evolution of ANN-NLC for DSCM is rooted in two observations. First, the perturbation analysis [13, 16, 5] suggests that the iXPM perturbation coefficients \(C_{m,n}^{(-\ell)}\) governing the interaction of subcarrier \(i\) and its \(\ell\)-th neighbor on the right, \(i+\ell\), are similar to those of subcarrier \(i\) and its \(\ell\)-th neighbor on the left, \(i-\ell\), provided that we employ a simple transformation, i.e.,
\[C_{m,n}^{(-\ell)}=\left(C_{-m,n}^{(\ell)}\right)^{*}, \tag{6}\]
where \(m\) and \(n\) are the symbol indices. Additionally, since these perturbation coefficients mainly rely on the relative position of subcarriers, the similarity can be extended to subcarrier \(i+\ell\) and its \(\ell\)-th neighbor on the left, \(i\). Note that the iXPM(\(+\ell\)) and iXPM(\(-\ell\)) cores in M1 for subcarriers \(i\) and \(i+\ell\), respectively, are solely fed by inputs from these two subcarriers. This hints at potential computational savings by merging the iXPM(\(+\ell\)) and iXPM(\(-\ell\)) cores in M1 that operate on the same subcarriers into a super-core iXPM(\(\pm\ell\)), potentially obtaining a more efficient structure that preserves similar performance levels at a lower complexity.
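As a small numerical illustration of Eq. (6), the coefficients of a negative-offset neighbor can be obtained from those of the positive offset by an index flip in \(m\) and a conjugation; the coefficient values below are random placeholders:

```python
import numpy as np

M = 5                                   # one-sided symbol-index range (assumed)
rng = np.random.default_rng(0)
C_pos = (rng.normal(size=(2*M + 1, 2*M + 1))
         + 1j * rng.normal(size=(2*M + 1, 2*M + 1)))  # stands in for C^{(+l)}_{m,n}

# Eq. (6): C^{(-l)}_{m,n} = conj(C^{(+l)}_{-m,n}); with the m axis indexed
# from -M..M, flipping that axis implements the m -> -m reversal.
C_neg = np.conj(C_pos[::-1, :])
```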
The output features of these super-cores, along with the appropriate iSPM features, are passed to separate MLP modules prior to aggregation for each subcarrier. Note that the MLP layers are detached from the ANN cores in this design, and a set of \(2\ell+1\) MLP modules is trained in this approach to model the integration of iSPM features with up to \(2\ell\) iXPM core features involving neighboring subcarriers. The trained MLP modules are appropriately instantiated in the inference path for each subcarrier.
In summary, the potential performance-versus-complexity advantages of M2 are twofold. First, merging cores that are believed to contain a significant amount of shared feature-generation computations can increase model efficiency. Second, reducing the distinct parameters of a network by replicating trained modules can greatly improve training efficiency and result in more generalized models. Also, as mentioned before, the modular design provides
Fig. 5: Modular-II design for DSCM ANN-NLC.
additional flexibility in crafting more intelligent solutions for different network operational scenarios. Fig. 5 shows a block diagram of this model with four subcarriers and \(\ell=2\).
## 5 Numerical Results
### System Model
The simulation setup includes typical Tx, channel, and Rx modules for a DSCM transmission scenario. To focus on fiber nonlinearity, we consider ideal electrical components and an ideal Mach-Zehnder modulator. Also, the DACs/ADCs are ideal, with no quantization or clipping effects. The dual-polarization fiber channel is modeled by the split-step Fourier method [17] with adaptive step size and a maximum nonlinear phase rotation of 0.05 degrees to ensure sufficient accuracy. At the Rx side, the sequence output from carrier recovery (CR) is used to train and evaluate the nonlinear equalizer. Standard DSP algorithms are employed for detection and processing of the received signal at the Rx. The block diagram of such a system is depicted in Fig. 6. Note that, to preserve the ability of a conventional coherent receiver to correct correlated phase noise (which, in our case, originates from nonlinear propagation), we deploy the carrier recovery before the ANN-NLC module. This ensures that the linear equalization already provides the nonlinear phase compensation capability of a coherent receiver without a dedicated NLC equalizer. Hence, the neural network compensation gain is reported on top of the best linear performance.
To evaluate and optimize the different algorithms, we focus on a single-channel DSCM system operating at 32 Gbaud with four subcarriers and a uniform 16QAM modulation format. The signal on each subcarrier is digitally generated using a root-raised cosine pulse shape with a roll-off factor of \(1/16\). The link consists of 40 spans of standard single-mode fiber of 80 km length, each followed by an optical amplifier with a noise figure of \(NF=6\) dB. Furthermore, for most of the numerical results we consider a symmetric dispersion map, in which 50% of the total dispersion is digitally pre-compensated at the transmitter side. This, in turn, allows us to simplify the diagrams and avoid unnecessary complications at this stage. Section 6 is devoted to the extension of this design to other dispersion maps, where we provide ANN-NLC structures optimized for a post-CDC scenario. The training and evaluation of the models are performed at 2 dBm launch power. This is close to the optimal launch power when DBP at 2 Sa/sym with 1 and 2 steps per span is employed to benchmark these results. Note that for this setup, a Q-factor of \(Q=7.88\) dB is obtained at the optimal launch power of 1 dBm in the absence of fiber-nonlinearity compensation.
### ANN Optimization Workflow
All the models here are trained and evaluated on simulation data using \(2^{18}\) symbols per digital subcarrier. The training and evaluation data are generated from pseudo-random streams with different generator seeds using a permuted congruential generator (PCG64). Also, 20% of the training dataset was set aside for validation of the model during the training
Figure 6: System model for DSCM system.
process. The root mean squared error (RMSE) between the model outputs and the difference between the transmitted symbols and the received values constitutes the loss, which is used in the back-propagation process to update the model coefficients. All models were trained using the Adam optimizer with learning-rate = 0.001 for at least 200 epochs, unless terminated by the early-stopping mechanism that tracks the validation loss and prevents over-fitting. We mainly used mini-batches of length 512 in obtaining these results. Minor performance differences were observed when exploring mini-batch sizes as low as 128 and as high as 2048, provided that the learning rate and the number of epochs were optimized accordingly. Additionally, we employed a learning-rate scheduler that reduces the learning rate by 20% when the loss stops decreasing for 10 epochs. For each model, the coefficients associated with the lowest validation loss across all training epochs were saved at the end of the training stage.
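A minimal PyTorch sketch of this workflow (RMSE loss, Adam, plateau scheduler, best-model saving) is given below; the loaders, the file name, and the omitted early-stopping logic are assumptions:

```python
import torch

def train(model, train_loader, val_loader, epochs=200, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    # reduce the learning rate by 20% after 10 epochs without improvement
    sched = torch.optim.lr_scheduler.ReduceLROnPlateau(opt, factor=0.8, patience=10)
    best = float("inf")
    for _ in range(epochs):
        model.train()
        for x, err in train_loader:   # err = transmitted - received soft symbols
            opt.zero_grad()
            loss = torch.sqrt(torch.mean((model(x) - err) ** 2))  # RMSE loss
            loss.backward()
            opt.step()
        model.eval()
        with torch.no_grad():
            val = sum(torch.sqrt(torch.mean((model(x) - e) ** 2)).item()
                      for x, e in val_loader) / len(val_loader)
        sched.step(val)
        if val < best:                # keep coefficients with the lowest val. loss
            best = val
            torch.save(model.state_dict(), "best_model.pt")
```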
In order to explore the performance-versus-complexity tradeoff, more than a thousand models per design are trained and tested in this work for different block sizes. Table 1 lists the ANN-core hyper-parameters and their sweep ranges. The sweep resolution of each parameter within each participating ANN core is individually adjusted for each model structure. We use scatter plots reflecting the performance-complexity of different realizations of each model based on a common test dataset, obtained from a separate transmission simulation using noise and bit sequences from different random-number generator algorithms and seeds. The envelopes associated with the best-performing models at various complexity constraints are generated in order to compare the different architectures.
### Numerical Results Comparison
In this part, we provide a performance-versus-complexity comparison of the various optimized ANN equalizers for different block sizes. Fig. 7 illustrates the inference cost of the various models in terms of RMpS. From an ANN design point of view, it is important to allocate additional complexity efficiently in order to improve performance, since the majority of the models demonstrate subpar efficiency. As an example, increasing the hidden size of the LSTM may not be an efficient strategy to improve performance if the filter-tap size \(k\) is not large enough to capture the nonlinear memory.
It can be seen that using a separate ANN core per subcarrier did not significantly change the outcome of SC compared to CC. Their best performance remains around 8.8 dB, and the performance-complexity tradeoffs of these models remain very similar for different block sizes. One can clearly observe various advantages of the modular solutions compared to the black-box approaches represented by CC and SC. Both modular solutions offer clear superiority in both the low- and high-complexity regions, while the M2 structure, specifically, demonstrates a superior performance-complexity trade-off across all complexity regions among all structures. Note that the performance of iSPM-only compensation is capped around 8.6 dB. Employing additional cores to compensate for
| Layer | Learnable Parameters | Value / Sweep Range |
| --- | --- | --- |
| CNN | num_layers | 1 |
| CNN | num_output_channels | [10:200] |
| CNN | kernel_size | [5:30] |
| LSTM | num_hidden_state | [10:300] |
| LSTM | num_output_features | [10:300] |
| MLP | num_hidden_layers | [0:2] |
| MLP | layer_size | [10:100] |

Table 1: List of hyper-parameters for an ANN core that operates on a sequence of length \(T=2t+1\) with \(t\in[5:40]\).
inter-subcarrier nonlinearities due to the immediate neighboring subcarrier \(\ell=1\) on each side (iXPM1) can significantly increase the maximum performance to around 9 dB, unlocking a 0.4 dB gain compared to iSPM-only compensation at 2 dBm launch power. We further explored another scenario by incorporating the iXPM contributions of two subcarriers from each side. However, the results are omitted as we did not observe a meaningful additional performance gain in this scenario. This result is corroborated by the findings in Section 6, where we show the perturbation coefficients corresponding to the iXPM contributions of the second neighbors for this setup: the magnitude of these coefficients is around 10 dB lower than the iXPM contributions of the immediate neighbors.
Note that the best performance obtained from the modular solutions is generally 0.2 dB higher than that of CC and SC. This suggests that these solutions can learn more efficiently from limited training data due to a more generalized structure with fewer trainable parameters. The performance-complexity trade-off in the mid-tier performance region, with Q around 8.6 dB, is particularly noteworthy, as the non-modular designs can compete with M1 there. Note that this region is the onset of switching away from iSPM-only NLC to incorporating iXPM nonlinearities from the immediate neighboring subcarriers. This suggests that the CC and SC architectures can converge to moderately efficient structures by internally sharing the resources of iXPM compensation between neighboring subcarriers. This type of resource sharing is one of the main distinctive features of the M2 model compared to M1, and it is reflected in M2's superior efficiency in this region.
In order to demonstrate the advantages of block-processing, performance-versus-complexity evaluations for different block sizes are illustrated in Fig. 8, taking the M2 model as an example. A substantial complexity reduction for a very small performance loss can be obtained by parallelizing the trained ANN core and deploying the solution with a block size \(N>1\), provided that the model is sufficiently generalized in the training stage. In the high-performance region
Figure 7: A comparison of performance as a function of RMpS amongst different explored ANN-NLC solutions for DSCM.
(\(Q>8.8\)), we can achieve a complexity reduction by a factor of 20 for \(N=1024\). However, the complexity advantage shrinks in lower performance regions (e.g., a factor of 5 for \(Q\sim 8.4\)), where the best models generally have a smaller filter-tap size and incorporate less nonlinear memory.
Next, the performance envelopes of all models as a function of the number of training parameters are depicted in Fig. 9. The number of training parameters relates to the memory required to store and retrieve model parameters as the link configuration is modified over time. This metric can also measure the efficiency of a model in providing a certain performance level with the fewest independent parameters, which is also closely tied to the generalization of the ANN. For a mid-tier performance of around 8.6 dB, the modular solutions generally require approximately 2 to 4 times fewer parameters than CC and SC. Note that the CC and SC solutions have access to all subcarrier information and are not limited to the iSPM+iXPM1 architectures of M1 and M2. However, this assumed _advantage_ results in a significant loss for the CC and SC solutions when the number of training parameters is below 40,000. We attempted to close this performance gap by increasing the number of epochs for the non-modular solutions and further optimizing the learning rates, without much success. This may indicate that practical ANN design in the presence of various limitations and constraints for this problem is far from a plug-and-play approach and requires careful design using insights from the physical model.
Figure 8: Impact of block-size on performance vs. complexity of the best M2 models.
Figure 9: A comparison of performance as a function of number of training parameters amongst different explored ANN-NLC solutions for DSCM.
Finally, we explore the applicability of the proposed models to similar links with different optical launch powers. Fig. 10 illustrates the performance as a function of the optical launch power, where multiple graphs are presented for the best models obtained under different complexity budget constraints. As stated earlier, all models were trained at 2 dBm optical launch power. Note that the selected models from all structures demonstrate good generalization and can provide nonlinear performance gain over a wide range of launch powers, spanning from the linear regime to deep nonlinearity. We provide DBP performance plots with different numbers of steps per span (StPS) to benchmark the proposed ANN-NLC structures. Note that a complexity comparison with other NLC methods such as DBP is not performed here, since a fair comparison requires the development of efficient hardware-friendly versions of the ANNs after model compression, pruning, and weight quantization, which is beyond the scope of this paper.
## 6 Impact of Dispersion Map
So far, we have shown the application of ANN-NLC equalizers in transmission scenarios with a symmetric dispersion map. As depicted in Fig. 11, the windows of symbols of interest from the target and interfering subcarriers for iSPM and iXPM triplet features under a symmetric dispersion map are symmetric around the reference symbols. This is the main reason that symmetric windows of soft values are selected as input to the iSPM and iXPM cores in the previous designs. However, in the presence of an asymmetric dispersion map, such as post dispersion compensation, the most significant regions for the iXPM features are neither symmetric nor centered around the reference symbol of the interfering subcarrier, as shown in Fig. 12. Hence, one needs to adjust the input features for each iXPM core according to the dispersion-induced group delay between the involved subcarriers. Another approach is to introduce delay lines at the input and output of the
Figure 10: Performance of different ANN-NLC solutions as a function of optical launch power for given complexity constraint budgets.
ANN equalizer and maintain symmetric input windows for the ANN cores. Specifically, to ensure proper operation of the equalizer in this case, we introduce a progressive delay amounting to half of the dispersion-induced group delay between subcarriers prior to the ANN equalization. To reverse this impact, another delay line is added at the output of the ANN equalizer. Note that the window size for each iXPM core needs to be as large as the maximum group delay between the associated subcarriers. This ensures that the symbols that impacted the target symbol are appropriately involved. Fig. 13 illustrates a block diagram of this solution.
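A sketch of the corresponding delay computation is given below; the walk-off formula \(\Delta\tau=\beta_{2}\,2\pi\Delta f\,L\) and all parameter names are our assumptions for illustration, not values from the paper:

```python
import numpy as np

def progressive_delays(n_subc, spacing_hz, beta2, length_m, baud):
    """Per-subcarrier delay (in symbols) applied before the ANN equalizer:
    half of the dispersion-induced group delay relative to band center;
    the inverse delay line after the equalizer undoes it."""
    f_off = (np.arange(n_subc) - (n_subc - 1) / 2) * spacing_hz  # Hz offsets
    tau = beta2 * 2 * np.pi * f_off * length_m                   # walk-off in s
    return np.rint(tau / 2 * baud).astype(int)                   # symbol periods
```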
We have modified the simulation setup to provide a performance comparison of selected ANN equalizers between symmetric and post-CDC in Fig. 14. Similar trends are observed for the CC and M2 solutions with post-CDC, showing the applicability and effectiveness of the proposed solution. Note that similar performance gains are achieved by switching from iSPM
Figure 11: Magnitude of iSPM (\(\ell=0\)) and iXPM (\(\ell\neq 0\)) perturbation coefficients \(C_{m,n}^{(\ell)}\) for the DSCM simulation setup with sym-CDC: (a) \(\ell=-2\), (b) \(\ell=-1\), (c) \(\ell=0\), (d) \(\ell=1\), (e) \(\ell=2\).
Figure 13: Delay adjustment for post-CDC dispersion map.
to iSPM+iXPM1 nonlinear equalization for these schemes. Additionally, we observe that the complexity of all NLC solutions with post-CDC is higher than that of their respective counterparts with symmetric CDC for a given performance level. This can be attributed to the larger memory of the iSPM and iXPM nonlinearities in the link with post-CDC, which is corroborated by comparing the domain and magnitude of the perturbation coefficients presented in Fig. 11 and Fig. 12.
## 7 Conclusion
In this work, we studied different ANN approaches for the compensation of intra-channel nonlinearities in DSCM systems. By training and evaluating various models over a comprehensive grid of parameters, we explored the performance-versus-complexity tradeoff of each approach and discussed their scalability, potentials, and weaknesses. Starting from black-box approaches in designing ANN models, we gradually moved towards modular designs inspired by the perturbation analysis of fiber nonlinearity. This approach proved more efficient in training, producing better models for a given training dataset, as well as in inference complexity and model storage requirements. We further demonstrated a pragmatic approach to adapt the proposed solutions to links with asymmetric dispersion maps. While these networks were exclusively designed for fiber nonlinearity compensation, a similar approach can be further studied in the context of component nonlinearity compensation in DSCM systems.
Note that all these designs can be further optimized along other avenues. Notable approaches such as weight pruning and quantization, with a future extension to quantization-aware training in the form of quantized and binary neural networks, can be explored to drastically reduce the complexity of these models. Nevertheless, we believe that our study provides a fair
Figure 14: Comparison of the impact of the dispersion map on the effectiveness of ANN-NLC, using the envelopes associated with the best-performing models at different block sizes.
comparison and a good starting step towards that path by focusing on the macro design of ANN equalizers tailored to the characteristics of the fiber nonlinearity distortion mechanism in multi-subcarrier systems.
---

2308.02985 | Introducing Feature Attention Module on Convolutional Neural Network for Diabetic Retinopathy Detection | Susmita Ghosh, Abhiroop Chatterjee | 2023-08-06T01:52:46Z | http://arxiv.org/abs/2308.02985v1

# Introducing Feature Attention Module on Convolutional Neural Network for Diabetic Retinopathy Detection
###### Abstract
Diabetic retinopathy (DR) is a leading cause of blindness among diabetic patients. Deep learning models have shown promising results in automating the detection of DR. In the present work, we propose a new methodology that integrates a feature attention module with a pretrained VGG19 convolutional neural network (CNN) for more accurate DR detection. Here, the pretrained net is fine-tuned with the proposed feature attention block. The proposed module aims to leverage the complementary information from various regions of fundus images to enhance the discriminative power of the CNN. The said feature attention module incorporates an attention mechanism which selectively highlights salient features from images and fuses them with the original input. The simultaneous learning of attention weights for the features, and thereupon the combination of attention-modulated features within the feature attention block, facilitates the network's ability to focus on relevant information while reducing the impact of noisy or irrelevant features. The performance of the proposed method has been evaluated on a widely used dataset for diabetic retinopathy classification, the APTOS (Asia Pacific Tele-Ophthalmology Society) DR Dataset. Results are compared with/without the attention module, as well as with other state-of-the-art approaches. Results confirm that the introduction of the fusion module (fusing the feature attention module with the CNN) improves the accuracy of DR detection, achieving an accuracy of 95.70%.
Diabetic Retinopathy, CNN, VGG19, APTOS
## I Introduction
Diabetic retinopathy (DR) is a major cause of blindness among working-age adults worldwide. Early and accurate detection of DR is crucial for timely intervention and effective management of the disease. Researchers have used various techniques, such as neural networks, fuzzy sets, and nature-inspired computing, with the aim of enhancing the accuracy of object tracking, object segmentation, and object detection tasks, making them relevant for computer vision applications [1, 2, 3, 4]. Several researchers have contributed to the field of medical image analysis for improved disease diagnosis and detection. In recent years, deep neural nets [5] have shown remarkable advancements in various computer vision tasks, including medical image analysis. In this article, we focus on DR detection using deep learning, with specific emphasis on the integration of a novel feature attention block with a pretrained VGG19. Detecting DR involves the analysis of retinal fundus images, which can provide valuable insights into the progression of the disease.
Traditional approaches to DR detection relied on handcrafted features and shallow classifiers, which often struggled to capture the intricate patterns and subtle characteristics indicative of diabetic retinopathy. However, deep learning models (Fig. 1), with their ability to automatically learn features from raw data, demonstrated great potential in improving the accuracy of DR detection.
In the present work, we propose a new feature attention block, at a distinct position, that enhances the discriminative power of the pretrained VGG19 architecture for DR detection. The attention block selectively amplifies the informative regions within the input image, allowing the network to focus on relevant features while suppressing irrelevant/noisy information. By incorporating this block after the VGG19 backbone, the model's ability to extract and emphasize crucial features increases.
As mentioned, the primary objective of this article is to design a neural network model that provides better DR detection accuracy. To do so, the performance of the proposed method is compared with that of the baseline VGG19 architecture without the feature attention block. We have considered the popularly used APTOS 2019 [7] Blindness Detection Challenge dataset. Results are compared in terms of various performance metrics and establish the superiority of the newly designed model in comparison to seven other state-of-the-art techniques.
The remainder of this paper is organized as follows: Section 2 provides a review of related works in the field of DR detection using deep learning. Section 3 presents the methodology, including a detailed description of the proposed feature attention block, its integration with the VGG19 architecture, and the transfer learning technique used. Section 4 discusses the experimental setup mentioning dataset used, evaluation metrics considered and details of parameters taken. Analysis of results has
Fig. 1: General representation of a deep neural network [6]
been put in Section 5. Finally, Section 6 concludes the paper.
By introducing a new feature attention block to enhance the performance of pretrained models on the APTOS dataset, our research contributes to the ongoing efforts in improving the early detection of diabetic retinopathy.
## II Related Research
Gulshan et al. [8] and Abramoff et al. [9] proposed CNN models for automated screening of diabetic retinopathy. They trained a deep neural network using a dataset of retinal fundus images. The CNN architecture consisted of multiple convolutional layers followed by max pooling layers to extract features from images. The extracted features were then passed through fully connected layers for classification. Quellec et al. [10] introduced a joint segmentation and classification approach for DR lesions. They used a model that combined CNNs with a conditional random field framework, performing segmentation and classification simultaneously.
Burlina et al. [11] focused on the classification of age-related macular degeneration (AMD) severity, a condition related to DR. They used a deep model called _DeepSeeNet_, which employed a CNN architecture trained on a large dataset of retinal images. Chaudhuri et al. [12] provided a comprehensive analysis of deep learning models for DR detection. They explored various CNN architectures, including VGGNet, Inception-v3, and ResNet, compared the performance of these models, and discussed their strengths and limitations. Gargeya and Leng [13] presented a review of deep learning-based approaches for DR screening. They discussed different CNN architectures, including AlexNet, GoogLeNet, and ResNet.
## III Methodology
In the present work, we propose a neural network model for DR detection using deep learning specifically focusing on the integration of a new feature attention block with a pretrained VGG19 architecture. Block diagram of the proposed method is shown in Fig. 2.
As shown in Fig. 2, the first layer applied is the weighted global average pooling, which takes the features obtained from the pretrained VGG19 as input. It computes the weighted average of the feature maps along the spatial dimensions, resulting in a tensor with reduced spatial dimensions _(batch_size, 1, 1, channels)_. This layer focuses on capturing the importance of each channel based on the weighted average.
Thereafter, two dense layers are employed. The first one uses a rectified linear unit (ReLU) activation function, reducing the number of channels to _chan_dim/ratio_. This layer introduces non-linearity and compresses the channel dimension; the resulting tensor has the shape _(batch_size, 1, 1, chan_dim/ratio)_. The second dense layer utilizes a sigmoid activation function to restore the channel dimension, producing an output tensor of shape _(batch_size, 1, 1, chan_dim)_. This layer determines the channel-wise importance through a sigmoidal activation that assigns attention weights to each channel. These attention weights (importance) are applied to the feature maps.
To incorporate the computed attention weights, the input feature tensor is multiplied element-wise with the output tensor obtained from the second dense layer. The resulting tensor retains the original spatial dimensions of the input feature while emphasizing important channel activations.
Its shape is _(batch_size, height, width, chan_dim)_.
To preserve original information and facilitate gradient flow, a skip connection is incorporated to add the attention feature tensor obtained from the element-wise multiplication to the input feature tensor. This process forms a residual connection, ensuring that the original information is retained while incorporating channel attention. The output tensor has the same shape as that of the input feature, i.e., _(batch_size, height, width, chan_dim)_.
### Feature Attention Mechanism:
The new feature attention block, along with skip connection, leverages the power of weighted global average pooling to further capture the importance of each channel in a feature map while preserving valuable channel-wise details. By calculating the weighted average activation of each channel across spatial dimensions, the technique effectively condenses spatial information.
The skip connection is vital for information flow and preserves important details throughout the network. This is achieved through the addition of the attention feature tensor to the original input feature map.
By applying subsequent operations involving dense layers with ReLU and sigmoid activations, attention weights are generated to provide a measure of relevance/importance of each channel. As mentioned, these weights are then applied to the feature map using element-wise multiplication, dynamically amplifying the contribution of informative channels while attenuating the less important ones. This selective emphasis on important channels
Fig. 2: Block diagram of the feature attention module. The output dimensions are shown on the left side.
enables the model to focus on relevant features and extract discriminative information effectively.
The proposed methodology enhances the performance of DR detection through the integration of transfer learning and the attention module. Transfer learning involves utilizing the pretrained VGG19 network (trained on the ImageNet dataset) as a feature extractor. This model captures powerful visual representations that are generalizable to various tasks. The attention module is appended after the VGG19 layers (Fig. 3), selectively emphasizing important channels in the extracted features. Overall, the model benefits from both the learned representations and the specialized attention mechanism. The block diagram of the proposed methodology, augmented with the feature attention module, is shown in Fig. 3. Relevant working details are described below.
Let \(I\) be the input tensor of dimension _H\(\times\)W\(\times\)C_, where \(H\) and \(W\), respectively, represent the height and width of the feature map, and \(C\) represents the number of input channels. The average pooling operation (_a_p_) is defined as:
\[a\_p=\frac{1}{H\times W}\sum_{j=1}^{H}\sum_{k=1}^{W}i\_f(i,j,k), \tag{1}\]
where \(i\) ranges from \(1\) to _batch_size_, \(j\) ranges from \(1\) to the height, and \(k\) ranges from \(1\) to the width of the feature map; _i_f_ represents the _input_feature_. The resulting tensor _a_p_ has the shape (_batch_size_, _1_, _1_, _chan_dim_). _batch_size_ represents the number of images present in each batch during training.
The output of the first dense layer, _fc1_, is written as:
\[fc1(i,1,1,k)=ReLU\Big(\sum_{d}W1(k,d)\cdot a\_p(i,1,1,d)+b1(k)\Big). \tag{2}\]
This reduces the dimensionality of the input by multiplying the average-pooled features _a_p(i,1,1,d)_ with the corresponding weights _W1(k,d)_, summing them up, and adding the bias term _b1(k)_. The ReLU activation function is then applied to the sum. The resulting tensor _fc1_ has the shape (_batch_size_, _1_, _1_, _chan_dim/ratio_).
Similarly, _fc2_ represents the output of the second dense layer (Eq. 3). It restores the channel dimension by multiplying the features from the first dense layer _fc1(i,1,1,d)_ with the corresponding weights _W2(k,d)_, summing them up, and adding the bias term _b2(k)_. The sigmoid activation function is then applied to the sum. The resulting tensor _fc2_ has the shape (_batch_size_, _1_, _1_, _chan_dim_).
\[fc2(i,1,1,k)=sig\Big(\sum_{d}W2(k,d)\cdot fc1(i,1,1,d)+b2(k)\Big). \tag{3}\]
In Eqs. (2) and (3), \(k\) refers to the index of the output feature maps in the dense layer, ranging from \(1\) to the number of output channels; \(d\) refers to the index of the input feature maps in the dense layer, ranging from \(1\) to the number of input channels of that layer.
The attention feature, _a_f_, is computed as,
\[\small\begin{split} a\_f(i,j,k)=fc2(i,1,1,k)*i\_f(i,j,k)\end{split} \tag{4}\]
Eq. 4 computes the element-wise multiplication between the features obtained from the second dense layer _fc2(i,1,1,k)_ and the input features _i_f(i,j,k)_. This operation applies the attention weights obtained from the second dense layer to each element of the input feature map.
Finally, the attention feature tensor (_a_f_) is added element-wise to the original input feature map, _i_f_, and is given as:
\[\small\begin{split} a\_f(i,j,k)=a\_f(i,j,k)+i\_f(i,j,k).\end{split} \tag{5}\]
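To make the module concrete, below is a minimal sketch of the feature attention block corresponding to Eqs. (1)-(5), written in TensorFlow/Keras. The use of Keras and the exact layer choices are our assumptions for illustration; any channel-reduction ratio inside the block is not specified in the text, so both dense layers keep _chan_dim_ outputs, matching the stated shapes.

```python
import tensorflow as tf
from tensorflow.keras import layers

def feature_attention_block(i_f):
    """Feature attention block (FAB): squeeze -> excite -> scale -> residual.

    i_f: feature map of shape (batch, H, W, C), e.g., the VGG19 output.
    """
    chan_dim = i_f.shape[-1]
    # Eq. (1): global average pooling, keeping shape (batch, 1, 1, C).
    a_p = layers.GlobalAveragePooling2D(keepdims=True)(i_f)
    # Eq. (2): dense layer with ReLU activation.
    fc1 = layers.Dense(chan_dim, activation="relu")(a_p)
    # Eq. (3): dense layer with sigmoid, giving per-channel weights in (0, 1).
    fc2 = layers.Dense(chan_dim, activation="sigmoid")(fc1)
    # Eq. (4): element-wise reweighting, broadcast over the spatial grid.
    a_f = i_f * fc2
    # Eq. (5): residual addition of the original input features.
    return a_f + i_f
```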
### Fine-tuning the Proposed Model:
As stated earlier, the proposed methodology employs transfer learning by utilizing a pretrained VGG19 model as the feature extractor. The feature attention module is introduced to emphasize important channels within the extracted features. The modified features are then processed through dense layers for classification. By fine-tuning the pretrained model specifically for the task at hand, the model benefits from both the general visual representations learned from pretraining and the specialized attention mechanism.
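A sketch of how the full model could be assembled is given below, reusing the `feature_attention_block` sketch above. The classifier-head width (256) is our assumption, while the optimizer, learning rate, and loss follow Table 2.

```python
import tensorflow as tf
from tensorflow.keras import Model, layers
from tensorflow.keras.applications import VGG19

def build_model(num_classes=5, input_shape=(224, 224, 3)):
    # Pretrained VGG19 backbone (ImageNet weights) used as the feature extractor.
    base = VGG19(weights="imagenet", include_top=False, input_shape=input_shape)
    x = feature_attention_block(base.output)       # FAB from the sketch above
    x = layers.GlobalAveragePooling2D()(x)
    x = layers.Dense(256, activation="relu")(x)    # head width is an assumption
    out = layers.Dense(num_classes, activation="softmax")(x)
    return Model(base.input, out)

model = build_model()
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="categorical_crossentropy", metrics=["accuracy"])
```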
## IV Experimental Setup
As stated, experiments with the proposed neural network model are conducted using the APTOS dataset. The details of this experimental setup are described below.
### (A) Dataset Used:
The APTOS dataset consists of a large collection of high-resolution retinal fundus images along with corresponding diagnostic labels provided by expert ophthalmologists. Table 1 shows the number of images used for the different categories. There are five classes in the image dataset: Mild, Moderate, No DR, Proliferate, and Severe. A total of 6034 images were taken from the 5 classes. A sample image from each class is shown in Fig. 4.
Fig.3: Block diagram of the proposed methodology (augmented with feature attention block)
### (B) Image Preprocessing:
Each image is resized to 224\(\times\)224 and normalized by dividing the pixel values by 255.
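For instance, this preprocessing step could be implemented as follows; PIL is our choice of image library here, since the paper does not name one.

```python
import numpy as np
from PIL import Image

def preprocess(path, size=(224, 224)):
    """Resize a fundus image to 224x224 and scale pixel values to [0, 1]."""
    img = Image.open(path).convert("RGB").resize(size)
    return np.asarray(img, dtype=np.float32) / 255.0
```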
### (C) Performance Metrics Considered:
The models' performance is evaluated using metrics such as accuracy, precision, recall, F1-score, Top-1 error (%), and loss values. The confusion matrix is also considered.
### (D) Model Training:
The dataset is split into training and test sets. The split is performed with a test size of 20% and stratified sampling to maintain class balance. During training, the models' weights are updated using backpropagation and gradient descent to minimize the loss function.
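A sketch of the stratified 80/20 split described above, assuming `images` and one-hot `labels` arrays produced by the preprocessing step; the random seed is an illustrative assumption.

```python
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(
    images, labels,
    test_size=0.20,                  # 20% held out for testing
    stratify=labels.argmax(axis=1),  # stratified sampling keeps class balance
    random_state=42,                 # seed chosen only for reproducibility
)
```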
## V Analysis of Results
To evaluate the effectiveness of incorporating the feature attention module, the APTOS data is considered. Experimentation was done on an NVIDIA A100 tensor core GPU. A total of 20 simulations have been performed and the average scores are reported in Table 3. The table shows promising results across the various performance indices, yielding 96% (rounded) accuracy for the enhanced VGG19 (with the Feature Attention Block, denoted as _FAB_). Fig. 5 shows the variation in training and validation accuracy of VGG19 and VGG19+FAB. These curves indicate better performance when the new feature attention module is added to the pretrained model.
Likewise, Fig. 6 shows the variation in training and validation losses for VGG19 and VGG19+FAB, and confirms that the proposed feature attention module provides faster convergence with lower loss.
The accuracy values obtained with/without FAB are depicted in Table 4. This table establishes that
\begin{table}
\begin{tabular}{|c|c|} \hline Metrics & VGG19 + FAB (rounded) \\ \hline Precision & 0.96 \\ Recall & 0.96 \\ F1-score & 0.96 \\ Accuracy (\%) & 96 \\ Top-1 error (\%) & 4.0 \\ \hline \end{tabular}
\end{table} TABLE III: Results obtained using APTOS 2019 dataset
Fig. 4: Images taken from APTOS dataset. (a) Mild, (b) Moderate, (c) No DR, (d) Proliferate, and (e) Severe
\begin{table}
\begin{tabular}{|c|c|} \hline Parameters & Values \\ \hline Learning Rate & 0.0001 \\ Batch Size & 16 \\ Max Epochs & 40 \\ Optimizer & Adam \\ Loss Function & Categorical Cross-entropy \\ \hline \end{tabular}
\end{table} TABLE II: Experimental setup
Fig. 5: Performance (accuracy curves) comparison between VGG19 and VGG19+FAB
\begin{table}
\begin{tabular}{|l|l|} \hline Neural Network Model & Accuracy (\%) \\ \hline VGG19 & 94.80 \\ VGG19 +FAB & **95.70** \\ \hline \end{tabular}
\end{table} TABLE IV: Accuracy values obtained with and without FAB for VGG19
Fig. 6: Performance (loss curves) comparison between VGG19 and VGG19+FAB
incorporation of the feature attention module has an edge over the baseline VGG19 achieving an accuracy of 95.70%.
For visual illustration, predictions made by our fine-tuned model are shown in Fig. 7 for sample images from four different classes. The figures corroborate our earlier findings on the superiority of the proposed network.
The confusion matrix obtained for five different stages of DR using the proposed model (Fig. 8) indicates its efficacy for DR detection.
As stated, the performance of the proposed model has also been compared with seven other state-of-the-art methods; the corresponding accuracy values are shown in Table 5. This table confirms the superiority of our proposed neural network model, augmented with the attention block, for DR detection. Overall, the results demonstrate the efficacy of incorporating a feature attention module, showcasing its potential in aiding early detection and diagnosis of DR.
## VI Conclusion
The present work introduces a feature attention module in CNN for diabetic retinopathy detection. By fine-tuning the integrated feature attention block with a pretrained VGG19 model, we have achieved improved accuracy in identifying the severity levels of DR. The methodology leverages transfer learning from the VGG19. Additionally, the introduction of the channel attention block allows the model to selectively emphasize important channels, enhancing its ability to identify relevant features. Through experimentation and evaluation on the APTOS dataset, our proposed methodology demonstrated superior performance compared to the standalone VGG19 model and state-of-the-art methods confirming that the integration of the feature attention module could lead to enhanced discriminative power and improved accuracy in diabetic retinopathy detection.
Future work may explore employing the proposed methodology on larger and more diverse datasets.
## Acknowledgement
A part of this work has been supported by the IDEAS - Institute of Data Engineering, Analytics and Science Foundation, The Technology Innovation Hub at the Indian Statistical Institute, Kolkata, through sanctioning Project No. /ISI/TIH/2022/55/ dated September 13, 2022.
|
2303.01640 | Hierarchical Graph Neural Networks for Particle Track Reconstruction | We introduce a novel variant of GNN for particle tracking called Hierarchical
Graph Neural Network (HGNN). The architecture creates a set of higher-level
representations which correspond to tracks and assigns spacepoints to these
tracks, allowing disconnected spacepoints to be assigned to the same track, as
well as multiple tracks to share the same spacepoint. We propose a novel
learnable pooling algorithm called GMPool to generate these higher-level
representations called "super-nodes", as well as a new loss function designed
for tracking problems and HGNN specifically. On a standard tracking problem, we
show that, compared with previous ML-based tracking algorithms, the HGNN has
better tracking efficiency performance, better robustness against inefficient
input graphs, and better convergence compared with traditional GNNs. | Ryan Liu, Paolo Calafiura, Steven Farrell, Xiangyang Ju, Daniel Thomas Murnane, Tuan Minh Pham | 2023-03-03T00:14:32Z | http://arxiv.org/abs/2303.01640v1 | # Hierarchical Graph Neural Networks for Particle Track Reconstruction
###### Abstract
We introduce a novel variant of GNN for particle tracking--called Hierarchical Graph Neural Network (HGNN). The architecture creates a set of higher-level representations which correspond to tracks and assigns spacepoints to these tracks, allowing disconnected spacepoints to be assigned to the same track, as well as multiple tracks to share the same spacepoint. We propose a novel learnable pooling algorithm called GMPool to generate these higher-level representations called "super-nodes", as well as a new loss function designed for tracking problems and HGNN specifically. On a standard tracking problem, we show that, compared with previous ML-based tracking algorithms, the HGNN has better tracking efficiency performance, better robustness against inefficient input graphs, and better convergence compared with traditional GNNs.
## 1 Introduction
In the upcoming High Luminosity Phase of the Large Hadron Collider (HL-LHC) [1, 2], the average number of inelastic proton-proton collisions per bunch \(\langle\mu\rangle\) (pile-up) is expected to reach 200 in the new silicon-only Inner Tracker (ITk). This will pose a significant challenge in track reconstruction due to the limited computational resources [3]. Since charged particle reconstruction ("particle tracking") dominates the CPU resources dedicated to event offline reconstruction, a new and efficient algorithm for event reconstruction becomes an urgent need. The HEP.TrkX project [4] and its successor the Exa.TrkX project [5] have studied Graph Neural Networks (GNNs) for charged particle tracking, and excellent performance on the TrackML dataset [6] has been demonstrated in Refs. [7, 8] and more recently on ITk simulation, referred to as GNN4ITk [9].
However, despite the success of GNN-based tracking algorithms, there is much in these techniques that can be improved. In particular, GNN tracking suffers from two types of errors: (1) **broken tracks** (one true track split into multiple segments) and (2) **merged tracks** (a track contains spacepoints of multiple particles). In its nature, the GNN4ITk tracking pipeline prototype [9] is a process of reducing the number of edges; starting from a graph constructed for example by a multi-layer perceptron (MLP) embedding model, filter MLP and GNN edge classifiers are applied to filter out fake edges (i.e. connecting two spacepoints of distinct particles). Thus, broken tracks are more difficult to remove than merged tracks since they can only be resolved by including more edges during the graph construction stage. As such, the pipeline is very sensitive to the efficiency of the graph constructed. Furthermore, the nature of
message-passing neural networks [10] utilized in the GNN4ITk pipeline, precludes the passing of information between disconnected components, such as the two ends of a broken track. Broken tracks not only limit the performance of edge-cut-based algorithms but also inhibit the full capability of the message-passing mechanism.
In this paper, we present a novel machine learning model called the Hierarchical Graph Neural Network (HGNN) 1 for particle tracking to address the aforementioned problems. Similar to the pooling operation often used in Convolutional Neural Networks (CNN), the HGNN pools nodes into clusters called "super-nodes" to enlarge the "receptive field" of nodes, resolving the problem that a "flat" GNN cannot pass messages between disconnected components. Unlike the case of image processing where pooled pixels are already arranged on a 2D grid, the pooled super-nodes cannot use a graph induced by the original graph since disconnected components will remain disconnected. Thus we propose to utilize a K-nearest-neighbors (KNN) algorithm to build the super-graph among super-nodes to facilitate message passing between super-nodes. Furthermore, the HGNN offers us a new approach to track building, by defining a bipartite matching between nodes (spacepoints) and super-nodes (tracks). We measure the performance of this matching procedure against several baselines and show that it can not only recover broken tracks, but also produce fewer fake tracks from merging.
Footnote 1: The code is now available on GitHub.
## 2 Related Work
### The GNN4ITk Pipeline for Charged Particle Tracking
The GNN4ITk pipeline [8, 9] aims to accelerate particle tracking by utilizing geometric deep learning models. The pipeline as implemented can be divided into four steps: firstly, graph construction takes place to build a graph on the input point-cloud. With one possible construction technique, an MLP is trained to embed spacepoints into a high-dimensional space such that spacepoints belonging to the same particle get closer in that space; a fixed-radius graph is then built and passed to a "filter" MLP. The filter takes in spacepoint doublets and prunes the graph down by an \(O(10)\) factor in the number of edges. A graph neural network is used to prune the graph further down. Finally, the tracks are built by running a connected components algorithm on the pruned graphs, and ambiguities are resolved by a walk-through algorithm based on topological sorting.
### Graph Pooling Algorithms
As discussed in section 1, the pooling algorithm is a crucial piece of the HGNN architecture. Graph pooling has long been studied in the context of graph neural networks as generating graph
Figure 1: The HGNN can not only shorten the distance between two nodes and effectively enlarge the receptive field but also pass messages between disconnected components
representations requires some global pooling operation. Ying _et al._ introduced DiffPool [11], which pools the graph by aggregating nodes according to weights generated by a GNN. DiffPool pools the graph to a fixed number of super-nodes, and the pooled graph has a dense adjacency matrix. Lee _et al._ proposed SAGPool [12], which pools a graph by selecting the top-\(k\) ranked nodes and uses the induced subgraph. However, SAGPool does not support soft assignment, i.e., assigning a node to multiple super-nodes. The granularity is completely defined by the hyperparameter \(k\), so it also pools to a fixed number of super-nodes. Diehl proposed EdgePool [13], which greedily merges nodes according to edge scores. It is capable of generating a graph that is sparse and variable in size. These pooling algorithms and their features are presented in table 1, along with our proposed pooling technique, described in section 3.1.
### Hierarchical Graph Neural Networks
Hierarchical structures of graph neural networks have been studied in the context of many graph learning problems; some of them utilize deterministic pooling algorithms or take advantage of preexisting structures to efficiently create the hierarchy [14, 15, 16, 17, 18], while the others [19, 20, 21] create the hierarchy in a learnable fashion. Compared with solely graph pooling operations [11], by retaining both pooled and original representations one has the capability of simultaneously performing node predictions and learning cluster-level information. Furthermore, as shown in [20], introducing hierarchical structures can solve the long-existing problem of the incapability of capturing long-range interactions in graphs. Empirical results also show that Hierarchical GNNs have better convergence and training stability compared with traditional flat GNNs.
## 3 Model Architecture
In order to build the model, there are several challenges that must be tackled, namely, pooling the graph, message passing in the hierarchical graph, and designing a loss function for such a model. In the following section, we introduce our proposed methods for each of them.
### Gaussian Mixture Pooling
In order to provide the features in table 1, we propose a method that leverages the connected components algorithm and a Gaussian Mixture Model. The algorithm takes a set of node embeddings as input. The embeddings are then used to calculate edge-wise similarities defined as \(s_{ij}=\tanh^{-1}(\vec{v}_{i}\cdot\vec{v}_{j})\). We hypothesize that the graph consists of two types of edges, in-cluster edges and out-of-cluster edges. Then, given the distribution of node similarities, we fit a Gaussian Mixture Model (GMM) to obtain estimates of the in-cluster and out-of-cluster distributions \(p_{in}(s)\) and \(p_{out}(s)\). An example distribution is plotted in Fig. 3(b). We then solve for \(s_{cut}\) by \(\ln(p_{in}(s_{cut}))-\ln(p_{out}(s_{cut}))=r\), where \(r\) is a hyperparameter defining the resolution
\begin{table}
\begin{tabular}{l l c c c c} \hline \hline Tracking Goal & Feature & DiffPool & SAGPool & EdgePool & GMPool (ours) \\ \hline Subquadratic scaling & Sparse & ✗ & ✓ & ✓ & ✓ \\ End-to-end trainable & Differentiable & ✓ & ✓ & ✓ & ✓ \\ Variable event size & Adaptive number & ✗ & ✗ & ✓ & ✓ \\ & of clusters & & & & \\ Many hits to many & Soft assignment & ✓ & ✗ & ✗ & ✓ \\ particles relationship & & & & & \\ \hline \hline \end{tabular}
\end{table}
Table 1: Graph Pooling Algorithms
of the pooling algorithm. The \(s_{cut}\) value that gives the best separation of in- and out-of-cluster Gaussians is chosen, and edges with scores below this value are cut. The connected components algorithm follows, and the components \(C_{\alpha}\) of the cut graph are regarded as super-nodes.
To construct super-edges, super-node embeddings are first defined as the centroids of the connected components in the embedding space, i.e., \(\vec{V}_{\alpha}=\frac{\vec{V}_{\alpha}^{\prime}}{\left\|\vec{V}_{\alpha}^{\prime}\right\|_{2}}\) where \(\vec{V}_{\alpha}^{\prime}=\frac{1}{N(C_{\alpha})}\sum_{i\in C_{\alpha}}\vec{v}_{i}\). To connect nodes with super-nodes, similar to the method used in [22], we maintain sparsity by constructing the bipartite graph with the k-nearest-neighbors algorithm. Differentiability is restored by weighting each of the edges according to the distance in the embedding space, i.e., \(w_{i\alpha}=\frac{\exp(v_{i}\cdot V_{\alpha})}{\sum_{\alpha\in\mathcal{N}(i)}\exp(v_{i}\cdot V_{\alpha})}\). Finally, node features are aggregated into super-node features according to the graph weights. The super-graph construction is identical except that the k-nearest-neighbor search has the same source and destination set. Thanks to its edge-cut nature, GMPool has sub-quadratic time complexity and runs in milliseconds on our graphs.
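A simplified sketch of GMPool under the stated assumptions: a two-component 1D GMM is fit to the edge similarities, the threshold \(s_{cut}\) is solved on a grid, and super-nodes and bipartite edges are built with connected components and k-nearest neighbors. Variable names and the grid search are illustrative; a trained pipeline would run on GPU tensors rather than NumPy arrays.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.neighbors import NearestNeighbors
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components
from scipy.stats import norm

def gmpool(v, edges, r=0.0, k=3):
    """v: (N, D) unit node embeddings; edges: (E, 2) int array; r: resolution."""
    src, dst = edges[:, 0], edges[:, 1]
    s = np.arctanh(np.clip((v[src] * v[dst]).sum(-1), -0.999, 0.999))
    # Fit a 2-component GMM: in-cluster vs out-of-cluster edge similarities.
    gmm = GaussianMixture(n_components=2).fit(s.reshape(-1, 1))
    mu = gmm.means_.ravel()
    sd = np.sqrt(gmm.covariances_).ravel()
    hi, lo = np.argmax(mu), np.argmin(mu)
    # Solve ln p_in(s_cut) - ln p_out(s_cut) = r on a grid.
    grid = np.linspace(s.min(), s.max(), 1000)
    gap = norm.logpdf(grid, mu[hi], sd[hi]) - norm.logpdf(grid, mu[lo], sd[lo])
    s_cut = grid[np.argmin(np.abs(gap - r))]
    # Cut low-similarity edges; connected components become super-nodes.
    keep = s >= s_cut
    adj = coo_matrix((np.ones(keep.sum()), (src[keep], dst[keep])),
                     shape=(len(v), len(v)))
    n_super, comp = connected_components(adj, directed=False)
    # Super-node embeddings: L2-normalized component centroids.
    V = np.zeros((n_super, v.shape[1]))
    np.add.at(V, comp, v)
    V /= np.linalg.norm(V, axis=1, keepdims=True)
    # Bipartite node -> super-node edges via k-nearest neighbors.
    nn = NearestNeighbors(n_neighbors=min(k, n_super)).fit(V)
    _, bipartite = nn.kneighbors(v)
    return comp, V, bipartite
```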
### Hierarchical Message Passing Mechanism
In general, it is possible to stack arbitrarily many pooling layers to obtain a hierarchy of arbitrary height. However, the nature of tracking problems suggests that a spacepoint-particle hierarchy will be sufficient for tracking problems. Thus, the pooling layer in this work is kept to be of two levels. For each of the nodes, we update it by aggregating adjacent edge features, super-nodes
Figure 3: (a): schematic overview of the GMPool algorithm. (b): Distribution of edge similarities. Edges connecting spacepoints of the same particle are colored in yellow and otherwise blue.
Figure 2: A schematic overview of the HGNN architecture. A flat GNN encoder is used to transform features and embed spacepoints. A pooling algorithm (GMPool) follows to build the hierarchy using the embedded vectors. Finally, hierarchical message passing is applied iteratively to obtain final representations of both nodes and super-nodes.
features weighted by bipartite graph weights, and its own features. For each of the super-nodes, it is updated by aggregating super-edge features weighted by super graph weights, node features weighted by bipartite graph weights, and its own features. For edges and super-edges, their update rule is identical to the one used in interaction networks.
### Bipartite Classification Loss
At this point, the HGNN architecture can be trained on traditional tasks such as node embedding thanks to GMPool's differentiability. This feature is useful for apples-to-apples comparisons between flat and hierarchical GNNs under the same training regimes. However, to exploit the full potential of the HGNN, we propose a new training regime for it specifically. The most natural way of doing track labeling with the HGNN is to use super-nodes as track candidates. For each of the spacepoint-track pairs (bipartite edges), a score is produced to determine if it belongs to a specific track. A maximum-weight bipartite matching algorithm is used to match tracks to super-nodes to define the "truth" for each of the bipartite edges. The loss is given by the binary cross-entropy loss defined by the matched truth. An auxiliary hinge embedding loss is also used for the first few warm-up epochs to help the embedding space initialize stably.
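A sketch of this loss is given below. Here the matching weight between a particle and a super-node is taken to be the summed scores of that particle's hits on the super-node; this weight definition is our assumption, since it is not spelled out above.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def bipartite_classification_loss(scores, hit_particle, eps=1e-7):
    """scores: (n_hits, n_super) predicted membership probabilities;
    hit_particle: (n_hits,) true particle id of each spacepoint."""
    particles = np.unique(hit_particle)
    # Matching weight (particle, super-node) = summed scores of its hits.
    W = np.stack([scores[hit_particle == p].sum(0) for p in particles])
    rows, cols = linear_sum_assignment(-W)  # maximum-weight bipartite matching
    match = dict(zip(particles[rows], cols))
    # Matched truth: hit i is "on" only for the super-node matched to its particle.
    truth = np.zeros_like(scores)
    for i, p in enumerate(hit_particle):
        if p in match:
            truth[i, match[p]] = 1.0
    s = np.clip(scores, eps, 1.0 - eps)
    return float(-(truth * np.log(s) + (1 - truth) * np.log(1 - s)).mean())
```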
## 4 Results
### Dataset
In this paper, the dataset used to report the performance of HGNN is that of the TrackML Challenge[6]. The TrackML dataset contains events of simulated proton-proton collisions at \(\sqrt{s}=14\mathrm{TeV}\) with pile-up \(\langle\mu\rangle=200\). Details can be found in [6]. The HGNN has been evaluated in two scenarios; the first scenario is called TrackML-full and contains \(2200\) filter-processed events, each with approximately \(O(7k)\) particles and \(O(120k)\) spacepoints. In addition to that, an extensive test of robustness has been done on Bipartite Classifiers, using a simplified dataset TrackML-1GeV. We take the subgraph induced by removing any track below \(p_{T}=1\mathrm{GeV}\). Such an event typically consists of \(O(1k)\) particles and \(O(10k)\) spacepoints.
### Evaluation
The evaluation metrics are tracking efficiency and fake rate. A particle is matched to a track candidate if **(1)**: the track candidate contains more than \(50\%\) of the spacepoints left by the particle and **(2)**: more than \(50\%\) of the spacepoints in the track candidate are left by the particle. A track is called reconstructable if it **(1)** left more than \(5\) spacepoints in the detector and **(2)** has \(p_{T}\geq 1\mathrm{GeV}\). The tracking efficiency and fake rate (FR) are thus defined as:
\[\mathrm{Eff}:=\frac{N(\mathrm{matched},\mathrm{reconstructable})}{N(\mathrm{ reconstructable})}\qquad\qquad\mathrm{FR}:=1-\frac{N(\mathrm{matched})}{N(\mathrm{track\ candidates})}\]
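The double-majority matching and the two scores can be computed as in the sketch below; `reconstructable` is assumed to be precomputed from the hit-count and \(p_{T}\) requirements.

```python
import numpy as np

def efficiency_and_fake_rate(candidates, hit_particle, reconstructable):
    """candidates: list of arrays of hit ids; hit_particle: hit id -> particle id;
    reconstructable: set of particle ids passing the nhits / pT requirements."""
    true_counts = {}
    for p in hit_particle.values():
        true_counts[p] = true_counts.get(p, 0) + 1
    matched_particles, n_matched = set(), 0
    for cand in candidates:
        parts, counts = np.unique([hit_particle[h] for h in cand],
                                  return_counts=True)
        p, c = parts[counts.argmax()], counts.max()
        # (1) candidate holds >50% of the particle's hits, and
        # (2) >50% of the candidate's hits come from that particle.
        if c > 0.5 * true_counts[p] and c > 0.5 * len(cand):
            n_matched += 1
            matched_particles.add(p)
    eff = len(matched_particles & reconstructable) / len(reconstructable)
    fr = 1.0 - n_matched / len(candidates)
    return eff, fr
```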
### Experiments
We evaluate four models on the TrackML-full dataset. **(1)**: Embedding Flat GNN (E-GNN), **(2)**: Embedding Hierarchical GNN (E-HGNN), **(3)**: Bipartite Classifier Hierarchical GNN (BC-HGNN), **(4)**: Edge Classifier Flat GNN (EC-GNN). The first two serve for apples-to-apples comparisons between flat and hierarchical GNNs - the loss function is the same as the hinge embedding loss used for the metric-learning graph construction; track candidates are selected by applying a spatial clustering algorithm (H-DBSCAN). The third model represents the state-of-the-art hierarchical GNN for particle tracking; the last one is identical to the GNN4ITk pipeline and serves as a baseline. The performance of a truth-level connected-components (Truth-CC) track builder is also reported; this takes in filter-processed graphs and prunes them down with the ground truth. It is a measure of the graph quality and also an upper bound on the edge-classifier flat GNN performance. The timing results are obtained on a single NVIDIA A100 GPU. To test
robustness against edge inefficiency, we remove 0%, 10%, 20%, 30%, 40%, and 50% of the edges and train the Bipartite Classifier model to compare it with Truth-CC.
## 5 Conclusion
In this paper, we introduced a novel graph neural network called the hierarchical graph neural network. We also proposed a new learnable pooling algorithm called GMPool to construct the hierarchy. The architecture successfully resolves two issues: flat GNNs are incapable of capturing long-range interactions, and the GNN particle tracking pipeline is sensitive to graph efficiency. Creating higher-level representations both shortens the distance between distant nodes in graphs and offers new methods of building track candidates. Empirical results demonstrate that Hierarchical GNNs have superior performance compared with flat GNNs. The hierarchical GNN is available at [https://github.com/ryanliu30/HierarchicalGNN](https://github.com/ryanliu30/HierarchicalGNN) and has been integrated into the common framework of the GNN4ITk pipeline [23].
## 6 Acknowledgements
This research was supported in part by: the U.S. Department of Energy's Office of Science, Office of High Energy Physics, under Contracts No. DE-AC02-05CH11231 (CompHEP Exa.TrkX). This research used resources of the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy Office of Science User Facility located at Lawrence Berkeley National Laboratory, operated under Contract No. DE-AC02-05CH11231.
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline Models & E-GNN & E-HGNN & BC-HGNN & EC-GNN & Truth-CC \\ \hline Efficiency & 94.61\% & 95.60\% & **97.86\%** & 96.35\% & 97.75\% \\ Fake Rate & 47.31\% & 47.45\% & **36.71\%** & 55.58 \% & 57.67\% \\ Time (sec.) & 2.17 & 2.64 & 1.07 & **0.22** & 0.07 \\ \hline \hline \end{tabular}
\end{table}
Table 2: TrackML-Full experiment results. Comparison between embedding models shows that the hierarchical structure can enhance the expressiveness of GNNs. Comparing Bipartite Classifiers with Truth-CC, we can see that Bipartite Classifiers can recover some of the tracks that cannot be reconstructed by edge-based GNNs. The timing results also show that the HGNN scales to large input graphs of HL-LHC events competitively with other embedding GNNs
\begin{table}
\begin{tabular}{l l l l l l l} \hline \hline Percent Edge Removed & 0\% & 10\% & 20\% & 30\% & 40\% & 50\% \\ \hline BC Efficiency & 98.55\% & 98.39\% & 97.68\% & 96.63\% & 95.10\% & 92.79\% \\ BC Fake Rate & 1.23\% & 1.55\% & 2.13\% & 3.10\% & 4.75\% & 7.31\% \\ Truth-CC Efficiency & 98.72\% & 96.21\% & 92.31\% & 85.81\% & 77.26\% & 64.81\% \\ Truth-CC Fake Rate & 5.87\% & 15.53\% & 24.40\% & 33.48\% & 42.99\% & 53.12\% \\ \hline \hline \end{tabular}
\end{table}
Table 3: TrackML-1GeV extensive robustness test results. We can see that Bipartite Classifiers (BC) are very robust against inefficiencies, whereas edge-based GNN’s performance is strongly influenced by missing edges. |
2301.00327 | Convergence and Generalization of Wide Neural Networks with Large Bias | This work studies training one-hidden-layer overparameterized ReLU networks
via gradient descent in the neural tangent kernel (NTK) regime, where the
networks' biases are initialized to some constant rather than zero. The
tantalizing benefit of such initialization is that the neural network will
provably have sparse activation through the entire training process, which
enables fast training procedures. The first set of results characterizes the
convergence of gradient descent training. Surprisingly, it is shown that the
network after sparsification can achieve as fast convergence as the dense
network, in comparison to the previous work indicating that the sparse networks
converge slower. Further, the required width is improved to ensure gradient
descent can drive the training error towards zero at a linear rate. Secondly,
the networks' generalization is studied: a width-sparsity dependence is
provided which yields a sparsity-dependent Rademacher complexity and
generalization bound. To our knowledge, this is the first sparsity-dependent
generalization result via Rademacher complexity. Lastly, this work further
studies the least eigenvalue of the limiting NTK. Surprisingly, while it is not
shown that trainable biases are necessary, trainable bias, which is enabled by
our improved analysis scheme, helps to identify a nice data-dependent region
where a much finer analysis of the NTK's smallest eigenvalue can be conducted.
This leads to a much sharper lower bound on the NTK's smallest eigenvalue than
the one previously known and, consequently, an improved generalization bound. | Hongru Yang, Ziyu Jiang, Ruizhe Zhang, Zhangyang Wang, Yingbin Liang | 2023-01-01T02:11:39Z | http://arxiv.org/abs/2301.00327v2 | # Convergence and Generalization of Wide Neural Networks with Large Bias
###### Abstract
This work studies training one-hidden-layer overparameterized ReLU networks via gradient descent in the neural tangent kernel (NTK) regime, where the networks' biases are initialized to some constant rather than zero. The tantalizing benefit of such initialization is that the neural network will provably have sparse activation through the entire training process, which enables fast training procedures. The first set of results characterizes the convergence of gradient descent training. Surprisingly, it is shown that the network after sparsification can achieve as fast convergence as the dense network, in comparison to the previous work indicating that the sparse networks converge slower. Further, the required width is improved to ensure gradient descent can drive the training error towards zero at a linear rate. Secondly, the networks' generalization is studied: a width-sparsity dependence is provided which yields a sparsity-dependent Rademacher complexity and generalization bound. To our knowledge, this is the first sparsity-dependent generalization result via Rademacher complexity. Lastly, this work further studies the least eigenvalue of the limiting NTK. Surprisingly, while it is not shown that trainable biases are necessary, trainable bias, which is enabled by our improved analysis scheme, helps to identify a nice data-dependent region where a much finer analysis of the NTK's smallest eigenvalue can be conducted. This leads to a much sharper lower bound on the NTK's smallest eigenvalue than the one previously known and, consequently, an improved generalization bound.
## 1 Introduction
The literature on sparse neural networks dates back to the early work of LeCun et al. (1989), which showed that a fully-trained neural network can be pruned while preserving generalization. Recently, training sparse neural networks has been receiving increasing attention since the discovery of the lottery ticket hypothesis (Frankle and Carbin, 2018). In their work, they showed that if we repeatedly train and prune a neural network and then rewind the weights to the initialization, we are able to find a sparse neural network that can be trained to match the performance of its dense counterpart. However, this method is more of a proof of concept and is computationally expensive for any practical purposes. Nonetheless, this inspired further interest in the machine learning community to develop efficient methods to find the sparse pattern at initialization such that the performance of the sparse network can match the dense network after training (Lee
et al., 2018; Wang et al., 2019; Tanaka et al., 2020; Liu and Zenke, 2020; Chen et al., 2021; He et al., 2017; Liu et al., 2021b).
On the other hand, instead of trying to find desirable sparsity patterns at the initialization, another line of research has been focusing on inducing the sparsity pattern naturally and then cleverly utilizing such sparse structure via techniques like high-dimensional geometric data structures, sketching or even quantum algorithms to speed up per-step gradient descent training (Song et al., 2021a, b; Hu et al., 2022; Gao et al., 2022). In this line of theoretical studies, the sparsity is induced by a shifted ReLU, which is the same as initializing the bias of the network's linear layer to some large constant instead of zero and holding the bias fixed throughout the entire training. By Gaussian concentration, at the initialization, the total number of activated neurons (i.e., neurons for which ReLU outputs a non-zero value) will be _sublinear_ in the total number \(m\) of neurons, as long as the bias is initialized to be \(C\sqrt{\log m}\) for some appropriate constant \(C\). We call this _sparsity-inducing initialization_. If the network is in the NTK regime, each neuron weight will exhibit only microscopic change after training, and thus the sparsity can be preserved throughout the entire training process. Therefore, during the entire training process, only a sublinear number of the neuron weights need to be updated, which can significantly speed up the training process.
The focus of this work is along the above line of theoretical studies of sparsely activated overparameterized neural networks, and we address the two main research limitations in the aforementioned studies: (1) prior work indicates that the sparse networks have **slower convergence guarantees** than the dense networks, despite the fact that per-step gradient descent training can be made cheaper, and (2) the previous works only provided convergence guarantees while **lacking the generalization analysis** which is of central interest in deep learning theory. Thus, our study will fill the above important gaps by providing a comprehensive study of training one-hidden-layer sparsely activated neural networks in the NTK regime with (a) a finer analysis of the convergence; and (b) the first generalization bound for such sparsely activated neural networks after training, with a sharp bound on the restricted smallest eigenvalue of the limiting NTK. We elaborate our technical contributions as follows:
1. **Convergence.** Surprisingly, Theorem 3.1 shows that the network after sparsification can achieve as fast convergence as the original network. This is made possible by the fact that the sparse networks allow a much more relaxed condition on the learning rate, which was not discovered in the previous work. The theorem further provides an improved required width to ensure that gradient descent can drive the training error towards zero at a linear rate. At the core of our convergence result is a finer analysis where the required network width to ensure convergence is made much smaller, with an improvement upon the previous result by a factor of \(\widetilde{\Theta}(n^{8/3})\) under appropriate bias initialization, where \(n\) is the sample size. This relies on our novel development of (1) a better characterization of the activation flipping probability via an analysis of the Gaussian anti-concentration based on the location of the strip and (2) a finer analysis of the initial training error.
2. **Generalization.** Theorem 3.8 studies the generalization of the network after gradient descent training, where we characterize how the network width should depend on activation sparsity, which leads to a sparsity-dependent localized Rademacher complexity and generalization bound. When the sparsity parameter is set to zero (i.e., the activation is not sparsified), our bound matches previous analysis up to logarithmic factors. To our knowledge, this is the first sparsity-dependent generalization result via localized Rademacher complexity. In addition, compared with previous works, our result yields a better width dependence by a factor of
\(n^{10}\). This relies on (1) the usage of symmetric initialization and (2) a finer analysis of the weight matrix change in Frobenius norm in Lemma 3.13.
3. **Restricted Smallest Eigenvalue.** Theorem 3.8 shows that the generalization bound heavily depends on the smallest eigenvalue \(\lambda_{\min}\) of the limiting NTK. However, the previously known worst-case lower bounds on \(\lambda_{\min}\) under data separation have a \(1/n^{2}\) explicit dependence in (Oymak and Soltanolkotabi, 2020; Song et al., 2021), making the generalization bound vacuous. Instead, we note that our improved convergence analysis in Theorem 3.1 can handle trainable bias. Relying on our new result that the change of bias is also diminishing with a \(O(1/\sqrt{m})\) dependence on the network width \(m\), we show that even though the biases are allowed to be updated by gradient descent, the network's activation remains sparse during the entire training. Based on that, our Theorem 3.11 establishes a much sharper lower bound restricted to a data-dependent region, which is sample-size-independent. This hence yields a worst-case generalization bound for _bounded_ loss of \(O(1)\) as opposed to \(O(n)\) in previous analysis, given that the label vector is in this region, which can be achieved with simple label-shifting.
### Further Related Works
Besides the works mentioned in the introduction, another work related to ours is (Liao and Kyrillidis, 2022) where they also considered training a one-hidden-layer neural network with sparse activation and studied its convergence. However, different from our work, their sparsity is induced by sampling a random mask at each step of gradient descent whereas our sparsity is induced by non-zero initialization of the bias terms. Also, their network has no bias term, and they only focus on studying the training convergence but not generalization. We discuss additional related works here.
**Training Overparameterized Neural Networks.** Over the past few years, a tremendous amount of effort has been made to study training overparameterized neural networks. A series of works have shown that if the neural network is wide enough (polynomial in depth, number of samples, etc.), gradient descent can drive the training error towards zero at a fast rate either explicitly (Du et al., 2018, 2019; Ji and Telgarsky, 2019) or implicitly (Allen-Zhu et al., 2019; Zou and Gu, 2019; Zou et al., 2020) using the neural tangent kernel (NTK) (Jacot et al., 2018). Further, under some conditions, the networks can generalize (Cao and Gu, 2019). Under the NTK regime, the trained neural network can be well-approximated by its first-order Taylor approximation around the initialization, and Liu et al. (2020) showed that this transition to linearity is a result of a diminishing Hessian 2-norm with respect to width. Later on, Frei and Gu (2021) and Liu et al. (2022) showed that closeness to initialization is sufficient but not necessary for gradient descent to achieve fast convergence as long as the non-linear system satisfies some variants of the Polyak-Lojasiewicz condition. On the other hand, although the NTK offers a good convergence explanation, it contradicts practice since (1) the neural networks need to be unrealistically wide and (2) the neuron weights merely change from the initialization. As Chizat et al. (2019) pointed out, this "lazy training" regime can be explained as a mere effect of scaling. To go beyond the NTK, there are other works considering the mean-field limit (Chizat and Bach, 2018; Mei et al., 2019; Chen et al., 2020) and feature learning (Allen-Zhu and Li, 2020, 2022; Shi et al., 2021; Telgarsky, 2022).
**Sparse Neural Networks in Practice.** Besides finding a fixed sparse mask at the initialization as we mentioned in introduction, on the other hand, dynamic sparse training allows the sparse mask to be updated during training, e.g., (Mocanu et al., 2018; Mostafa and Wang, 2019; Evci et al., 2020; Jayakumar et al., 2020; Liu et al., 2021a,c,d).
## 2 Preliminaries
**Notations.** We use \(\left\lVert\cdot\right\rVert_{2}\) to denote vector or matrix 2-norm and \(\left\lVert\cdot\right\rVert_{F}\) to denote the Frobenius norm of a matrix. When the subscript of \(\left\lVert\cdot\right\rVert\) is unspecified, it is default to be the 2-norm. For matrices \(A\in\mathbb{R}^{m\times n_{1}}\) and \(B\in\mathbb{R}^{m\times n_{2}}\), we use \([A,B]\) to denote the row concatenation of \(A,B\) and thus \([A,B]\) is a \(m\times(n_{1}+n_{2})\) matrix. For matrix \(X\in\mathbb{R}^{m\times n}\), the row-wise vectorization of \(X\) is denoted by \(\vec{X}=[x_{1},x_{2},\ldots,x_{m}]^{\top}\) where \(x_{i}\) is the \(i\)-th row of \(X\). For a given integer \(n\in\mathbb{N}\), we use \([n]\) to denote the set \(\{0,\ldots,n\}\), i.e., the set of integers from \(0\) to \(n\). For a set \(S\), we use \(\overline{S}\) to denote the complement of \(S\). We use \(\mathcal{N}(\mu,\sigma^{2})\) to denote the Gaussian distribution with mean \(\mu\) and standard deviation \(\sigma\). In addition, we use \(\widetilde{O},\widetilde{\Theta},\widetilde{\Omega}\) to suppress (poly-)logarithmic factors in \(O,\Theta,\Omega\).
### Problem Formulation
Let the training set be \((X,y)\) where \(X=(x_{1},x_{2},\ldots,x_{n})\in\mathbb{R}^{d\times n}\) denotes the feature matrix consisting of \(n\) \(d\)-dimensional vectors, and \(y=(y_{1},y_{2},\ldots,y_{n})\in\mathbb{R}^{n}\) consists of the corresponding \(n\) response variables. We assume \(\left\lVert x_{i}\right\rVert_{2}\leq 1\) and \(y_{i}=O(1)\) for all \(i\in[n]\). We use a one-hidden-layer neural network and consider the regression problem with the square loss function:
\[f(x;W,b) :=\frac{1}{\sqrt{m}}\sum_{r=1}^{m}a_{r}\sigma(\langle w_{r},x \rangle-b_{r}),\] \[L(W,b) :=\frac{1}{2}\sum_{i=1}^{n}(f(x_{i};W,b)-y_{i})^{2},\]
where \(W\in\mathbb{R}^{m\times d}\) with its \(r\)-th row being \(w_{r}\), \(b\in\mathbb{R}^{m}\) is a vector with \(b_{r}\) being the bias of \(r\)-th neuron, \(a_{r}\) is the second layer weight, and \(\sigma(\cdot)\) denotes the ReLU activation function. We initialize the neural network by \(W_{r,i}\sim\mathcal{N}(0,1)\) and \(a_{r}\sim\text{Uniform}(\{\pm 1\})\) and \(b_{r}=B\) for some value \(B\geq 0\) of choice, for all \(r\in[m],\ i\in[d]\). We train only the parameters \(W\) and \(b\) via gradient descent (i.e., with the linear layer \(a_{r},\ r\in[m]\) fixed), the updates are given by
\[[w_{r},b_{r}](t+1)=[w_{r},b_{r}](t)-\eta\frac{\partial L(W(t),b(t))}{\partial[w_ {r},b_{r}]}.\]
By the chain rule, we have \(\frac{\partial L}{\partial w_{r}}=\sum_{i=1}^{n}\frac{\partial L}{\partial f(x_{i};W,b)}\frac{\partial f(x_{i};W,b)}{\partial w_{r}}\). The gradient of the loss with respect to the network output is \(\frac{\partial L}{\partial f(x_{i};W,b)}=f(x_{i};W,b)-y_{i}\), and the network gradients with respect to the weights and bias are
\[\frac{\partial f(x;W,b)}{\partial w_{r}} =\frac{1}{\sqrt{m}}a_{r}\mathbb{I}(w_{r}^{\top}x\geq b_{r}),\] \[\frac{\partial f(x;W,b)}{\partial b_{r}} =-\frac{1}{\sqrt{m}}a_{r}\mathbb{I}(w_{r}^{\top}x\geq b_{r}),\]
where \(\mathbb{I}(\cdot)\) is the indicator function. We use the shorthand \(\mathbb{I}_{r,i}:=\mathbb{I}(w_{r}^{\top}x_{i}\geq b_{r})\) and we define the **NTK** matrix \(H\) as
\[H_{i,j}(W,b) :=\left\langle\frac{\partial f(x_{i};W,b)}{\partial[W,b]},\frac{ \partial f(x_{j};W,b)}{\partial[W,b]}\right\rangle \tag{2.1}\] \[=\frac{1}{m}\sum_{r=1}^{m}(\langle x_{i},x_{j}\rangle+1)\mathbb{I }_{r,i}\mathbb{I}_{r,j},\]
and the **infinite-width version**\(H^{\infty}(B)\) of the NTK matrix \(H\) is given by
\[H_{ij}^{\infty}(B):=\operatorname*{\mathbb{E}}_{w}\left[(\langle x_{i},x_{j} \rangle+1)\mathbb{I}(w^{\top}x_{i}\geq B,w^{\top}x_{j}\geq B)\right].\]
Let \(\lambda(B):=\lambda_{\min}(H^{\infty}(B))\). We define the matrix \(Z(W,b)\in\mathbb{R}^{m(d+1)\times n}\) as
\[Z(W,b):=\frac{1}{\sqrt{m}}\begin{bmatrix}\mathbb{I}_{1,1}a_{1}\widetilde{x}_ {1}&\ldots&\mathbb{I}_{1,n}a_{1}\widetilde{x}_{n}\\ \vdots&\ddots&\vdots\\ \mathbb{I}_{m,1}a_{m}\widetilde{x}_{1}&\ldots&\mathbb{I}_{m,n}a_{m}\widetilde{ x}_{n}\end{bmatrix},\]
where \(\widetilde{x}_{i}:=[x_{i}^{\top},-1]^{\top}\). Note that \(H(W,b)=Z(W,b)^{\top}Z(W,b)\). Hence, the gradient descent step can be written as
\[\overrightarrow{[W,b]}(t+1)=\overrightarrow{[W,b]}(t)-\eta Z(t)(f(t)-y),\]
where \([W,b](t)\in\mathbb{R}^{m\times(d+1)}\) denotes the row-wise concatenation of \(W(t)\) and \(b(t)\) at the \(t\)-th step of gradient descent, and \(Z(t):=Z(W(t),b(t))\).
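As a concrete reference for the setup above, the following sketch implements the forward pass and one full-batch gradient descent step for the one-hidden-layer network with trainable weights and biases; it is a minimal NumPy illustration of the formulas in this section, not an optimized implementation.

```python
import numpy as np

def init_params(m, d, B, rng):
    W = rng.standard_normal((m, d))        # W_{r,i} ~ N(0, 1)
    b = np.full(m, float(B))               # b_r = B (sparsity-inducing init)
    a = rng.choice([-1.0, 1.0], size=m)    # second layer, held fixed
    return W, b, a

def forward(X, W, b, a):
    """X: (d, n) inputs. Returns f(x_i) for all i and the activation pattern."""
    pre = W @ X - b[:, None]               # (m, n) pre-activations
    act = np.maximum(pre, 0.0)             # shifted ReLU
    return a @ act / np.sqrt(len(b)), pre > 0

def gd_step(X, y, W, b, a, eta):
    f, I = forward(X, W, b, a)             # I[r, i] = 1{w_r^T x_i >= b_r}
    err = f - y
    coef = (a[:, None] * I) * err / np.sqrt(len(b))   # (m, n)
    W -= eta * (coef @ X.T)                # dL/dw_r =  sum_i coef[r, i] x_i
    b += eta * coef.sum(axis=1)            # dL/db_r = -sum_i coef[r, i]
    return W, b, 0.5 * float((err ** 2).sum())
```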
## 3 Main Theory
### Convergence and Sparsity
We first present the convergence of gradient descent for the sparsely activated neural networks. Surprisingly, we show that the sparse network can achieve as fast convergence as the dense network compared to the previous work (Song et al., 2021) which, on the other hand, shows the sparse networks converge slower than the dense networks.
**Theorem 3.1** (Convergence).: _Let the learning rate \(\eta\leq O(\frac{\lambda(B)\exp(B^{2})}{n^{2}})\), and the bias initialization \(B\in[0,\sqrt{0.5\log m}]\). Assume \(\lambda(B)=\lambda_{0}\exp(-B^{2}/2)\) for some \(\lambda_{0}>0\) independent of \(B\). Then, if the network width satisfies \(m\geq\widetilde{\Omega}\left(\lambda_{0}^{-4}n^{4}\exp(B^{2})\right)\), with probability at least \(1-\delta-e^{-\Omega(n)}\) over the randomness in the initialization,_
\[\forall t:L(W(t),b(t))\leq(1-\eta\lambda(B)/4)^{t}L(W(0),b(0)).\]
The assumption on \(\lambda(B)\) in Theorem 3.1 can be justified by (Song et al., 2021, Theorem F.1) which shows that under some mild conditions, the NTK's least eigenvalue \(\lambda(B)\) is positive and has an \(\exp(-B^{2}/2)\) dependence. Given this, Theorem 3.1 in fact implies that the convergence rate is _independent_ of the sparsity parameter due to the extra \(\exp(B^{2})\) term in the learning rate. This means that the network with sparse activation can achieve as fast convergence as the original network. Our study further handles trainable bias (with constant initialization). This is done by a
new result in Lemma A.9 that the change of bias is also diminishing with a \(O(1/\sqrt{m})\) dependence on the network width \(m\).
**Remark 3.2**.: _Theorem 3.1 establishes a much sharper bound on the width of the neural network than previous work to guarantee the linear convergence. To elaborate, our bound only requires \(m\geq\widetilde{\Omega}\left(\lambda_{0}^{-4}n^{4}\exp(B^{2})\right)\), as opposed to the bound \(m\geq\widetilde{\Omega}(\lambda_{0}^{-4}n^{4}B^{2}\exp(2B^{2}))\) in (Song et al., 2021a, Lemma D.9). If we take \(B=\sqrt{0.25\log m}\) (as allowed by the theorem), then our lower bound yields a polynomial improvement by a factor of \(\widetilde{\Theta}(n/\lambda_{0})^{8/3}\), which implies that the neural network width can be much smaller to achieve the same linear convergence._
**Key results in the proof of Theorem 3.1.** The proof mainly consists of a novel analysis on activation flipping probability and a finer upper bound on initial error, as we elaborate.
Like previous works, in order to prove convergence, we need to show that the NTK during training stays close to its initialization. Inspecting the expression of the NTK in Equation (2.1), observe that training affects the NTK by changing the output of each indicator function. We say that the \(r\)-th neuron, \(r\in[m]\), flips its activation with respect to input \(x_{i}\) at the \(k\)-th step of gradient descent if \(\mathbb{I}(w_{r}(k)^{\top}x_{i}-b_{r}(k)>0)\neq\mathbb{I}(w_{r}(k-1)^{\top}x_{i}-b_{r}(k-1)>0)\). The central idea is that for each neuron, as long as the weight and bias movement \(R_{w},R_{b}\) from its initialization is small, the probability of activation flipping (with respect to random initialization) should not be large. We first present the bound on the probability that a neuron flips its activation.
**Lemma 3.3** (Bound on activation flipping probability).: _Let \(B\geq 0\) and \(R_{w},R_{b}\leq\min\{1/B,1\}\). Let \(\widetilde{W}=(\widetilde{w}_{1},\ldots,\widetilde{w}_{m})\) be vectors generated i.i.d. from \(\mathcal{N}(0,I)\) and \(\widetilde{b}=(\widetilde{b}_{1},\ldots,\widetilde{b}_{m})=(B,\ldots,B)\), and weights \(W=(w_{1},\ldots,w_{m})\) and biases \(b=(b_{1},\ldots,b_{m})\) that satisfy for any \(r\in[m]\), \(\left\|\widetilde{w}_{r}-w_{r}\right\|_{2}\leq R_{w}\) and \(|\widetilde{b}_{r}-b_{r}|\leq R_{b}\). Define the event_
\[A_{i,r}=\{\exists w_{r},b_{r}:\left\|\widetilde{w}_{r}-w_{r} \right\|_{2}\leq R_{w},\ |b_{r}-\widetilde{b}_{r}|\leq R_{b},\] \[\mathbb{I}(x_{i}^{\top}\widetilde{w}_{r}\geq\widetilde{b}_{r}) \neq\mathbb{I}(x_{i}^{\top}w_{r}\geq b_{r})\}.\]
_Then, for some constant \(c\),_
\[\mathbb{P}\left[A_{i,r}\right]\leq c(R_{w}+R_{b})\exp(-B^{2}/2).\]
(Song et al., 2021a, Claim C.11) presents a \(O(\min\{R,\exp(-B^{2}/2)\})\) bound on \(\mathbb{P}[A_{i,r}]\). The reason their bound involves the min operation is that \(\mathbb{P}[A_{i,r}]\) can be bounded by the standard Gaussian tail bound and the Gaussian anti-concentration bound separately, taking whichever is smaller. On the other hand, our bound replaces the min operation by the product, which creates a more convenient (and tighter) interpolation between the two bounds. Later, we will show that the maximum movement of neuron weights and biases, \(R_{w}\) and \(R_{b}\), both have a \(O(1/\sqrt{m})\) dependence on the network width, and thus our bound offers an \(\exp(-B^{2}/2)\) improvement, where \(\exp(-B^{2}/2)\) can be as small as \(1/m^{1/4}\) when we take \(B=\sqrt{0.5\log m}\).
**Proof idea of Lemma 3.3.** First notice that \(\mathbb{P}[A_{i,r}]=\mathbb{P}_{x\sim\mathcal{N}(0,1)}[|x-B|\leq R_{w}+R_{b}]\). Thus, here we are trying to solve a fine-grained Gaussian anti-concentration problem with the strip centered at \(B\). The problem with the standard Gaussian anti-concentration bound is that it only provides a worst-case bound and, thus, is location-oblivious. Central to our proof is a Gaussian anti-concentration bound based on the location of the strip, which we describe as follows. Let us first assume \(B>R_{w}+R_{b}\). A simple probability argument yields a bound of \(2(R_{w}+R_{b})\frac{1}{\sqrt{2\pi}}\exp(-(B-R_{w}-R_{b})^{2}/2)\). Since later in the Appendix we show that \(R_{w}\) and \(R_{b}\) have a \(O(1/\sqrt{m})\) dependence (Lemma A.9 bounds the movement for gradient descent and Lemma A.10 for gradient flow) and we only take \(B=O(\sqrt{\log m})\), by making \(m\) sufficiently large, we can safely assume that \(R_{w}\) and \(R_{b}\) are sufficiently small. Thus, the probability can be bounded by \(O((R_{w}+R_{b})\exp(-B^{2}/2))\). However, when \(B<R_{w}+R_{b}\) the above bound no longer holds. But a closer look tells us that in this case \(B\) is close to zero, and thus \((R_{w}+R_{b})\frac{1}{\sqrt{2\pi}}\exp(-B^{2}/2)\approx\frac{R_{w}+R_{b}}{\sqrt{2\pi}}\), which yields roughly the same bound as the standard Gaussian anti-concentration.
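The location-aware bound can be sanity-checked numerically; the sketch below compares a Monte Carlo estimate of \(\mathbb{P}[|x-B|\leq R_{w}+R_{b}]\) against \((R_{w}+R_{b})\exp(-B^{2}/2)\), with the constant \(c\) suppressed and the value of \(R\) chosen only for illustration.

```python
import numpy as np

def flip_probability(B, R, n_samples=10_000_000, seed=0):
    """Monte Carlo estimate of P_{x ~ N(0,1)}[|x - B| <= R] (the event in Lemma 3.3)."""
    x = np.random.default_rng(seed).standard_normal(n_samples)
    return float(np.mean(np.abs(x - B) <= R))

R = 0.01  # stands in for R_w + R_b, small since both scale as O(1/sqrt(m))
for B in [0.0, 1.0, 2.0, 3.0]:
    print(f"B={B}: MC={flip_probability(B, R):.2e}, "
          f"R*exp(-B^2/2)={R * np.exp(-B**2 / 2):.2e}")
```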
Next, our analysis develops the following initial error bound.
**Lemma 3.4** (Initial error upper bound).: _Let \(B>0\) be the initialization value of the biases and all the weights be initialized from standard Gaussian. Let \(\delta\in(0,1)\) be the failure probability. Then, with probability at least \(1-\delta\) over the randomness in the initialization, we have_
\[L(0)=O\left(n+n\left(\exp(-\frac{B^{2}}{2})+\frac{1}{m}\right)\log^{3}(\frac{ 2mn}{\delta})\right).\]
(Song et al., 2021, Claim D.1) gives a rough estimate of the initial error with an \(O(n(1+B^{2})\log^{2}(n/\delta)\log(m/\delta))\) bound. When we set \(B=C\sqrt{\log m}\) for some constant \(C\), our bound improves the previous result by a polylogarithmic factor. The previous bound is not tight in the following two senses: (1) the bias will only decrease the magnitude of the neuron activation instead of increasing it, and (2) when the bias is initialized as \(B\), only roughly \(O(\exp(-B^{2}/2))\cdot m\) neurons will activate. Thus, we can improve the \(B^{2}\) dependence to \(\exp(-B^{2}/2)\).
By combining the above two improved results, we can prove our convergence result with improved lower bound of \(m\) as in Remark 3.2. To relax the condition on the learning rate for the sparse network, a finer analysis of the error terms is conducted in Lemma A.17 by leveraging the fact that the network has sparse activation. This later translates into a wider range of learning rate choice in the convergence analysis. We provide the complete proof in Appendix A.
Lastly, since the total movement of each neuron's bias has a \(O(1/\sqrt{m})\) dependence (shown in Lemma A.9), combined with the number of activated neurons at the initialization, we can show that during the entire training, the number of activated neurons is small.
**Lemma 3.5** (Number of Activated Neurons per Iteration).: _Assume the parameter settings in Theorem 3.1. With probability at least \(1-e^{-\Omega(n)}\) over the random initialization,_
\[|\mathcal{S}_{\mathrm{on}}(i,t)|=O(m\cdot\exp(-B^{2}/2))\]
_for all \(0\leq t\leq T\) and \(i\in[n]\), where \(\mathcal{S}_{\mathrm{on}}(i,t)=\{r\in[m]:\ w_{r}(t)^{\top}x_{i}\geq b_{r}(t)\}\)._
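Lemma 3.5 can be illustrated at initialization with a quick simulation: for unit-norm inputs, \(w^{\top}x\sim\mathcal{N}(0,1)\), so the fraction of activated neurons tracks \(\exp(-B^{2}/2)\) up to constants. The dimensions below are arbitrary choices for illustration.

```python
import numpy as np

m, d, n = 100_000, 20, 5
rng = np.random.default_rng(0)
W = rng.standard_normal((m, d))
X = rng.standard_normal((d, n))
X /= np.linalg.norm(X, axis=0)             # unit-norm inputs
for B in [0.0, np.sqrt(0.25 * np.log(m)), np.sqrt(0.5 * np.log(m))]:
    on = (W @ X >= B).sum(axis=0)          # |S_on(i, 0)| for each sample i
    print(f"B={B:.2f}: mean activated fraction={on.mean() / m:.4f}, "
          f"exp(-B^2/2)={np.exp(-B**2 / 2):.4f}")
```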
### Generalization and Restricted Least Eigenvalue
In this section, we present our sparsity-dependent generalization result. For technical reasons stated in Section 3.3, we use symmetric initialization defined below. Further, we adopt the setting in (Arora et al., 2019) and use a non-degenerate data distribution to make sure the infinite-width NTK is positive definite.
**Definition 3.6** (Symmetric Initialization).: _For a one-hidden layer neural network with \(2m\) neurons, the network is initialized as the following:_
1. _For_ \(r\in[m]\)_, independently initialize_ \(w_{r}\sim\mathcal{N}(0,I)\) _and_ \(a_{r}\sim\mathrm{Uniform}(\{-1,1\})\)_._
2. _For_ \(r\in\{m+1,\ldots,2m\}\)_, let_ \(w_{r}=w_{r-m}\) _and_ \(a_{r}=-a_{r-m}\)_._
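A minimal sketch of symmetric initialization (Definition 3.6) is shown below; since the biases are all initialized to the same value \(B\) and the paired neurons share weights while their output signs cancel, the network output is exactly zero at initialization.

```python
import numpy as np

def symmetric_init(m, d, B, rng):
    """Definition 3.6: 2m neurons in mirrored pairs (w_{r+m}, a_{r+m}) = (w_r, -a_r)."""
    W_half = rng.standard_normal((m, d))
    a_half = rng.choice([-1.0, 1.0], size=m)
    W = np.concatenate([W_half, W_half], axis=0)
    a = np.concatenate([a_half, -a_half])
    b = np.full(2 * m, float(B))
    return W, b, a
```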
**Definition 3.7** (\((\lambda_{0},\delta,n)\)-non-degenerate distribution, (Arora et al., 2019)).: _A distribution \(\mathcal{D}\) over \(\mathbb{R}^{d}\times\mathbb{R}\) is \((\lambda_{0},\delta,n)\)-non-degenerate, if for \(n\) i.i.d. samples \(\{(x_{i},y_{i})\}_{i=1}^{n}\) from \(\mathcal{D}\), with probability \(1-\delta\) we have \(\lambda_{\min}(H^{\infty}(B))\geq\lambda_{0}>0\)._
**Theorem 3.8**.: _Fix a failure probability \(\delta\in(0,1)\) and an accuracy parameter \(\epsilon\in(0,1)\). Suppose the training data \(S=\{(x_{i},y_{i})\}_{i=1}^{n}\) are i.i.d. samples from a \((\lambda,\delta,n)\)-non-degenerate distribution \(\mathcal{D}\) defined in Definition 3.7. Assume the one-hidden layer neural network is initialized by symmetric initialization in Definition 3.6. Further, assume the parameter settings in Theorem 3.1 except we let \(m\geq\widetilde{\Omega}\left(\lambda(B)^{-6}n^{6}\exp(-B^{2})\right)\). Consider any loss function \(\ell:\mathbb{R}\times\mathbb{R}\rightarrow[0,1]\) that is \(1\)-Lipschitz in its first argument. Then with probability at least \(1-2\delta-e^{-\Omega(n)}\) over the randomness in symmetric initialization of \(W(0)\in\mathbb{R}^{m\times d}\) and \(a\in\mathbb{R}^{m}\) and the training samples, the two layer neural network \(f(W(t),b(t),a)\) trained by gradient descent for \(t\geq\Omega(\frac{1}{n\lambda(B)}\log\frac{n\log(1/\delta)}{\epsilon})\) iterations has empirical Rademacher complexity (see its formal definition in Definition C.1 in Appendix) bounded as_
\[\mathcal{R}_{S}(\mathcal{F})\] \[\leq\sqrt{\frac{y^{\top}(H^{\infty}(B))^{-1}y\cdot 8e^{-B^{2}/2}}{n }}+\widetilde{O}\left(\frac{e^{-B^{2}/4}}{n^{1/2}}\right)\]
_and the population loss \(L_{\mathcal{D}}(f)=\mathbb{E}_{(x,y)\sim\mathcal{D}}[\ell(f(x),y)]\) can be upper bounded as_
\[L_{\mathcal{D}}(f(W(t),b(t),a)) \tag{3.1}\] \[\leq\sqrt{\frac{y^{\top}(H^{\infty}(B))^{-1}y\cdot 32e^{-B^{2}/2}}{n }}+\widetilde{O}\left(\frac{1}{n^{1/2}}\right).\]
To show good generalization, we need a larger width: the second term in the Rademacher complexity bound diminishes with \(m\), and to make this term \(O(1/\sqrt{n})\), the width needs to have a \((n/\lambda(B))^{6}\) dependence as opposed to \((n/\lambda(B))^{4}\) for convergence. At first glance of our generalization result, it may seem that we can make the Rademacher complexity arbitrarily small by increasing \(B\). However, recall from the discussion of Theorem 3.1 that the smallest eigenvalue of \(H^{\infty}(B)\) also has an \(\exp(-B^{2}/2)\) dependence. Thus, in the worst case, the \(\exp(-B^{2}/2)\) factor gets canceled, and sparsity will not hurt the network's generalization.
Before we present the proof, we make a corollary of Theorem 3.8 for the zero-initialized bias case.
**Corollary 3.9**.: _Take the same setting as in Theorem 3.8 except now the biases are initialized as zero, i.e., \(B=0\). Then, if we let \(m\geq\widetilde{\Omega}(\lambda(0)^{-6}n^{6})\), the empirical Rademacher complexity and population loss are both bounded by_
\[\mathcal{R}_{S}(\mathcal{F}),\ L_{\mathcal{D}}(f(W(t),b(t),a))\] \[\leq\sqrt{\frac{y^{\top}(H^{\infty}(0))^{-1}y\cdot 32}{n}}+ \widetilde{O}\left(\frac{1}{n^{1/2}}\right).\]
Corollary 3.9 requires the network width \(m\geq\widetilde{\Omega}((n/\lambda(0))^{6})\) which significantly improves upon the previous result in (Song and Yang, 2019, Theorem G.7) \(m\geq\widetilde{\Omega}(n^{16}\operatorname{poly}(1/\lambda(0)))\) (including the dependence on the rescaling factor \(\kappa\)) which is a much wider network.
**Generalization Bound via Least Eigenvalue.** Note that in Theorem 3.8, the worst case of the first term in the generalization bound in Equation (3.1) is given by \(\widetilde{O}(\sqrt{1/\lambda(B)})\). Hence, the least eigenvalue \(\lambda(B)\) of the NTK matrix can significantly affect the generalization bound. Previous works (Oymak and Soltanolkotabi, 2020; Song et al., 2021) established lower bounds on \(\lambda(B)\) with an explicit \(1/n^{2}\) dependence on \(n\) under the \(\delta\) data separation assumption (see Theorem 3.11), which clearly makes the generalization bound vacuous. This motivates us to provide a tighter bound (desirably independent of \(n\)) on the least eigenvalue of the infinite-width NTK in order to make the generalization bound in Theorem 3.8 valid and useful. It turns out that there are major difficulties in proving a better lower bound in the general case. However, we are able to present a better lower bound when we restrict the domain to certain (data-dependent) regions by utilizing trainable bias.
**Definition 3.10** (Data-dependent Region).: _Let \(p_{ij}=\mathbb{P}_{w\sim\mathcal{N}(0,I)}[w^{\top}x_{i}\geq B,\ w^{\top}x_{j} \geq B]\) for \(i\neq j\). Define the (data-dependent) region \(\mathcal{R}=\{a\in\mathbb{R}^{n}:\ \sum_{i\neq j}a_{i}a_{j}p_{ij}\geq\min_{i^{ \prime}\neq j^{\prime}}p_{i^{\prime}j^{\prime}}\sum_{i\neq j}a_{i}a_{j}\}.\)_
Notice that \(\mathcal{R}\) is non-empty for any input data-set since \(\mathbb{R}^{n}_{+}\subset\mathcal{R}\) where \(\mathbb{R}^{n}_{+}\) denotes the set of vectors with non-negative entries, and \(\mathcal{R}=\mathbb{R}^{n}\) if \(p_{ij}=p_{i^{\prime}j^{\prime}}\) for all \(i\neq i^{\prime},j\neq j^{\prime}\).
**Theorem 3.11** (Restricted Least Eigenvalue).: _Let \(X=(x_{1},\ldots,x_{n})\) be points in \(\mathbb{R}^{d}\) with \(\left\|x_{i}\right\|_{2}=1\) for all \(i\in[n]\) and \(w\sim\mathcal{N}(0,I_{d})\). Suppose that there exists \(\delta\in[0,\sqrt{2}]\) such that_
\[\min_{i\neq j\in[n]}(\left\|x_{i}-x_{j}\right\|_{2},\left\|x_{i}+x_{j}\right\| _{2})\geq\delta.\]
_Let \(B\geq 0\). Consider the minimal eigenvalue of \(H^{\infty}\) over the data-dependent region \(\mathcal{R}\) defined above, i.e., let \(\lambda:=\min_{\left\|a\right\|_{2}=1,\ a\in\mathcal{R}}a^{\top}H^{\infty}a\). Then, \(\lambda\geq\max(0,\lambda^{\prime})\) where_
\[\lambda^{\prime}\geq\max\left(\frac{1}{2}-\frac{B}{\sqrt{2\pi}},\ \left(\frac{1}{B}-\frac{1}{B^{3}}\right)\frac{e^{-B^{2}/2}}{\sqrt{2\pi}}\right)-e^{-B^{2}/(2-\delta^{2}/2)}\frac{\pi-\arctan\left(\frac{\delta\sqrt{1-\delta^{2}/4}}{1-\delta^{2}/2}\right)}{2\pi}. \tag{3.2}\]
To demonstrate the usefulness of our result, take the bias initialization \(B=0\) in Equation (3.2): the first term equals \(1/2\) and the exponential prefactor equals \(1\), so the bound reduces to \(1/(2\pi)\cdot\arctan((\delta\sqrt{1-\delta^{2}/4})/(1-\delta^{2}/2))\approx\delta/(2\pi)\) when \(\delta\) is close to \(0\), whereas (Song et al., 2021) yields a bound of \(\delta/n^{2}\). On the other hand, if the data has maximal separation, i.e., \(\delta=\sqrt{2}\), we get a \(\max\left(\frac{1}{2}-\frac{B}{\sqrt{2\pi}},\ \left(\frac{1}{B}-\frac{1}{B^{3}}\right)\frac{e^{-B^{2}/2}}{\sqrt{2\pi}}\right)\) lower bound, whereas (Song et al., 2021) yields a bound of \(\exp(-B^{2}/2)\sqrt{2}/n^{2}\). Connecting to our convergence result in Theorem 3.1: as long as the error vector \(f(t)-y\) lies in the region \(\mathcal{R}\), the error can be reduced at a much faster rate than the (pessimistic) rate with \(1/n^{2}\) dependence from the previous studies.
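Since the right-hand side of Equation (3.2) is fully explicit, it is cheap to tabulate. The sketch below (ours, purely illustrative; the theorem itself only asserts \(\lambda\geq\max(0,\lambda^{\prime})\)) evaluates \(\lambda^{\prime}\) for a few \((B,\delta)\) pairs:

```python
import numpy as np

def lambda_prime(B: float, delta: float) -> float:
    """Evaluate the restricted least-eigenvalue lower bound of Eq. (3.2)."""
    # First term: the better of the two candidate bounds; the second
    # candidate diverges as B -> 0, so it is only used for B > 0.
    first = 0.5 - B / np.sqrt(2 * np.pi)
    if B > 0:
        first = max(first, (1 / B - 1 / B**3) * np.exp(-B**2 / 2) / np.sqrt(2 * np.pi))
    # Second term: the correction driven by the data separation delta.
    angle = np.arctan(delta * np.sqrt(1 - delta**2 / 4) / (1 - delta**2 / 2))
    second = np.exp(-B**2 / (2 - delta**2 / 2)) * (np.pi - angle) / (2 * np.pi)
    return first - second  # Theorem 3.11 then gives lambda >= max(0, lambda')

for B in (0.0, 0.5, 1.0):
    for delta in (0.1, 1.0, np.sqrt(2) - 1e-9):  # delta = sqrt(2) exactly divides by zero
        print(f"B={B:.1f}, delta={delta:.3f}: lambda' = {lambda_prime(B, delta):+.4f}")
```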
**Remark 3.12**.: _The lower bound on the restricted smallest eigenvalue \(\lambda\) in Theorem 3.11 is_ **independent of \(n\)**_, which makes the worst-case generalization bound in Theorem 3.8 \(O(1)\) under a constant data separation margin (note that this is optimal since the loss is bounded). Such a lower bound is much sharper than the previous results with an explicit \(1/n^{2}\) dependence, which yield a vacuous generalization bound of \(O(n)\). This improvement relies on the condition that the label vector lies in the region \(\mathcal{R}\), which can be achieved by a simple label-shifting strategy:_
_Since \(\mathbb{R}_{+}^{n}\subset\mathcal{R}\), the condition can be easily achieved by training the neural network on the shifted labels \(y+C\) (with appropriate broadcast) where \(C\) is a constant such that \(\min_{i}y_{i}+C\geq 0\)._
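A minimal illustration of this shift (our sketch; `y` is an arbitrary label vector):

```python
import numpy as np

y = np.array([-1.3, 0.2, -0.7, 2.1])   # original labels, possibly negative
C = max(0.0, -y.min())                  # smallest constant with min_i y_i + C >= 0
y_shifted = y + C                        # lies in R^n_+, hence in the region R
# Train on y_shifted; subtract C from the network output at prediction time.
print(y_shifted)                         # [0.  1.5 0.6 3.4]
```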
Careful readers may notice that in the proof of Theorem 3.11 in Appendix B, the restricted least eigenvalue on \(\mathbb{R}_{+}^{n}\) is always positive even if the data separation is zero, which would imply that the network can always exhibit good generalization. However, we need to point out that the generalization bound in Theorem 3.8 is meaningful only when the training is successful: when the data separation is zero, the limiting NTK is no longer positive definite and the training loss cannot be minimized toward zero.
### Key Ideas in the Proof of Theorem 3.8
Since each neuron weight and bias move little from their initialization, a natural approach is to bound the generalization via localized Rademacher complexity, and then apply appropriate concentration bounds to derive generalization. The main effort of our proof is devoted to bounding the weight movement in order to bound the localized Rademacher complexity. If we directly take the setting in Theorem 3.1 and compute the network's localized Rademacher complexity, we encounter a term which does not diminish with the number of samples \(n\) and can be as large as \(O(\sqrt{n})\), since the network outputs non-zero values at initialization. Arora et al. (2019) and Song and Yang (2019) resolved this issue by instead initializing the neural network weights by \(\mathcal{N}(0,\kappa^{2}I)\) to force the neural network to output something close to zero at initialization, with the magnitude of \(\kappa\) chosen to balance different terms in the final Rademacher complexity bound. A similar approach could be adapted to our case by initializing the weights by \(\mathcal{N}(0,\kappa^{2}I)\) and the biases by \(\kappa B\). However, the drawback of such an approach is that the effect of \(\kappa\) on all previously established convergence results would need to be carefully tracked or re-derived. In particular, in order to guarantee convergence, the neural network's width needs to have a polynomial dependence on \(1/\kappa\), where \(1/\kappa\) has a polynomial dependence on \(n\) and \(1/\lambda\), which means their network width needs to be larger to compensate for the initialization scaling. We resolve this issue by the symmetric initialization of Definition 3.6, which has no effect (up to constant factors) on previously established convergence results, see (Munteanu et al., 2022). Symmetric initialization allows us to organically reuse the results derived for convergence in the generalization analysis, which leads to a more succinct analysis. Further, we replace the \(\ell_{1}\)-\(\ell_{2}\) norm upper bound by finer inequalities in various places in the original analysis. All these improvements, combined with our sparsity-inducing initialization, lead to the following sparsity-dependent upper bound on the change of the weight matrix in Frobenius norm.
**Lemma 3.13**.: _Assume the one-hidden layer neural network is initialized by symmetric initialization in Definition 3.6. Further, assume the parameter settings in Theorem 3.1. Then with probability at least \(1-\delta-e^{-\Omega(n)}\) over the random initialization, we have for all \(t\geq 0\),_
\[\left\|[W,b](t)-[W,b](0)\right\|_{F}\leq\sqrt{y^{\top}(H^{\infty})^{-1}y}+O\left(\frac{n}{\lambda}\left(\frac{\exp(-B^{2}/2)\log(n/\delta)}{m}\right)^{\frac{1}{4}}\right)+O\left(\frac{n\sqrt{R\exp(-B^{2}/2)}}{\lambda}\right)+\frac{n}{\lambda^{2}}\cdot O\left(\exp(-B^{2}/4)\sqrt{\frac{\log(n^{2}/\delta)}{m}}+R\exp(-B^{2}/2)\right)\]

_where \(R=R_{w}+R_{b}\) denotes the maximum magnitude of the neuron weight and bias change._
By Lemma A.9 and Lemma A.11 in the Appendix, we have \(R=\widetilde{O}(\frac{n}{\lambda\sqrt{m}})\). Plugging in and setting \(B=0\), we get \(\left\|[W,b](t)-[W,b](0)\right\|_{F}\leq\sqrt{y^{\top}(H^{\infty})^{-1}y}+ \widetilde{O}(\frac{n}{\lambda m^{1/4}}+\frac{n^{3/2}}{\lambda^{3/2}m^{1/4}}+ \frac{n}{\lambda^{2}\sqrt{m}}+\frac{n^{2}}{\lambda^{3}\sqrt{m}})\). On the other hand, taking \(\kappa=1\), (Song and Yang, 2019, Lemma G.6) yields a bound of \(\left\|W(t)-W(0)\right\|_{F}\leq\sqrt{y^{\top}(H^{\infty})^{-1}y}+\widetilde{ O}(\frac{n}{\lambda}+\frac{n^{7/2}\operatorname{poly}(1/\lambda)}{m^{1/4}})\). Notice that the \(\widetilde{O}(\frac{n}{\lambda})\) term has no dependence on \(1/m\) and is removed by symmetric initialization in our analysis. We further improve the upper bound's dependence on \(n\) by a factor of \(n^{2}\).
The full proof of Theorem 3.8 is deferred to Appendix C.
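For readers who want to experiment with the construction, the following numpy sketch implements the commonly used form of symmetric initialization (paired neurons with identical weights and biases but opposite output signs, so the network output is identically zero at initialization); the exact statement of Definition 3.6 may differ in details, and the bias convention here (\(w^{\top}x-B\), matching the indicator \(\mathbb{I}(w^{\top}x\geq B)\)) is our assumption.

```python
import numpy as np

def symmetric_init(m: int, d: int, B: float, seed: int = 0):
    """Symmetric initialization of a one-hidden-layer ReLU network.

    The first m/2 neurons are drawn at random; the second m/2 copy their
    weights and biases but negate the output-layer sign, so the paired
    contributions cancel and f(x) = 0 for every input at initialization.
    """
    assert m % 2 == 0
    rng = np.random.default_rng(seed)
    half = m // 2
    W_half = rng.standard_normal((half, d))
    a_half = rng.choice([-1.0, 1.0], size=half)
    W = np.vstack([W_half, W_half])          # duplicated weights
    b = np.full(m, -B)                        # sparsity-inducing constant bias
    a = np.concatenate([a_half, -a_half])     # opposite output signs
    return W, b, a

def forward(x, W, b, a):
    return a @ np.maximum(W @ x + b, 0.0)

W, b, a = symmetric_init(m=64, d=8, B=1.0)
x = np.random.default_rng(1).standard_normal(8)
print(forward(x, W, b, a))  # exactly 0.0 at initialization
```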
### Key Ideas in the Proof of Theorem 3.11
In this section, we analyze the smallest eigenvalue of the limiting NTK \(H^{\infty}\) with \(\delta\) data separation. We first note that \(H^{\infty}\succeq\mathbb{E}_{w\sim\mathcal{N}(0,I)}\left[\mathbb{I}(Xw\geq B)\mathbb{I}(Xw\geq B)^{\top}\right]\) and, for a fixed vector \(a\), we are interested in a lower bound on \(\mathbb{E}_{w\sim\mathcal{N}(0,I)}[|a^{\top}\mathbb{I}(Xw\geq B)|^{2}]\). In previous works, Oymak and Soltanolkotabi (2020) showed a lower bound of \(\Omega(\delta/n^{2})\) for zero-initialized bias, and later Song et al. (2021a) generalized this result to a lower bound of \(\Omega(e^{-B^{2}/2}\delta/n^{2})\) for non-zero initialized bias. Both lower bounds have a dependence of \(1/n^{2}\). Their approach uses an intricate Markov's inequality argument and then proves a lower bound on \(\mathbb{P}[|a^{\top}\mathbb{I}(Xw\geq B)|\geq c\left\|a\right\|_{\infty}]\). The lower bound is proved by only considering the contribution from the largest coordinate of \(a\) and treating all other values as noise. It is not surprising that the lower bound has a factor of \(1/n\), since \(a\) can have identical entries. On the other hand, the diagonal entries give an \(\exp(-B^{2}/2)\) upper bound, and thus there is a \(1/n^{2}\) gap between the two. Now, we give some evidence suggesting that the \(1/n^{2}\) dependence may not be tight in some cases. Consider the following scenario: assume \(n\ll d\) and the data set is orthonormal. For any unit-norm vector \(a\), we have
\[a^{\top}\mathop{\mathbb{E}}_{w\sim\mathcal{N}(0,I)}\left[\mathbb{I}(Xw\geq B)\mathbb{I}(Xw\geq B)^{\top}\right]a=\sum_{i,j\in[n]}a_{i}a_{j}\,\mathbb{P}[w^{\top}x_{i}\geq B,\ w^{\top}x_{j}\geq B]=p_{0}\left\|a\right\|_{2}^{2}+p_{1}\sum_{i\neq j}a_{i}a_{j}=p_{0}-p_{1}+p_{1}\left(\sum_{i}a_{i}\right)^{2}>p_{0}-p_{1}\]
where \(p_{0},p_{1}\in[0,1]\) are defined such that, due to the spherical symmetry of the standard Gaussian, we may set \(p_{0}=\mathbb{P}[w^{\top}x_{i}\geq B],\ \forall i\in[n]\), and \(p_{1}=\mathbb{P}[w^{\top}x_{i}\geq B,\ w^{\top}x_{j}\geq B],\ \forall i,j\in[n],\ i\neq j\). Notice that \(p_{0}>p_{1}\). Since this is true for all \(a\in\mathbb{R}^{n}\), we get a lower bound of \(p_{0}-p_{1}\) with no explicit dependence on \(n\), and this holds for all \(n\leq d\). When \(d\) is large and \(n=d/2\), this improves on the previous bound by a factor of \(\Theta(d^{2})\). We hope to apply the above analysis to general datasets. However, it turns out that the product terms (with \(i\neq j\)) above create major difficulties in the general case. Due to such technical difficulties, we prove a better lower bound by utilizing the data-dependent region \(\mathcal{R}\) defined in Definition 3.10. Let \(p_{\min}=\min_{i\neq j}p_{ij}\). Now, for \(a\in\mathcal{R}\), we have

\[\mathop{\mathbb{E}}_{w\sim\mathcal{N}(0,I)}\left[\left(a^{\top}\mathbb{I}(Xw\geq B)\right)^{2}\right]\geq\left(p_{0}-p_{\min}\right)\left\|a\right\|_{2}^{2}+p_{\min}\left\|a\right\|_{2}^{2}+p_{\min}\sum_{i\neq j}a_{i}a_{j}\geq\left(p_{0}-\min_{i\neq j}p_{ij}\right)\left\|a\right\|_{2}^{2}.\]
Thus, to lower bound the smallest eigenvalue on this region, we need an upper bound on \(\min_{i\neq j}p_{ij}\). To this end, first consider a fixed pair of training data \(x_{i}\) and \(x_{j}\) and their associated probability \(p_{ij}\) (see Definition 3.10). To compute \(p_{ij}\), we decompose \(x_{j}\) into two components: one along the direction of \(x_{i}\) and the other orthogonal to \(x_{i}\). We can then project the Gaussian vector onto these two directions; since the two directions are orthogonal, the projections are independent. This allows \(p_{ij}\) to be computed via geometric arguments. It turns out that this probability is maximized when the data separation is smallest. We defer the details of the proof of Theorem 3.11 to Appendix B.
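The quantities \(p_{ij}\) appearing above are Gaussian orthant probabilities, and the resulting \(p_{0}-\min_{i\neq j}p_{ij}\) bound is easy to probe numerically. A small Monte Carlo sketch (ours, for intuition only):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, B, S = 8, 16, 0.5, 200_000

X = rng.standard_normal((n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)    # unit-norm data points

W = rng.standard_normal((S, d))                  # S samples of w ~ N(0, I)
A = (W @ X.T >= B).astype(float)                 # S x n activation indicators

p0 = A.mean()                                    # P[w.x_i >= B], equal for all i
P = (A.T @ A) / S                                # pairwise estimates of p_ij
p_min = P[~np.eye(n, dtype=bool)].min()
print(f"p0 ~ {p0:.4f}, min p_ij ~ {p_min:.4f}, bound p0 - min p_ij ~ {p0 - p_min:.4f}")
```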
## 4 Experiments
In this section, we verify our result that the activation of neural networks remains sparse during training when the bias parameters are initialized as non-zero.
**Settings.** We train a 6-layer multi-layer perceptron (MLP) of width 1024 with trainable bias terms on MNIST image classification (LeCun et al., 2010). The biases of the fully-connected layers are initialized as \(0,-0.5\) and \(-1\). For the weights in the linear layers, we use Kaiming initialization (He et al., 2015), which samples from an appropriately scaled Gaussian distribution. The traditional MLP architecture only has linear layers with ReLU activation. However, we found that with the sparsity-inducing initialization the magnitude of the activation decreases geometrically layer-by-layer, which leads to vanishing gradients so that the network cannot be trained. Thus, we made a slight modification to the MLP architecture and include an extra Batch Normalization after each ReLU to normalize the activation. Our MLP implementation is based on (Zhu et al., 2021). We train the neural network by stochastic gradient descent with a small learning rate 5e-3 to make sure the training stays in the NTK regime. The sparsity is measured as the total number of activated neurons (i.e., neurons whose ReLU outputs a positive value) divided by the total number of neurons, averaged over every SGD batch. We plot how the sparsity patterns change for different layers during training.

Figure 1: Sparsity pattern on different layers across different training iterations for three different bias initializations. The \(x\) and \(y\) axis denote the iteration number and sparsity level, respectively. The models achieve \(97.9\%\), \(97.7\%\) and \(97.3\%\) accuracy after training, respectively. Note that, in Figure (a), the lines of layers 1-5 overlap, while layer 0 differs.
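A minimal PyTorch sketch of this measurement setup (our reconstruction for illustration, not the authors' code from (Zhu et al., 2021); layer sizes and the bias constant mirror the text):

```python
import torch
import torch.nn as nn

BIAS_INIT = -0.5   # also try 0.0 and -1.0

class Block(nn.Module):
    """Linear -> ReLU -> BatchNorm, with constant (trainable) bias initialization."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.linear = nn.Linear(d_in, d_out)
        nn.init.kaiming_normal_(self.linear.weight)
        nn.init.constant_(self.linear.bias, BIAS_INIT)
        self.bn = nn.BatchNorm1d(d_out)   # after ReLU, to counter shrinking activations

    def forward(self, x):
        h = torch.relu(self.linear(x))
        self.sparsity = (h > 0).float().mean().item()  # fraction of active neurons
        return self.bn(h)

width, depth = 1024, 6
blocks = [Block(28 * 28, width)] + [Block(width, width) for _ in range(depth - 1)]
model = nn.Sequential(*blocks, nn.Linear(width, 10))

x = torch.randn(128, 28 * 28)   # stand-in for a batch of flattened MNIST images
model.train()
_ = model(x)
print([f"{blk.sparsity:.2f}" for blk in blocks])   # per-layer activation fraction
```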
**Observation and Implication.** As demonstrated in Figure 1, when we initialize the bias with three different values, the sparsity patterns are stable across all layers during training: when the bias is initialized as \(0\) and \(-0.5\), the sparsity change is within \(2.5\%\); and when the bias is initialized as \(-1.0\), the sparsity change is within \(10\%\). Meanwhile, by increasing the initialization magnitude of the bias, the sparsity level increases with only a marginal drop in accuracy. This implies that our theory can be extended to the multi-layer setting (with some extra care to cope with vanishing gradients), and that multi-layer neural networks can also benefit from the sparsity-inducing initialization and enjoy a reduction of computational cost. Another interesting observation is that the input layer (layer 0) has a different sparsity pattern from the other layers, while all the remaining layers behave similarly.
## 5 Discussion
In this work, we study training one-hidden-layer overparameterized ReLU networks in the NTK regime with biases initialized as some constants rather than zero, so that the activation remains sparse during the entire training process. We showed improved sparsity-dependent results on convergence, generalization, and the restricted least eigenvalue. One immediate future direction is to generalize our analysis to multi-layer neural networks. On the other hand, in practice, label shifting is never used. Although we show that the least eigenvalue can be much better than previous results when we impose the additional restricted-region assumption, an open problem is whether it is possible to improve the infinite-width NTK's least eigenvalue's dependence on the sample size without such an assumption, or even whether a lower bound depending purely on the data separation is possible, so that the worst-case generalization bound does not scale with the sample size. We leave these as future work.
|
2306.06929 | Decoding Neutron Star Observations: Revealing Composition through
Bayesian Neural Networks | We exploit the great potential offered by Bayesian Neural Networks (BNNs) to
directly decipher the internal composition of neutron stars (NSs) based on
their macroscopic properties. By analyzing a set of simulated observations,
namely NS radius and tidal deformability, we leverage BNNs as effective tools
for inferring the proton fraction and sound speed within NS interiors. To
achieve this, several BNNs models were developed upon a dataset of $\sim$ 25K
nuclear EoS within a relativistic mean-field framework, obtained through
Bayesian inference that adheres to minimal low-density constraints. Unlike
conventional neural networks, BNNs possess an exceptional quality: they provide
a prediction uncertainty measure. To simulate the inherent imperfections
present in real-world observations, we have generated four distinct training
and testing datasets that replicate specific observational uncertainties. Our
initial results demonstrate that BNNs successfully recover the composition with
reasonable levels of uncertainty. Furthermore, using mock data prepared with
the DD2, a different class of relativistic mean-field model utilized during
training, the BNN model effectively retrieves the proton fraction and speed of
sound for neutron star matter. | Valéria Carvalho, Márcio Ferreira, Tuhin Malik, Constança Providência | 2023-06-12T08:08:32Z | http://arxiv.org/abs/2306.06929v2 | # Decoding Neutron Star Observations: Revealing Composition through Bayesian Neural Networks
###### Abstract
We exploit the great potential offered by Bayesian Neural Networks (BNNs) to directly decipher the internal composition of neutron stars (NSs) based on their macroscopic properties. By analyzing a set of simulated observations, namely NS radius and tidal deformability, we leverage BNNs as effective tools for inferring the proton fraction and sound speed within NS interiors. To achieve this, several BNNs models were developed upon a dataset of \(\sim\) 25K nuclear EoS within a relativistic mean-field framework, obtained through Bayesian inference that adheres to minimal low-density constraints. Unlike conventional neural networks, BNNs possess an exceptional quality: they provide a prediction uncertainty measure. To simulate the inherent imperfections present in real-world observations, we have generated four distinct training and testing datasets that replicate specific observational uncertainties. Our initial results demonstrate that BNNs successfully recover the composition with reasonable levels of uncertainty. Furthermore, using mock data prepared with the DD2, a different class of relativistic mean-field model utilized during training, the BNN model effectively retrieves the proton fraction and speed of sound for neutron star matter.
## I Introduction
The extreme matter conditions inside neutron stars (NSs) are impossible to recreate in terrestrial laboratories, making the equation of state (EoS) of the dense and asymmetric nuclear matter realized inside NSs an interesting and still unknown quantity. Modeling NS matter is restricted by constraints coming from the observations of massive NSs: PSR J1614-2230 [1; 2; 3] with \(M=1.908\pm 0.016\,M_{\odot}\), PSR J0348+0432 with \(M=2.01\pm 0.04\,M_{\odot}\) [4], PSR J0740+6620 with \(M=2.08\pm 0.07\,M_{\odot}\) [5], and PSR J1810+1744 with \(M=2.13\pm 0.04\,M_{\odot}\) [6]. Additionally, theoretical calculations, such as chiral effective field theory (\(\chi\)EFT), are applicable only at very low densities, while perturbative quantum chromodynamics (pQCD) is reliable only at extremely high densities. Recently, multi-messenger astrophysics has become an exciting field, allowing us to probe NS physics by connecting information carried by different sources, such as gravitational waves, photons, and neutrinos. The detection by the LIGO/Virgo collaboration of compact binary coalescence events, such as GW170817 [7] and GW190425 [8], allowed us to further constrain the EoS of NS matter. Recent results from NICER (Neutron star Interior Composition ExploreR) on PSR J0030+0451 [9; 10] and on the radius of PSR J0740+6620 [11; 12; 13] were also important in restricting neutron star physics. Future observations from experiments such as the enhanced X-ray Timing and Polarimetry mission (eXTP) [14; 15], the Spectroscopic Time-Resolving Observatory for Broadband Energy X-rays (STROBE-X) [16], and the Square Kilometre Array telescope [17] will allow for the determination of NS radii and masses with an uncertainty of a few %.
Numerous statistical methods have been extensively explored to determine the most probable EoS based on observational data of NSs. These methods include Bayesian inference [18; 19] and Gaussian processes [20]. However, even if the EoS is known with high precision, the challenge remains in constraining the composition of neutron star matter. Previous studies have highlighted the impossibility of recovering nuclear matter properties solely from the \(\beta\)-equilibrium EoS without knowledge of the compositions (or symmetry energy at high densities) [21; 22; 23] or without information about the EoS of symmetric nuclear matter in conjunction with compositions [24]. However, these studies were either limited to meta-models or based on restricted models. Motivated by this, we aim to construct an artificial neural network that directly maps NS observational properties to EoS composition using a large set of EoS derived from the Relativistic Mean Field (RMF) approach.
Deep learning is another field that has become ubiquitous in attempts to solve the dense-matter EoS problem, and all kinds of physics problems [25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44]. The inference problem of determining the EoS from observational data can be roughly divided into two main categories: the reconstruction of the EoS (pressure or speed of sound) from either mass-radius or tidal-deformability data [25; 26; 27; 28; 29; 30; 31; 32; 34; 35; 37; 40; 41; 44], or the direct inference of nuclear matter saturation properties [33; 36; 38; 39; 42; 43]. As an example, Fujimoto et al. [25; 26; 27] explored a framework based on neural networks (NNs) where an observational set of NS masses, radii and respective variances was used as input and the speed of sound squared as output. A similar perspective was followed in [30]. The works [33; 36] fall into the second category: determining specific saturation properties of nuclear matter, in this case the density dependence of the nuclear symmetry energy, directly from observational NS data.
However, the majority of these NN-based models face a considerable drawback, namely the lack of uncertainty quantification. Questions such as _how confident is a model about its predictions?_ are the main focus of the present work, in which uncertainty modeling is explored by implementing
a very appealing approach called Bayesian Neural Networks (BNNs) [45]. BNNs have already started being used in different fields of physics [46; 47] and have proven useful for uncertainty quantification. Our goal is to implement an inference framework that attaches an uncertainty to any model prediction. We analyze the density dependence of the proton fraction and speed of sound inside NS matter. For that, several synthetic datasets of observational data will be constructed, and the impact of adding tidal-deformability information on the model predictions will be analyzed. Let us make clear an important distinction between our work and the majority of studies: instead of applying the widely used approach of parameterizing the EoS, e.g., with polytropes, we use a specific family of nuclear models to construct the set of possible EoS. Despite their flexibility in exploring the entire region of possible EoS, generic agnostic parametrizations are unable to model and track the different degrees of freedom inside NSs. The use of a microscopic model has the crucial advantage of accessing the density dependence of all degrees of freedom, thus enabling us to analyze the proton fraction.
The paper is organized as follows. A basic introduction to BNNs is presented in Sec. II. The chosen family of nuclear models is presented in Sec. III, together with the Bayesian inference framework employed to construct the EoS dataset. The generation of the synthetic observation datasets is explained in Sec. IV. The model results for the proton fraction and speed of sound are discussed in Sec. V, and lastly, the conclusions are drawn in Sec. VII.
## II Bayesian neural networks
Despite the great success of (feedforward) neural networks (NNs) in different fields of science, they come with some drawbacks that require special attention. NNs are susceptible to over-fitting and are unable to assess the uncertainty of their predictions, which may lead to overconfident predictions. Bayesian Neural Networks (BNNs) are a Bayesian framework that introduces stochastic weights into NNs, making them uncertainty-aware models [48].
NNs are capable of representing arbitrary functions and are composed, in their simplest architecture, of a sequence of blocks where a linear transformation is followed by a nonlinear operation (activation function). To simplify the notation, let us denote a NN by \(\mathbf{y}=f_{\mathbf{\theta}}(\mathbf{x})\), where \(\mathbf{\theta}=(\mathbf{W},\mathbf{b})\) represents all NN weights. The vectors \(\mathbf{W}\) and \(\mathbf{b}\) denote, respectively, the connections (weights) and biases of all linear transformations of the network, which completely define the NN model. Training the NN consists in numerically finding, via the back-propagation algorithm, the \(\mathbf{\theta}^{*}\) that minimizes a chosen cost function on the training data. This traditional approach of estimating a single model defined by \(\mathbf{\theta}^{*}\) ignores all other possible parametrizations \(\mathbf{\theta}\).
BNNs simulate multiple possible NNs models by introducing stochastic weights. These networks operate by first choosing a functional model, i.e., a network architecture, and then the stochastic model, i.e., the probability distributions for the weights. Bayesian inference is then required to train the network by defining the likelihood function of the observed data, \(P(D|\mathbf{\theta})\), and the prior probability distribution over the model parameters, \(P(\mathbf{\theta})\). It is then possible to employ Bayes theorem and obtain the posterior probability distribution, i.e. the probability of the model parameters given the data:
\[P(\mathbf{\theta}|D)=\frac{P(D|\mathbf{\theta})P(\mathbf{\theta})}{P(D)} \tag{1}\]
where \(P(D)=\int_{\mathbf{\theta}^{\prime}}P(D|\mathbf{\theta}^{\prime})P(\mathbf{\theta}^{\prime })d\mathbf{\theta}^{\prime}\) is the evidence. Having a distribution on the weights, the BNNs predictions become a Bayesian model average: the probability distribution of some unknown \(\mathbf{y}^{*}\) given an input \(\mathbf{x}^{*}\) is
\[P(\mathbf{y}^{*}|\mathbf{x}^{*},D)=\int_{\mathbf{\theta}}P(\mathbf{y}^{*}|\mathbf{x}^{*},\mathbf{ \theta})P(\mathbf{\theta}|D)d\mathbf{\theta}. \tag{2}\]
\(P(\mathbf{y}^{*}|\mathbf{x}^{*},\mathbf{\theta})\) is the likelihood of our data, the distribution that comes out of the network and captures the noise present in the data, while \(P(\mathbf{\theta}|D)\) is the posterior distribution of the weights, which encodes the uncertainty of the model. Another advantage of these networks is that they capture two types of uncertainty: aleatoric uncertainty, i.e., uncertainty in the data, and epistemic uncertainty, i.e., uncertainty in the model estimation, described by \(P(\mathbf{y}^{*}|\mathbf{x}^{*},\mathbf{\theta})\) and \(P(\mathbf{\theta}|D)\), respectively. However, solving Eq. 2 is a very complex task because the posterior \(P(\mathbf{\theta}|D)\) depends on the evidence \(P(D)=\int_{\mathbf{\theta}^{\prime}}P(D|\mathbf{\theta}^{\prime})P(\mathbf{\theta}^{\prime})d\mathbf{\theta}^{\prime}\), a non-analytic expression that requires marginalizing over all model parameters. Fortunately, there are multiple ways of tackling it, using either Markov Chain Monte Carlo or variational inference (more information can be found in [48]). We implement the variational inference method, presented in the following.
### Variational inference formalism
The variational inference method aims at approximating the true posterior \(P(\mathbf{\theta}|D)\) by a variational posterior \(q_{\mathbf{\phi}}(\mathbf{\theta})\), using the Kullback-Leibler (KL) divergence. The KL divergence is a measure of dissimilarity between two probability distributions: it vanishes when the variational posterior and the true posterior are identical and is positive otherwise. Fundamentally, we want to find the variational posterior that minimizes the KL divergence to the true posterior:
\[q_{\phi^{*}}=\operatorname*{arg\,min}_{q_{\phi}}\ \text{KL}(q_{\phi}(\mathbf{\theta})||P(\mathbf{ \theta}|D)), \tag{3}\]
where the KL divergence is defined by
\[\text{KL}(q_{\phi}(\mathbf{\theta})||P(\mathbf{\theta}|D)) =\mathbb{E}_{q_{\theta}(\mathbf{\theta})}\left[\log\left(\frac{q_{ \phi}(\mathbf{\theta})}{P(\mathbf{\theta}|D)}\right)\right] \tag{4}\] \[=\int_{\mathbf{\theta}}q_{\phi}(\mathbf{\theta})\log\left(\frac{q_{\phi}( \mathbf{\theta})}{P(\mathbf{\theta}|D)}\right)d\mathbf{\theta} \tag{5}\]
In order for the dependence on the true posterior to disappear, the last equation can be rewritten, with the help of Bayes rule Eq. 1, as
\[\text{KL} (q_{\phi}(\mathbf{\theta})||P(\mathbf{\theta}|D))=\int_{\mathbf{\theta}}q_{\phi}( \mathbf{\theta})\log\left(\frac{q_{\phi}(\mathbf{\theta})P(D)}{P(D|\mathbf{\theta})P(\mathbf{ \theta})}\right)d\mathbf{\theta}\] \[=\text{KL}(q_{\phi}(\mathbf{\theta})||P(\mathbf{\theta}))-\mathbb{E}_{q_{ \phi}(\mathbf{\theta})}(\log P(D|\mathbf{\theta}))+\log P(D)\] \[=F(D,\theta)+\log P(D), \tag{6}\]
where \(F(D,\theta)=\text{KL}(q_{\phi}(\mathbf{\theta})||P(\mathbf{\theta}))-\mathbb{E}_{q_{ \phi}(\mathbf{\theta})}(\log P(D|\mathbf{\theta}))\) is called the variational free energy. We end up with
\[\text{KL}(q_{\phi}(\mathbf{\theta})||P(\mathbf{\theta}|D))=F(D,\theta)+\log P(D). \tag{7}\]
As the last term, \(\log P(D)\), does not depend on the variational posterior and its parameters, which we are optimizing, minimizing \(\text{KL}(q_{\phi}(\mathbf{\theta})||P(\mathbf{\theta}|D))\) with respect to \(\phi\) is the same as minimizing \(F(D,\theta)\). ELBO is another important quantity, which stands for evidence lower bound, and it is defined as the negative free energy, i.e., \(\text{ELBO}=-F(D,\theta)\). Equation 7 can then be rewritten as
\[\text{KL}(q_{\phi}(\mathbf{\theta})||P(\mathbf{\theta}|D))=-\text{ELBO}+\log P(D), \tag{8}\]
or
\[\text{ELBO}=-\text{KL}(q_{\phi}(\mathbf{\theta})||P(\mathbf{\theta}|D))+\log P(D). \tag{9}\]
ELBO is called the lower bound of the evidence because \(\text{ELBO}\leq\log P(D)\). In other words, \(\text{KL}(q_{\phi}(\mathbf{\theta})||P(\mathbf{\theta}|D))\) is minimized by maximizing the evidence lower bound.
In the end, our optimization objective reduces to
\[q_{\phi^{\star}} =\operatorname*{arg\,min}_{q_{\phi}}\ \text{KL}(q_{\phi}(\mathbf{ \theta})||P(\mathbf{\theta}|D))\] \[=\operatorname*{arg\,max}_{q_{\phi}}\ \text{ELBO}=\operatorname*{arg \,min}_{q_{\phi}}F(D,\theta)\] \[=\operatorname*{arg\,min}_{q_{\phi}}\left[\text{KL}(q_{\phi}( \mathbf{\theta})||P(\mathbf{\theta}))-\mathbb{E}_{q_{\phi}(\mathbf{\theta})}(\log P(D| \mathbf{\theta}))\right].\]
The above general formalism is applied to our specific case, where we have chosen a multivariate Gaussian for the variational posterior, \(q_{\phi}(\mathbf{\theta})=\mathcal{N}(\mathbf{\mu}_{q},\mathbf{\Sigma}_{q})\), and a multivariate Gaussian with diagonal covariance matrix for the prior, \(P(\mathbf{\theta})=\mathcal{N}(\mathbf{0},\mathbf{I})\). The final loss function we minimize uses Monte Carlo sampling to estimate the expected values, with \(\mathbf{\theta}^{(n)}\) sampled from the variational posterior \(q_{\phi}(\mathbf{\theta})\); for our specific model, the exact closed-form KL divergence between two multivariate Gaussians is also used,
\[F(D,\phi)=\frac{1}{2D}\left[-\log\det(\mathbf{\Sigma}_{q})-k+\text{tr}\left(\mathbf{\Sigma}_{q}\right)+\mathbf{\mu}_{q}^{T}\mathbf{\mu}_{q}\right]-\frac{1}{B}\sum_{i=1}^{B}\frac{1}{N}\sum_{n=1}^{N}\log P(\mathbf{y}_{i}|\mathbf{x}_{i},\mathbf{\theta}^{(n)}), \tag{10}\]
where \(B\) is the number of points of the dataset, \(k\) is the dimension of the identity matrix of the prior, and \(N\) is the number of samples we take from the variational posterior (we used \(N=10^{4}\)). One aspect that stands out in these networks is how back-propagation works. Without going into much detail, back-propagation updates \(\mathbf{\mu}_{q}\) and \(\mathbf{\Sigma}_{q}\) for each of the network's parameters (more information can be found in [49]). Once the network is trained and the best mean \(\mathbf{\mu}_{q}\) and covariance matrix \(\mathbf{\Sigma}_{q}\) are obtained for the distribution of the parameters, Eq. 2 becomes solvable and predictions are obtained using Monte Carlo estimation
\[P(\mathbf{y}^{\star}|\mathbf{x}^{\star},D)= \int_{\mathbf{\theta}}P(\mathbf{y}^{\star}|\mathbf{x}^{\star},\mathbf{\theta})q_{ \phi}(\mathbf{\theta})d\mathbf{\theta} \tag{11}\] \[= \frac{1}{N}\sum_{n=1}^{N}P(\mathbf{y}^{\star}|\mathbf{x}^{\star},\mathbf{ \theta}^{(n)}),\quad\mathbf{\theta}^{(n)}\sim q_{\phi}(\mathbf{\theta}). \tag{12}\]
The mean \(\hat{\mathbf{\mu}}\) and variance \(\hat{\mathbf{\sigma}}^{2}\) vectors of the predicting distribution \(P(\mathbf{y}^{\star}|\mathbf{x}^{\star},D)\) can be calculated, for a fixed \(\mathbf{x}^{\star}\), by applying the law of total expectation and total variance. From
\[\mathbb{E}[\mathbf{y}^{\star}|\mathbf{x}^{\star},D]=\mathbb{E}_{q_{\phi}(\mathbf{\theta})} \left[\mathbb{E}[\mathbf{y}^{\star}|\mathbf{x}^{\star},\mathbf{\theta}]\right] \tag{13}\]
and
\[\operatorname{Var}\left[\mathbf{y}^{\star}|\mathbf{x}^{\star},D\right]=\mathbb{E}_{q_{\phi}(\mathbf{\theta})}\left[\operatorname{Var}\left[\mathbf{y}^{\star}|\mathbf{x}^{\star},\mathbf{\theta}\right]\right]+\operatorname{Var}_{q_{\phi}(\mathbf{\theta})}\left[\mathbb{E}[\mathbf{y}^{\star}|\mathbf{x}^{\star},\mathbf{\theta}]\right], \tag{14}\]
we obtain
\[\hat{\mathbf{\mu}}=\frac{1}{N}\sum_{n=1}^{N}\hat{\mathbf{\mu}}_{\mathbf{\theta}_{n}}, \tag{15}\]
and
\[\hat{\mathbf{\sigma}}^{2}=\underbrace{\frac{1}{N}\sum_{n=1}^{N}\hat{\mathbf{\sigma}}_{\mathbf{\theta}_{n}}^{2}}_{\text{Aleatoric uncertainty}}+\underbrace{\frac{1}{N}\sum_{n=1}^{N}(\hat{\mathbf{\mu}}_{\mathbf{\theta}_{n}}-\hat{\mathbf{\mu}})\odot(\hat{\mathbf{\mu}}_{\mathbf{\theta}_{n}}-\hat{\mathbf{\mu}})}_{\text{Epistemic uncertainty}}, \tag{16}\]
where \(\odot\) denotes element-wise multiplication. The predicted variance captures both epistemic and aleatoric uncertainties [50].
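To make Eqs. (10), (15) and (16) concrete, the following numpy sketch (ours; the shapes and numbers are illustrative) evaluates the closed-form KL term for a diagonal Gaussian posterior against the standard-normal prior, and decomposes the Monte Carlo predictive variance into its aleatoric and epistemic parts:

```python
import numpy as np

rng = np.random.default_rng(0)

def kl_diag_gaussian_to_std_normal(mu_q, sigma_q):
    """KL( N(mu_q, diag(sigma_q^2)) || N(0, I) ), the closed form entering Eq. (10)."""
    return 0.5 * np.sum(sigma_q**2 + mu_q**2 - 1.0 - 2.0 * np.log(sigma_q))

# Suppose N weight samples theta^(n) ~ q_phi produced, for one fixed input x*,
# per-sample predictive means and standard deviations (Eq. 12):
N, out_dim = 10_000, 15
mu_theta = 0.3 + 0.05 * rng.standard_normal((N, out_dim))   # mu_hat_{theta_n}
sigma_theta = 0.02 + 0.005 * rng.random((N, out_dim))       # sigma_hat_{theta_n}

mu_hat = mu_theta.mean(axis=0)                              # Eq. (15)
aleatoric = (sigma_theta**2).mean(axis=0)                   # first term of Eq. (16)
epistemic = ((mu_theta - mu_hat)**2).mean(axis=0)           # second term of Eq. (16)
sigma_hat2 = aleatoric + epistemic                          # total predictive variance

print(kl_diag_gaussian_to_std_normal(np.zeros(4), np.ones(4)))  # 0.0 sanity check
print(mu_hat[:3], sigma_hat2[:3])
```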
## III Nuclear models
A field theoretical approach is adopted to calculate a dataset of nuclear equations of state (EoSs). The approach incorporates self-interactions and mixed meson terms within a relativistic mean field (RMF) description. A wide and reasonable region of the parameter space is considered, providing an accurate representation of presently known nuclear properties. Including non-linear terms is crucial for determining the density dependence of the EoS. In this treatment, the nucleons interact through the exchange of scalar-isoscalar mesons (\(\sigma\)), vector-isoscalar mesons (\(\omega\)), and vector-isovector mesons (\(\rho\)). The Lagrangian governing the baryonic degrees of freedom
can be expressed as follows: \(\mathcal{L}=\mathcal{L}_{N}+\mathcal{L}_{M}+\mathcal{L}_{NL}\) with
\[\mathcal{L}_{N}=\bar{\Psi}\Big{[}\gamma^{\mu}\left(i\partial_{\mu}-g_{\omega}\omega_{\mu}-g_{\varrho}\mathbf{t}\cdot\mathbf{\varrho}_{\mu}\right)-\left(m-g_{\sigma}\sigma\right)\Big{]}\Psi\]
\[\mathcal{L}_{M}=\frac{1}{2}\left[\partial_{\mu}\sigma\partial^{\mu}\sigma-m_{\sigma}^{2}\sigma^{2}\right]-\frac{1}{4}F^{(\omega)}_{\mu\nu}F^{(\omega)\mu\nu}+\frac{1}{2}m_{\omega}^{2}\omega_{\mu}\omega^{\mu}-\frac{1}{4}\mathbf{F}^{(\varrho)}_{\mu\nu}\cdot\mathbf{F}^{(\varrho)\mu\nu}+\frac{1}{2}m_{\varrho}^{2}\mathbf{\varrho}_{\mu}\cdot\mathbf{\varrho}^{\mu}\]
\[\mathcal{L}_{NL}=-\frac{1}{3}b\,m\,g_{\sigma}^{3}\sigma^{3}-\frac{1}{4}c\,g_{\sigma}^{4}\sigma^{4}+\frac{\xi}{4!}g_{\omega}^{4}(\omega_{\mu}\omega^{\mu})^{2}+\Lambda_{\omega}g_{\varrho}^{2}\mathbf{\varrho}_{\mu}\cdot\mathbf{\varrho}^{\mu}g_{\omega}^{2}\omega_{\mu}\omega^{\mu},\]

where the field \(\Psi\) denotes the Dirac spinor describing the nucleon doublet (neutron and proton) with bare mass \(m\), \(\gamma^{\mu}\) are the Dirac matrices, and \(\mathbf{t}\) is the isospin operator. The vector meson tensors are defined as \(F^{(i)}_{\mu\nu}=\partial_{\mu}A^{(i)}_{\nu}-\partial_{\nu}A^{(i)}_{\mu}\) with \(i=\omega,\varrho\). The couplings of the nucleons to the meson fields \(\sigma\), \(\omega\), and \(\varrho\) are denoted by \(g_{\sigma}\), \(g_{\omega}\), and \(g_{\varrho}\), respectively. The meson masses are given by \(m_{\sigma}\), \(m_{\omega}\), and \(m_{\varrho}\). More information on the specifics of the model can be found in [19] and references therein. The parameters \(g_{\sigma}\), \(g_{\omega}\), \(g_{\varrho}\), \(b\), \(c\), \(\xi\), and \(\Lambda_{\omega}\) are systematically sampled within a Bayesian framework, adhering to minimal constraints imposed by several nuclear saturation properties. Furthermore, these parameters are subject to the conditions that the neutron star maximum mass exceeds \(2M_{\odot}\) and that the EoS of low-density pure neutron matter is consistent with a precise N\({}^{3}\)LO calculation in chiral effective field theory. A detailed discussion of these aspects is presented in the subsequent subsection.
### The Bayesian setup
Based on observed or fitted data, a prior belief (expressed as a prior distribution) is updated using Bayesian inference. The posterior distribution is derived according to Bayes' theorem [51]. In order to establish a Bayesian parameter optimization system, four key components must be defined: the prior, the likelihood function, the fit data, and the sampler.
_The Prior:-_ A broad range of nuclear matter saturation properties is carefully covered by the prior domain of the adopted RMF model. The prior domain in our Bayesian setup is sampled using Latin hypercube sampling. Uniform priors are chosen for each parameter, as described in Table 1.
_The Fit Data:-_ The fit data, presented in Table 2, include the nuclear saturation density \(\rho_{0}\), the binding energy per nucleon \(\epsilon_{0}\), the incompressibility coefficient \(K_{0}\), and the symmetry energy \(J_{\text{sym},0}\), all evaluated at \(\rho_{0}\). Additionally, we incorporate the pressure of pure neutron matter (PNM) at densities of 0.08, 0.12, and 0.16 fm\({}^{-3}\) from N\({}^{3}\)LO calculations in chiral effective field theory (\(\chi\)EFT) [52], accounting for 2\(\times\) N\({}^{3}\)LO data uncertainty. Furthermore, the likelihood also includes the requirement of the neutron star maximum mass exceeding 2.0 \(M_{\odot}\) with uniform probability.
_The Log-Likelihood:-_ We optimize a log-likelihood function as a cost function for the given fit data in Table 2. Equation 17 represents the log-likelihood function, taking into account the uncertainties \(\sigma_{j}\) associated with each data point \(j\). The maximum mass of neutron stars is treated differently, using a step-function probability,
\[Log(\mathcal{L})\propto-\sum_{j}\left\{\left(\frac{d_{j}-m_{j}(\mathbf{\theta})}{ \sigma_{j}}\right)^{2}+Log(2\pi\sigma_{j}^{2})\right\}. \tag{17}\]
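A minimal sketch of Eq. (17) and of the step-function treatment of the maximum mass (ours; the fit values are the central values of Table 2, and the model outputs are hypothetical):

```python
import numpy as np

def log_likelihood(d, m_theta, sigma):
    """Gaussian log-likelihood of Eq. (17), up to an additive constant."""
    d, m_theta, sigma = map(np.asarray, (d, m_theta, sigma))
    return -np.sum(((d - m_theta) / sigma)**2 + np.log(2.0 * np.pi * sigma**2))

# Fit data (rho0 [fm^-3], eps0 [MeV], K0 [MeV], Jsym0 [MeV]) and 1-sigma errors:
d     = np.array([0.153, -16.1, 230.0, 32.5])
sigma = np.array([0.005,   0.2,  40.0,  1.8])
m_theta = np.array([0.151, -16.0, 245.0, 31.8])   # hypothetical RMF-model outputs

logL = log_likelihood(d, m_theta, sigma)
M_max = 2.06                                      # hypothetical maximum mass [M_sun]
if M_max <= 2.0:                                  # step-function probability
    logL = -np.inf
print(logL)
```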
To populate the six-dimensional posterior, we employ the nested sampling algorithm [53], specifically the PyMultinest sampler [54; 55], which is well-suited for low-dimensional problems. The EoS dataset for subsequent analyses will be generated using the full posterior, which contains 25287 EoS.
\begin{table}
\begin{tabular}{l l c c} \hline \hline & \multicolumn{3}{c}{Constraints} \\ & Quantity & Value/Band & Ref \\ \hline & \(\rho_{0}\) [fm\({}^{-3}\)] & \(0.153\pm 0.005\) & [56] \\ & \(\epsilon_{0}\) [MeV] & \(-16.1\pm 0.2\) & [57] \\ & \(K_{0}\) [MeV] & \(230\pm 40\) & [58; 59] \\ & \(J_{\text{sym},0}\) [MeV] & \(32.5\pm 1.8\) & [60] \\ PNM & \(P(\rho)\) [MeV fm\({}^{-3}\)] & \(2\times\) N\({}^{3}\)LO & [52] \\ & \(dP/d\rho\) & \(>0\) & \\ & \(M_{\text{max}}\) [\(M_{\odot}\)] & \(>2.0\) & [5] \\ \hline \end{tabular}
\end{table}
Table 2: The Bayesian inference imposes constraints on various quantities to generate sets of models. These constraints include the binding energy per nucleon \(\epsilon_{0}\), incompressibility \(K_{0}\), and symmetry energy \(J_{\text{sym},0}\) at the nuclear saturation density \(\rho_{0}\), each with a 1\(\sigma\) uncertainty. Additionally, the pressure of pure neutron matter (PNM) is considered at densities of 0.08, 0.12, and 0.16 fm\({}^{-3}\), obtained from a \(\chi\)EFT calculation [52]. The likelihood incorporates a 2\(\times\) N\({}^{3}\)LO uncertainty for the PNM pressure, noting that it increases with density. Furthermore, the maximum mass of neutron stars is constrained to be above \(2M_{\odot}\).
\begin{table}
\begin{tabular}{c c c c} \hline \hline & & \multicolumn{2}{c}{_Set 0_} \\ \cline{3-4} No & Parameters & min & max \\ \hline
1 & \(g_{\sigma}\) & 6.5 & 15.5 \\
2 & \(g_{\omega}\) & 6.5 & 15.5 \\
3 & \(g_{\varrho}\) & 6.5 & 16.5 \\
4 & \(B\) & 0.5 & 9.0 \\
5 & \(C\) & -5.0 & 5.0 \\
6 & \(\xi\) & 0.0 & 0.04 \\
7 & \(\Lambda_{\omega}\) & 0 & 0.12 \\ \hline \end{tabular}
\end{table}
Table 1: We use a uniform prior range for the parameters of the RMF models. Specifically, B and C are \(b\times 10^{3}\) and \(c\times 10^{3}\), respectively. Distribution minimums and maximums are indicated by ’min’ and ’max’ respectively.
## IV Dataset
Our goal is to train BNNs models to predict the speed of sound and proton fraction of NS matter from a given set of NS mock observations. To understand the effect of different NS properties on the prediction uncertainty, we generate different datasets using the 25287 EoS that were obtained through the Bayesian analysis formalism (see Sec. III.1).
### Structure
Our BNN model, \(P(\mathbf{Y}|\mathbf{X},\mathbf{\theta})\), attributes a probability distribution to \(\mathbf{Y}\) (output space) given a set of NS mock observations \(\mathbf{X}\) (input space), where \(\mathbf{Y}\) denotes the different NS matter properties under study, i.e., the speed of sound \(\mathbf{v_{s}^{2}}(\mathbf{n})\) and proton fraction \(\mathbf{y_{p}}(\mathbf{n})\). We have chosen to characterize each element of \(\mathbf{Y}\) at \(15\) fixed baryonic densities \(n_{k}\), e.g., \(\mathbf{y_{p}}(\mathbf{n})=[y_{p}(n_{1}),y_{p}(n_{2}),...,y_{p}(n_{15})]\). The density points are equally spaced between \(n_{1}=0.15\) fm\({}^{-3}\) and \(n_{15}=1.0\) fm\({}^{-3}\), \(n_{k}=\{0.15,0.21,0.27,...,1.0\}\) fm\({}^{-3}\). The number of points (\(N_{Y}=15\)) was selected as a trade-off between computational training time and the interpolation accuracy, i.e. low residuals between the interpolation and the real values. We illustrate this discretization process of the output space elements for the proton fraction in Fig. 1.
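For concreteness, a small sketch (ours) of this output-space discretization, with a smooth stand-in for a tabulated \(y_{p}(n)\) curve:

```python
import numpy as np

n_grid = np.linspace(0.15, 1.0, 15)      # n_k = 0.15, 0.21, 0.27, ..., 1.0 fm^-3

# Hypothetical tabulated proton fraction from one EoS solution:
n_tab = np.linspace(0.10, 1.20, 200)
yp_tab = 0.02 + 0.12 * np.tanh(2.0 * (n_tab - 0.2))   # stand-in for a real y_p(n)

Y = np.interp(n_grid, n_tab, yp_tab)     # the 15-dimensional target vector y_p(n_k)
print(np.round(n_grid[:4], 3), np.round(Y[:4], 4))
```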
Regarding the structure of the input space \(\mathbf{X}\), two different element structures are studied: i) \(\mathbf{X}=[M_{1},...,M_{5},R_{1},...,R_{5}]\) corresponding to five \(M_{i}(R_{i})\) simulated observations and ii) \(\mathbf{X}=[M_{1},...,M_{5},R_{1},...,R_{5},M_{1}^{{}^{\prime}},...,M_{5}^{{}^{ \prime}},\Lambda_{1},....,\Lambda_{5}]\) corresponding to five \(M_{i}(R_{i})\) and five \(\Lambda_{j}(M_{j}^{{}^{\prime}})\) simulated observations. In summary, the output elements \(\mathbf{Y}_{i}\) of our datasets are specified by 15-dimensional vectors and the input space elements \(\mathbf{X}_{i}\) by 10-dimensional or 20-dimensional vectors, depending on the dataset type under study. The statistical procedure for generating the different synthetic observational datasets is presented in the following.
### Generation
The first step of the generation of the datasets consists in randomly splitting the total number of EoS into train and test sets in a proportion of 80%/20%, i.e., the train set contains 22758 EoS while the test set has 2529 EoS. Secondly, we generate two types of datasets that share the \(\mathbf{Y}_{i}\) structure but differ in the \(\mathbf{X}_{i}\) structure:
\[\mathbf{X_{i}} =[M_{1},...,M_{5},R_{1},...,R_{5}]\] \[\mathbf{X_{i}^{{}^{\prime}}} =[M_{1},...,M_{5},R_{1},...,R_{5},M_{1}^{{}^{\prime}},...,M_{5}^{ {}^{\prime}},\Lambda_{1},....,\Lambda_{5}].\]
These different structures allow us to compare and assess how informative the tidal deformability is for the model predictions. The statistical generating procedure is composed of the following steps. For each EoS, we randomly select 5 NS mass values, \(M_{i}^{(0)}\), from a uniform distribution between \(1.0M_{\odot}\) and \(M_{\text{max}}\). Then, the radius \(R_{i}\) is sampled from a Gaussian distribution centered at the TOV solution, denoted as \(R(M_{i}^{(0)})\), with a standard deviation of \(\sigma_{R}\). Finally, we sample the final NS mass from a Gaussian distribution centered at \(M_{i}^{(0)}\) with a standard deviation of \(\sigma_{M}\). The above process can be summarized in the following equations:
\[M_{i}^{(0)} \sim\mathcal{U}[1,M_{\text{max}}]\quad(\text{in units of M}_{\odot}) \tag{18}\] \[R_{i} \sim\mathcal{N}\left(R\left(M_{i}^{(0)}\right),\sigma_{R}^{2}\right)\] (19) \[M_{i} \sim\mathcal{N}\left(M_{i}^{(0)},\sigma_{M}^{2}\right),\quad i=1,...,5 \tag{20}\]
The final generated elements consist of \(\mathbf{X_{i}}=[M_{1},...,M_{5},R_{1},...,R_{5}]\), and each one is a possible realization (_observation_) of the \(M(R)\) diagram of the specific EoS. This procedure is similar to the one used in [25], where Gaussian noise was applied to 15 values of the \(M(R)\) curve, shifting them from the original mass-radius curve: \(M_{i}=M_{i}^{(0)}+\mathcal{N}\left(0,\sigma_{M}^{2}\right)\) and \(R_{i}=R\left(M_{i}^{(0)}\right)+\mathcal{N}\left(0,\sigma_{R}^{2}\right)\), for \(i=1,...,15\). The second kind of dataset includes the tidal deformability and has the additional steps
\[M_{j}^{{}^{\prime}} \sim\mathcal{U}[1,M_{\text{max}}]\quad(\text{in units of M}_{\odot}) \tag{21}\] \[\Lambda_{j} \sim\mathcal{N}\left(\Lambda(M_{j}^{{}^{\prime}}),\sigma_{ \Lambda}^{2}(M_{j}^{{}^{\prime}})\right)\quad j=1,...,5, \tag{22}\]
where \(\Lambda(M_{j}^{{}^{\prime}})\) is given by the \(\Lambda(M)\) relation of the specific EoS and \(\sigma_{\Lambda}(M_{j}^{{}^{\prime}})\) describes an overall dispersion around the mean value. Sampled values with \(\Lambda<0\) were discarded. The functional form \(\sigma_{\Lambda}(M_{j}^{{}^{\prime}})\) should reflect our expectation of the mock observational uncertainty as a function of the NS mass. In the present work, we considered \(\sigma_{\Lambda}(M_{j}^{{}^{\prime}})=\text{constant}\times\hat{\sigma}(M_{j}^{{}^{\prime}})\), where \(\hat{\sigma}(M_{j}^{{}^{\prime}})\) is the standard deviation of \(\Lambda(M)\) determined from the EoS dataset. The generated point is \(\mathbf{X_{i}}=[M_{1},...,M_{5},R_{1},...,R_{5},M_{1}^{{}^{\prime}},...,M_{5}^{{}^{\prime}},\Lambda_{1},....,\Lambda_{5}]\) (a similar approach can be found in [37]). In the above procedures, there is an additional parameter, which we denote by \(n_{\text{s}}\), that specifies the number of mock observations for
each EoS, i.e., the number of times the above procedures are applied to each EoS. For instance, choosing \(n_{s}=20\) would mean running the above procedures 20 times for each EoS (20 _observations_), and thus obtaining \(\{\mathbf{X_{1}},\mathbf{X_{2}},...,\mathbf{X_{20}}\}\).
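The sampling procedure of Eqs. (18)-(22) translates directly into code. The sketch below (ours) generates a single mock observation \(\mathbf{X_{i}}\) for one EoS; `R_of_M`, `Lambda_of_M` and `sigma_hat_of_M` are smooth stand-ins for the interpolated TOV solutions and the train-set dispersion of that EoS (the first 10 entries alone correspond to the 10-dimensional inputs of sets 1 and 2):

```python
import numpy as np

rng = np.random.default_rng(42)

def mock_observation(R_of_M, Lambda_of_M, sigma_hat_of_M, M_max,
                     sigma_M=0.1, sigma_R=0.3, lam_scale=0.5):
    # Eqs. (18)-(20): five (M_i, R_i) mock observations
    M0 = rng.uniform(1.0, M_max, size=5)
    R = rng.normal(R_of_M(M0), sigma_R)
    M = rng.normal(M0, sigma_M)
    # Eqs. (21)-(22): five (M'_j, Lambda_j) mock observations
    Mp = rng.uniform(1.0, M_max, size=5)
    Lam = rng.normal(Lambda_of_M(Mp), lam_scale * sigma_hat_of_M(Mp))
    while np.any(Lam < 0):                      # discard negative samples and redraw
        bad = Lam < 0
        Lam[bad] = rng.normal(Lambda_of_M(Mp[bad]),
                              lam_scale * sigma_hat_of_M(Mp[bad]))
    return np.concatenate([M, R, Mp, Lam])      # the 20-dimensional X_i

X_i = mock_observation(R_of_M=lambda M: 13.0 - 0.8 * (M - 1.0),
                       Lambda_of_M=lambda M: 1000.0 * np.exp(-2.2 * (M - 1.0)),
                       sigma_hat_of_M=lambda M: 150.0 * np.exp(-1.5 * (M - 1.0)),
                       M_max=2.2)
print(X_i.shape)   # (20,)
```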
Applying the above formalism to both the train and test sets, we have generated a total of 4 datasets whose properties are displayed in Table 3. Sets 1 and 2 only contain information about the NS radii (the input space \(\mathbf{X}\) is 10-dimensional), while sets 3 and 4 also include the tidal deformability (the input space \(\mathbf{X}\) is 20-dimensional). The analysis of sets 1 and 2 allows us to understand how a decrease in the spread of the mock observations around the TOV solution affects the predictions and uncertainties. In the same manner, sets 3 and 4 aim to capture possible effects on the model predictions arising from an increased scatter of the simulated tidal-deformability observations around their mean value. We use 60 mock observations, \(n_{s}=60\), in the training sets for each EoS, while \(n_{s}=1\) was employed for the test sets. This key difference tries to simulate a real-case scenario in which we only have access to a single mock observation of the _true_ EoS. Here, by single mock observation, we mean \(n_{s}=1\), which corresponds to five \(M_{i}(R_{i})\) mock observations (sets 1 and 2) or five \(M_{i}(R_{i})\) mock observations and five \(\Lambda_{j}(M_{j})\) mock observations (sets 3 and 4). To illustrate the dataset generation, Fig. 2 displays the 60 mock observations for two distinct EoSs that belong to the generated datasets (datasets 1 and 2 in the left and right panels, respectively). Note that, for each EoS, there are 300 points in the \(M(R)\) diagram, i.e., 60 simulated observations of 5 NS mock measurements each. The increase of \(\sigma_{R}\) from dataset 1 to 2 is clear, highlighting the differences between the two datasets. Furthermore, Figure 3 depicts the tidal deformability values for datasets 3 and 4.
### Training procedure
To assess the response of Bayesian Neural Networks (BNNs) to varying input noises and output targets, we have conducted experiments involving the training of diverse functional and stochastic models (as explained in Section II). These BNN models were trained using the distinct datasets generated as outlined in Section IV.2. During the training stage, a subset of the training data was randomly set aside for validation, i.e., the training data was split into 80% for actual training and 20% for validation. Moreover, the input data \(\mathbf{X}\) was standardized.
Defining the functional models involves adjusting the number of neurons, layers, and activation functions. Table 4 shows the best functional model for each dataset of Table 3. For the hidden layers, we explored hyperbolic tangent, softplus, and sigmoid activation functions, while utilizing a linear activation function for the output layer. As the input vector sizes differ (10 for sets 1 and 2, and 20 for sets 3 and 4), we employ more neurons per layer for the larger input spaces, due to the increased complexity demanded by a greater number of parameters. Specifically, we considered 10 and 15 neurons per hidden layer for sets 1 and 2, and 20 and 25 for sets 3 and 4. The output layer consistently contains 30 neurons, with 15 representing the means and 15 the standard deviations of the output probability distribution. We deliberately excluded correlations from the output layer due to the inferior performance observed when attempting to incorporate them; as a result, the output layer solely captures the mean and standard deviation of the output distribution. The architectures explored involved two or three hidden layers, with a consistent number of neurons in each hidden layer that varied with the size of the input vector, as explained earlier. During the grid search, we systematically explored four different architectures for each output, narrowing our focus to datasets 1 and 3, as we specifically aimed at identifying the most suitable architecture for the two different input sizes. The best outcomes were obtained by employing the sigmoid activation function in the hidden layers, ensuring minimal loss and preventing divergence. Across all eight dataset configurations, i.e., two outputs (\(v_{s}^{2}\) and \(y_{p}\)) and four datasets (see Table 3), the optimal number of hidden layers was found to be two. For sets 1 and 2, the best performance was achieved with 15 neurons in each hidden layer, while for sets 3 and 4, 25 neurons were utilized in each hidden layer. Detailed information on these configurations can be found in Table 4 for the two output variables. During training, we employ a learning rate of 0.001 and the ADAM optimizer [61] with the AMSgrad improvement [62]. The models are trained for 4000 epochs, with a mini-batch size of 768.

\begin{table}
\begin{tabular}{c c c c} \hline \hline Dataset & \(\sigma_{M}\) \([M_{\odot}]\) & \(\sigma_{R}\) [km] & \(\sigma_{\Lambda}(M_{j})\) \\ \hline 1 & 0.05 & 0.15 & — \\ 2 & 0.1 & 0.3 & — \\ 3 & 0.1 & 0.3 & 0.5\(\hat{\sigma}(M_{j})\) \\ 4 & 0.1 & 0.3 & 2\(\hat{\sigma}(M_{j})\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Generation parameters for each dataset. \(\hat{\sigma}(M_{j})\) denotes the standard deviation of \(\Lambda(M)\) calculated on the train set.

Figure 2: The \(n_{s}=60\) mock observations generated for two EoSs in dataset 1 (left) and dataset 2 (right). The grey area represents the extremes of our EoS dataset. The two EoS coincide with the ones used in Fig. 1.

Figure 3: The \(n_{s}=60\) mock observations generated in the \(\Lambda-M\) diagram for two EoSs in dataset 3 (left) and dataset 4 (right). The grey area represents the extremes of our EoS dataset. The two EoS coincide with the ones used in Fig. 1.
Regarding the stochastic model, we adopt a Gaussian prior with mean zero and standard deviation one, as mentioned in Section II.1. While this prior choice lacks a specific theoretical justification, it serves as a reasonable default, as discussed in [48]. Future research could investigate the impact of the prior parameters further, similar to the approach taken in reference [46]. Additionally, we select a multivariate normal distribution as the variational posterior, as explained in Section II.1, initialized with mean 0 and a diagonal covariance matrix whose standard deviation equals \(\log(1+\exp 0)=0.693\). Furthermore, we opt for a deterministic output layer instead of a probabilistic one, as it has demonstrated improved results in our experiments; the deterministic output layer aligns better with the requirements of our model architecture and the nature of the problem we are addressing. All BNNs models were coded using the TensorFlow library [63]; more specifically, we use Keras [64], a high-level API of TensorFlow.
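As an illustration of the functional model of Table 4 (our sketch, not the authors' code), here is a deterministic Keras baseline with two sigmoid hidden layers and a 30-neuron linear output split into 15 means and 15 standard deviations, trained with the Gaussian negative log-likelihood; in the actual BNN the two hidden Dense layers are replaced by variational layers carrying the Gaussian prior and posterior described above, which adds the KL term of Eq. (10) to this loss:

```python
import tensorflow as tf

def make_model(input_dim: int, hidden: int, out_dim: int = 15):
    x_in = tf.keras.Input(shape=(input_dim,))
    h = tf.keras.layers.Dense(hidden, activation="sigmoid")(x_in)
    h = tf.keras.layers.Dense(hidden, activation="sigmoid")(h)
    out = tf.keras.layers.Dense(2 * out_dim, activation="linear")(h)  # 15 means + 15 stds
    return tf.keras.Model(x_in, out)

def gaussian_nll(y_true, y_pred):
    mu, raw = tf.split(y_pred, 2, axis=-1)
    sigma = tf.nn.softplus(raw) + 1e-6          # enforce positive standard deviations
    return tf.reduce_mean(0.5 * ((y_true - mu) / sigma)**2 + tf.math.log(sigma))

model = make_model(input_dim=20, hidden=25)     # sets 3 and 4; use (10, 15) for sets 1 and 2
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3, amsgrad=True),
              loss=gaussian_nll)
# model.fit(X_train, Y_train, epochs=4000, batch_size=768, validation_split=0.2)
```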
## V Results: neutron star matter properties
In the following, we discuss the results for the speed of sound squared \(v_{s}^{2}(n)\) and the proton fraction \(y_{p}(n)\). To analyze how the observational uncertainty in \(R\) and \(\Lambda\) affects the model confidence, we compare the results from the different models of Tab. 4, which were trained on the different datasets presented in Tab. 3. For the sake of simplicity, whenever we want to distinguish the 8 BNNs models, we will hereafter refer to their training (data)sets, 1 to 4, and predicted quantity, \(v_{s}^{2}(n)\) or \(y_{p}(n)\). For instance, by the results of dataset 2 for \(y_{p}(n)\) we mean the BNN model trained on dataset 2 that has the proton fraction as target quantity.
### Speed of sound
Let us start by illustrating the different BNNs predictions for the speed of sound squared on a randomly selected EoS from the test set. The results are displayed in Figure 4, where the top panel shows the models of sets 1 (blue) and 2 (orange), while the bottom panel shows those of sets 3 (purple) and 4 (green). Model predictions are computed using Eq. 2, from which we show the mean values (solid lines) and \(2\sigma\) regions (color regions). The upper plot clearly shows that the prediction uncertainty, characterized by the distribution's standard deviation \(\sigma\), is smaller on set 1 than on set 2, and, most importantly, the predicted mean values are close to the real values. The same pattern is seen in the lower figure: the set 3 model (purple) has a lower prediction variance than set 4. While a deeper understanding of the overall behaviour requires analyzing the whole test set, Fig. 4 already indicates that the BNNs models are able to capture the characteristics of the different datasets (see Table 3): the increased dispersion of the NS mock observations around the true values translates into a larger uncertainty when inferring the corresponding EoS properties.
To investigate how the model predictions behave over the entire test set, we define the normalized prediction residuals as \(\Gamma(n_{k})=\left(v_{s}^{2}(n_{k})-v_{s}^{2}(n_{k})^{\text{true}}\right)/ \sigma(n_{k})\) and the dispersion as \(\Sigma(n_{k})=\sigma(n_{k})\) at each of the prediction densities, i.e., \(k=1,...,15\). Summary statistics of both quantities over all EoS of the test set are shown in Fig. 5. The distribution of \(\Gamma(n)\) (top panel), for all 4 datasets, has 50% of its values near zero, indicating that the median of the predictions is unbiased. Furthermore, the 2.3% and 97.7% levels of the cumulative percentage lie at approximately \(-2\sigma\) and \(+2\sigma\), respectively, indicating that the prediction mean deviates from the true value by less than \(2\sigma\) in 95% of the cases. The fact that the distribution properties of \(\Gamma(n)\) are similar across all datasets and independent of the density reveals that the BNN models correctly model the dispersion of the predictions relative to the corresponding mean residuals at each density value. The value of \(\Sigma(n)\) (bottom panel) grows with increasing density, reflecting the training set statistics: the EoS dataset was
\begin{table}
\begin{tabular}{c|c|c|c} \multirow{2}{*}{**Layers**} & \multirow{2}{*}{**Activation**} & \multicolumn{2}{c}{**Neurons**} \\ \cline{3-4} & & **Dataset 1 \& 2** & **Dataset 3 \& 4** \\ \hline Input & N/A & 10 & 20 \\ \hline Hidden Layer 1 & Sigmoid & 15 & 25 \\ \hline Hidden Layer 2 & Sigmoid & 15 & 25 \\ \hline Output & Linear & 30 & 30 \\ \hline \end{tabular}
\end{table}
Table 4: Structures of the final BNN models. The \(v_{s}^{2}(n)\) and \(y_{p}(n)\) models have the same structure.
generated by Bayesian inference where saturation properties were imposed, leading to a wider uncertainty at higher densities, while low density regions are strongly constrained. From the \(\Sigma(n)\) plot, we see that the whole distribution of the BNN model trained on set 1 (blue line) shifts to lower values, particularly at the \(2.3\%\) and \(50\%\) percentiles, showing that there is a considerable decrease in uncertainty when the dispersion of NS mock observations is reduced by a factor of two, from (\(\sigma_{M}=0.1M_{\odot},\sigma_{R}=0.3\) km) to (\(\sigma_{M}=0.05M_{\odot},\sigma_{R}=0.15\) km) (see Table 3).
To estimate the overall performance of the different BNN models, we show the coverage probability of each model in Fig. 6. The coverage probability quantifies how well the model captures the distribution of the data: it checks whether the fraction of true values contained within the \(1\sigma\) interval of the predictive distribution, i.e., the number of values in that interval divided by the total number of test values, corresponds to the expected 68%. The same check is repeated for \(2\sigma\) (95%) and \(3\sigma\) (99.7%). This was implemented independently for each of the 15 output densities, yielding the respective coverage probabilities at each density, and we then averaged these 15 coverage probabilities; the results for the 4 sets are shown in Fig. 6. Overall, the models estimate the data distribution correctly, since the bars lie very close to the three reference percentages. A small deviation is visible at the 68% level, where all 4 sets overestimate the uncertainty of the results, meaning that we have a fraction
Figure 4: The BNNs predictions for \(v_{s}^{2}(n)\) using one EoS of the test. The models trained on datasets 1 (blue) and 2 (orange) are in the upper figure while datasets 3 (purple) and 4 (green) models are in the lower figure. The prediction mean values (solid lines) and \(2\sigma\) confidence intervals are shown. The true values are shown in black dots and the range of \(v_{s}^{2}(n)\) from the train set is indicated by the grey region.
of data larger than 68% within \(1\sigma\) of the model, so the model's \(\sigma\) should be slightly smaller; this effect is most pronounced for set 3.
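As an illustration, the normalized residuals \(\Gamma\) and the \(1\sigma/2\sigma/3\sigma\) coverage probabilities described above can be computed along the following lines (a sketch with hypothetical array names, assuming predicted means and standard deviations are available for the whole test set):

```python
import numpy as np

def coverage_statistics(mu, sigma, y_true):
    """mu, sigma, y_true: arrays of shape (n_eos, 15), one column per density n_k."""
    gamma = (mu - y_true) / sigma                 # normalized residuals Gamma(n_k)
    coverage = {}
    for m in (1, 2, 3):                           # 1, 2 and 3 sigma intervals
        inside = np.abs(gamma) <= m               # boolean, shape (n_eos, 15)
        # coverage per density, then averaged over the 15 densities (Fig. 6)
        coverage[m] = 100.0 * inside.mean(axis=0).mean()
    return gamma, coverage
```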
To quantify how the increase in the mock observational scattering of \(\left(R,\Lambda\right)\) affects the model prediction uncertainties, let us define the following quantity
\[\eta\left[a,b\right]\left(n_{k}\right)=\frac{1}{T}\left(\sum_{i=1}^{T}\frac{\sigma_{i}^{a}(n_{k})-\sigma_{i}^{b}(n_{k})}{\sigma_{i}^{b}(n_{k})}\right)\times 100, \tag{23}\]
where \(T\) is the total number of EoS in the test set, and \(k=1,...,15\). This quantity measures the percentage uncertainty deviation of model \(a\) relative to model \(b\) at density \(n_{k}\). Figure 7 shows the results for 5 different comparisons. The first conclusion, looking at the result for \(\eta\left[2,1\right]\) (cyan), is that the prediction uncertainty increases when going from the BNN model of dataset 1 to the one of dataset 2. Furthermore, \(\eta\left[2,1\right]\) reaches its maximum value of 20% at \(n_{4}=0.33\) fm\({}^{-3}\). In other words, when the (synthetic) observational data scattering doubles, from \(\left(\sigma_{M}=0.05M_{\odot},\sigma_{R}=0.15\text{ km}\right)\) to \(\left(\sigma_{M}=0.1M_{\odot},\sigma_{R}=0.3\text{ km}\right)\), the uncertainty increases by 4.9-20%, with the largest value at the densities where the NS radius, \(R(M)\), and \(v_{s}^{2}(n)\) have the highest correlation (see Annex A for details and ref. [65]). The region of larger sensitivity to the uncertainty of the mock observational data coincides with the density interval where the speed of sound increases steadily and, in many agnostic approaches, attains a maximum followed by a decrease or flattening at larger densities [66; 67; 68]. The nucleonic EoS that has been used to train the model also shows similar behavior for the speed of sound [19]. This behavior of the speed of sound for densities below three times saturation density is dictated by the two-solar-mass constraint.
The second important conclusion is the impact of adding the \(\Lambda\) information on the inference properties. This point becomes clear when analysing the dependence and values of \(\eta\left[3,2\right]\) (the difference between datasets 2 and 3 is that the latter contains information on the tidal deformability, see Tab. 3). The negative values reflect the fact that the prediction uncertainty decreases when the tidal information is added to the training procedure - the tidal deformability is informative of the \(v_{s}^{2}(n)\) of neutron star matter. The maximum uncertainty decrease is 7%, occurring at \(n_{6}=0.45\) fm\({}^{-3}\). Similarly, \(\eta\left[4,2\right]\) (blue) also exhibits negative values throughout, indicating a decrease in uncertainty compared to dataset 2. However, beyond the seventh density, the two values become very close to each other, suggesting that no additional information is gained once more dispersion is included on the tidal deformability. Looking at \(\eta\left[4,3\right]\) and comparing it with \(\eta\left[2,1\right]\), we see that they differ very little at the first density, but from there on \(\eta\left[2,1\right]\) is almost always more than 5 times larger. An important consideration here is the proportion of input values being altered: between datasets 3 and 4 only the uncertainty of 5 of the 20 input quantities changes, i.e., a quarter of the input vector, whereas between datasets 1 and 2 all input values change. One could therefore anticipate at least a four-fold increase in \(\eta\) for datasets 1 and 2; however, the observed ratio is most of the time even larger. This implies that the dispersion of the mass-radius pairs has a more significant impact on the model than that of the mass-tidal-deformability pairs. By acknowledging this difference in the proportion of modified input values, we gain insight into how the model's response to dispersion changes is shaped.
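For reference, Eq. 23 translates directly into a few lines of code; `sigma_a` and `sigma_b` below are hypothetical arrays holding the prediction standard deviations of models \(a\) and \(b\) over the test set, so that, e.g., \(\eta[2,1]>0\) when model 2's uncertainty exceeds model 1's:

```python
import numpy as np

def eta(sigma_a, sigma_b):
    """Percentage uncertainty deviation eta[a, b](n_k) of Eq. 23.

    sigma_a, sigma_b: arrays of shape (T, 15) with the prediction standard
    deviations of models a and b for the T test EoS at the densities n_k.
    """
    return 100.0 * np.mean((sigma_a - sigma_b) / sigma_b, axis=0)  # shape (15,)
```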
### Proton fraction
Let us now analyze the model's predictions for the proton fraction \(y_{p}(n)\). Using a specific EoS from the test dataset (for illustration purposes), we show in Fig. 8 the models' prediction for each dataset: 1 (blue) and 2 (orange) on the upper
Figure 6: Coverage probability calculated on the test set of the \(v_{s}^{2}(n)\) BNNs models.
Figure 7: Prediction uncertainty deviation \(\eta\left[a,b\right]\) between the \(v_{s}^{2}(n)\) BNN models \(a\) and \(b\) (see text for details).
panel and 3 (purple) and 4 (green) in the lower panel. The range of \(y_{p}(n)\) from the train set is indicated by the grey region and the dashed grey line displays the \(99.9\%\) data/probability percentile. The training statistics are shown to point out that the upper region, between the \(99.9\%\) probability line and the maximum boundary, is caused by the presence of just one _extreme_ EoS.
The conclusion drawn from Fig. 8 is similar to the \(v_{s}^{2}(n)\) results (see Fig. 4): it is evident that dataset 1 exhibits the narrowest prediction uncertainty, whereas dataset 2 demonstrates a considerable increase in uncertainty; the prediction uncertainties are similar for datasets 3 and 4. Definite conclusions require some statistics over a set of EoS, and, for that purpose, we are going to analyze the whole test set once again.
Figure 9 shows the following quantities over the four datasets: the model residuals \(\delta(n_{k})=y_{p}(n_{k})-y_{p}(n_{k})^{\text{true}}\) (left), the standard deviation \(\Sigma(n_{k})=\sigma(n_{k})\) (center), and the normalized model residuals \(\Gamma(n_{k})=\left(y_{p}(n_{k})-y_{p}(n_{k})^{\text{true}}\right)/\sigma(n_{k})\) (right). We calculate these quantities, at each density \(n_{k}\), for each of the 2529 EoS of the test set. The model has a broader residual spread around 0.4-0.5 fm\({}^{-3}\) that is counterbalanced by larger standard deviation values in this region - a direct conclusion from the steadiness of the normalized model residuals. In other words, the model correctly captures the data statistics: the BNN models yield larger prediction uncertainties in regions where \(y_{p}(n)\) has a larger dispersion, as expected. The overall quality of the model predictions is seen in the density independence of the \(\Gamma(n)\) statistics (right panel); the model residuals lie within \(2\sigma\) 95% of the time. The training-set statistics in Fig. 10 help to interpret the behaviour of \(\delta(n)\) and particularly of \(\Sigma(n)\): \(\sigma(n)\) of the training set is non-monotonic, reaching a maximum around 0.5 fm\({}^{-3}\) and decreasing for lower and larger densities. The coverage probabilities for the \(y_{p}(n)\) models are similar to the ones obtained for \(v_{s}^{2}(n)\) (see Fig. 6), and thus the models correctly estimate the distribution of the test set data.
The dataset comparison for \(\eta\left[a,b\right](n)\) (see Eq. 23) is displayed in Fig. 11. Firstly, we observe that \(\eta\left[2,1\right]\) exhibits a behavior similar to \(v_{s}^{2}\), although it reaches its maximum value earlier, at \(n_{3}\). Interestingly, this is precisely where \(R(M)\) and \(y_{p}(n)\) show the highest correlation, as explained in Annex A. When comparing \(\eta[2,1]\) with \(\eta[4,3]\), the behaviour is consistent with the one observed for the speed of sound, albeit this time even more pronounced, which again traces back to the correlation, which is almost zero between \(\Lambda(M)\) and \(y_{p}(n)\). Based on this observation, we can speculate that \(\Lambda\) does not contribute significant information to the model, especially since the ratio between \(\eta[4,3]\) and \(\eta[2,1]\) deviates even further from the expected one-quarter proportion, arising from the change in the input vector, than it did for \(v_{s}^{2}\). Increasing the model's complexity without adding substantial information can lead to increased confusion and greater uncertainty, as evident in \(\eta[3,2]\) and \(\eta[4,2]\). These two cases exhibit positive values of \(\eta\), indicating higher uncertainty for sets 3 and 4 when compared with set 2.
### BNNs epistemic and aleatoric uncertainties
Let us briefly analyze how the prediction uncertainty of BNNs is modelled and decomposed into its different components. We focus on the \(y_{p}(n)\) BNN models; however, all the features discussed below are also seen for the \(v_{s}^{2}(n)\) models. We have seen in Eq. 16 of Sec. II that the prediction variance \(\hat{\mathbf{\sigma}}^{2}\) is a combination of two terms,
\[\hat{\mathbf{\sigma}}^{2}=\hat{\mathbf{\sigma}}^{2}_{\text{alea}}+\hat{\mathbf{\sigma}}^ {2}_{\text{epist}},\]
Figure 8: The BNNs predictions for \(y_{p}(n)\) using one EoS of the test. The models trained on datasets 1 (blue) and 2 (orange) are in the upper figure while datasets 3 (purple) and 4 (green) models are in the lower figure. The predicted mean values (solid lines) and \(2\sigma\) confidence intervals are shown. The true values are shown in black dots.
where the aleatoric uncertainty \(\hat{\mathbf{\sigma}}_{\text{alea}}^{2}\) measures the mean variance of the models' ensemble, while the epistemic uncertainty \(\hat{\mathbf{\sigma}}_{\text{epist}}^{2}\) measures the spread of the models around the ensemble mean \(\hat{\mathbf{\mu}}\). The epistemic uncertainty arises from limited information or data and is encoded in the posterior probability \(P(\mathbf{\theta}|D)\), i.e., the model distribution. On the other hand, aleatoric uncertainty is due to the inherent randomness of the dataset and is encoded in the data likelihood \(P(\mathbf{y}^{\star}|\mathbf{x}^{\star},\mathbf{\theta})\). While the epistemic uncertainty decreases when more data is available, the aleatoric uncertainty does not depend on the amount of data, as it is a property of the data-generating process.
To analyze the proportions of both uncertainty types in the total prediction variance, we calculate the epistemic percentage as \(f_{\text{epist}}=(\hat{\mathbf{\sigma}}_{\text{epist}}^{2}/\hat{\mathbf{\sigma}}^{2})\times 100\%\). Using the BNN models for \(y_{p}(n)\), we show in the left panel of Fig. 12 the mean (dashed lines) and 68% confidence interval region (colored regions) of \(f_{\text{epist}}\) across the entire test set, for sets 2 and 3. Additionally, the right panel shows \(f_{\text{epist}}\) using BNN models trained on set 1 but with different numbers of mock observations: \(n_{s}=20\) and \(n_{s}=60\) (the value used in this work). The following conclusions can be drawn: i) the prediction variance \(\hat{\mathbf{\sigma}}^{2}\) is composed mainly of aleatoric uncertainty (left panel), around 95%, which is due to the already high number of mock observations \(n_{s}=60\); ii) the right panel shows that decreasing the number of mock observations \(n_{s}\), and thus the total number of training points, increases the epistemic uncertainty. The epistemic uncertainty converges to zero as the number of data points goes to infinity; iii) the left panel also shows that \(f_{\text{epist}}\) is smaller for set 2 (orange) than for set 3 (purple) because the input dimension increases from 10 to 20, which is reflected in the posterior \(P(\mathbf{\theta}|D)\). Lastly, let us argue why the epistemic uncertainty is larger at densities 0.2 - 0.4 fm \({}^{-3}\). When constructing the predicting ensemble, \(P(\mathbf{y}^{\star}|\mathbf{x}^{\star},D)=\frac{1}{N}\sum_{n=1}^{N}P(\mathbf{y}^{\star}| \mathbf{x}^{\star},\mathbf{\theta}^{(n)})\), by sampling from
Figure 11: Prediction uncertainty deviation \(\eta\left[a,b\right]\) between the \(y_{p}(n)\) BNN models \(a\) and \(b\) (see text for details).
Figure 10: Some statistics of \(y_{p}(n)\) calculated from the train dataset.
the variational posterior, \(\mathbf{\theta}^{(n)}\sim q_{\phi}(\mathbf{\theta})\), the density points \(n_{k}\) with larger correlation with \(y_{p}(n)\) are much more sensitive to model sampling than other density points where correlations are much weaker.
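In code, this decomposition can be approximated by Monte Carlo sampling: each forward pass of a `DenseVariational` model draws a fresh weight sample from the variational posterior, so repeated calls build the predictive ensemble. The sketch below reuses the assumption from the earlier snippet that the 30 outputs encode 15 means and 15 softplus-transformed standard deviations:

```python
import numpy as np

def decompose_uncertainty(model, x, n_samples=1000):
    """Monte Carlo estimate of the decomposition in Eq. 16."""
    mus, variances = [], []
    for _ in range(n_samples):
        out = model(x).numpy()                    # fresh weight sample per call
        mus.append(out[..., :15])
        variances.append(np.log1p(np.exp(out[..., 15:])) ** 2)  # softplus^2
    mus, variances = np.stack(mus), np.stack(variances)
    aleatoric = variances.mean(axis=0)            # mean of the ensemble variances
    epistemic = mus.var(axis=0)                   # spread of the ensemble means
    f_epist = 100.0 * epistemic / (aleatoric + epistemic)
    return mus.mean(axis=0), aleatoric + epistemic, f_epist
```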
## VI Prediction for the DD2 nuclear model
As a final test, we applied the BNN model (trained on set 1, see Tab. 3) to a nuclear model with different properties from the ones used for training, in particular, obtained within a different microscopic description of nuclear matter. We select the DD2 model, a generalized relativistic mean-field (RMF) model with density-dependent couplings [69], which has been calibrated to describe properties of finite nuclei. One key difference between DD2 and the RMF family we used to generate the set of EoS is the high-density behavior of the symmetry energy. In the DD2 model, the coupling to the \(\rho\)-meson that defines the isovector channel of the EoS goes to zero at sufficiently high densities, favoring very asymmetric matter. One of the main consequences is that nucleonic direct Urca processes inside NS are not predicted by DD2 [70; 71]. Another noticeable difference between the DD2 class of models and the class of models used to train the BNN is the behavior of the speed of sound with density: for DD2-like models the speed of sound increases monotonically, although it always remains well below \(c\), while for the class of models used to train the BNN the speed of sound flattens or even decreases above \(\sim 3\rho_{0}\). These two differences will be reflected in the performance of the BNN model.
After selecting the DD2 EoS, following the statistical procedure described in Sec. IV, we generated one mock observation (\(n_{s}=1\)) using the dataset 1 properties, which is the set with the lower \(\sigma_{R}\) and without information about \(\Lambda\). The BNN model predictions for the speed of sound (top panel) and proton fraction (lower panel) are shown in Fig. 13. Despite the DD2 speed of sound lying outside the training values (grey region), the model prediction uncertainty extends beyond the training maximum values and almost completely contains the DD2 results. The \(y_{p}(n)\) prediction is quite good, with the predicted mean value close to the true one. The DD2 proton fraction reflects the property described above concerning its preference for large neutron-proton asymmetries due to the behavior of the isovector channel; nevertheless, our BNN model was able to capture this behavior. Despite the results being quite compelling, there are some crucial points we would like to stress. During the above test stage, we generated just one mock observation (\(n_{s}=1\)) from the DD2 \(M(R)\) curve to simulate a real case scenario, where a very limited number of NS observations is accessible. Since generating one mock observation (\(n_{s}=1\)), i.e., five \(M_{i}(R_{i})\) values, is a random process, different samples will give rise to different predictions (\(v_{s}^{2}(n)\) is much more sensitive than \(y_{p}(n)\), since the DD2 target is completely inside the training values region). This is a somewhat expected behaviour, since we are trying to characterize the whole \(M(R)\) with only 5 random \((M,R)\) values. While a given sample may be sufficient to inform the general dependence of the \(M(R)\) (like the one we generated), others might well be almost uninformative of the actual \(M(R)\) curve, e.g., a sample where all five mock observations \(M_{i}(R_{i})\) cluster around the same \(M\) value. This is a general problem that shows up regardless of the inference model or framework: inferring the EoS from a very limited number of NS observations. The BNN performance assessment for the DD2 EoS would become more reliable as the number of points \(M_{i}(R_{i})\) composing each mock observation (5 in the present work) increases, since a random sample would then be much more informative of the true \(M(R)\) curve.
## VII Conclusions
We have explored Bayesian Neural Networks (BNNs), which is a probabilistic machine learning model, to predict the proton fraction and speed of sound of neutron star matter from a set of NS mock observations. This method is based upon the usual neural networks but with the crucial advantage of attributing an uncertainty measurement to its predictions. Our EoS dataset was generated from a relativistic mean field approach through a Bayesian framework, where constraints from nuclear matter properties and NS observations were applied. The choice of a specific microscopic nuclear model, instead of a more flexible EoS parameterization, as the ones discussed for instance in [72], is justified because we want to analyze the possibility of inferring the neutron star composition, specifically, the proton fraction, from NS observations. From the set of 25287 EoS, four different mock observational sets, simulating four different scenarios of mock observational uncertainties, were generated. Two of them are only composed of \(M(R)\) simulated observations and the other two have also information regarding \(\Lambda(M)\). In the end, 8 different BNNs were trained to predict the \(v_{s}^{2}(n)\) and \(y_{p}(n)\) in each of the 4 datasets.
With this study, we have shown that using BNNs, the measurements of the mass and radius of five neutron stars allow
Figure 12: The pdf of \(f_{\text{epist}}=(\hat{\mathbf{\sigma}}_{\text{epist}}^{2}/\hat{\mathbf{\sigma}}^{2})\times 100\%\) for \(y_{p}(n)\) BNN models in sets 2 and 3 (left) and for the set 1 model trained on datasets with different numbers of mock observations \(n_{s}\) (right).
us to recover information about the equation of state of nuclear matter with an associated uncertainty, not only for a quantity that is more connected with the isoscalar behavior of the EoS, the speed of sound, but also for the proton fraction, a property that is determined by the isovector behavior of the EoS. In several recent works, the attempt to determine the proton fraction from mass-radius measurements was unsuccessful [21; 22; 23]. In all these descriptions a polynomial expansion of the EoS up to third or fourth order has been considered. In [23], this was attributed to the existence of multiple solutions. The authors of [22] identify the correlations among higher-order parameters as a difficulty. The BNN approach allows the model to learn the full density dependence of the EoS, avoiding the shortcomings of a density expansion with a finite number of terms. It was shown that the uncertainty associated with the predicted quantities is particularly sensitive to the precision of the observational data if some kind of correlation exists between the data and the property being calculated. For the speed of sound this was reflected in a larger sensitivity for densities below three times saturation density, where the NS radius is strongly correlated with the speed of sound, as discussed in [65]. It was also shown that adding extra observational mock data, in particular the tidal deformability, can decrease the uncertainty associated with the prediction, but not always: there was a clear improvement for the speed of sound but not for the proton fraction. Overly scattered data does not improve the uncertainty determination, owing to the increase in model complexity relative to the quality of the data. It is important to point out that the improvement attained with the smaller-uncertainty tidal deformability data worked for the speed of sound because it exhibits a correlation with the tidal deformability at densities of the order of twice saturation density, similar to the one with the radius. This correlation does not exist between the proton fraction and the tidal deformability, so no improvement in the proton fraction prediction was attained when introducing the tidal deformability observation. The proton fraction has shown some sensitivity at twice saturation density to the radius uncertainty, and this can be traced back to the existing correlation of low-mass star radii with the symmetry energy slope [73], a quantity that strongly determines the proton fraction. This correlation weakens quickly with increasing NS mass, and is much weaker with the tidal deformability. We have also tested the BNN model with a mock measurement obtained from the DD2 EoS, generated within a microscopic framework different from the one used to produce the EoS used to train the BNN model. The results have confirmed the validity of the model and its predictive power.
We have been very conservative concerning the uncertainties attached to the observations. In the future, observatories such as STROBE-X [16] and eXTP [14] may give us radius measurements with uncertainties as small as 2%-5% and this will improve the predictions as demonstrated in the present study.
There are several potential paths for further improvement and exploration of this work. One possibility is to extend the analysis to other properties of neutron stars and investigate their relationship with observable quantities. Another possibility for improvement, as discussed in the results obtained for the DD2 model, is to increase the number of observable pairs used as input, which can enhance the model's performance. However, it is worth noting that in the case of Bayesian neural networks (BNNs), expanding the number of pairs introduces a greater increase in the model parameters compared to traditional architectures, which is why we used a smaller number of pairs than previous articles based on conventional neural networks, as demonstrated in studies like [25] and related articles. Furthermore, for the stochastic model, it would be interesting to improve the prior, as mentioned in Section II.1.
###### Acknowledgements.
This work was partially supported by national funds from FCT (Fundação para a Ciência e a Tecnologia, I.P, Portugal) under Projects No. UIDP/04564/2020, No. UIDB/04564/
Figure 13: The BNN model predictions, \(v_{s}^{2}\) (upper) and \(y_{p}\) (lower), for one mock observation (\(n_{s}=1\)) of the DD2 EoS, the blue area represents the 95% confidence interval, and the solid line the mean.
2020 and 2022.06460.PTDC.
## Appendix A Correlation between NS properties and EoS
Figure 14 shows the Pearson correlation coefficient between \(v_{s}^{2}(n)\) and \(R(M)\) (left panel) and \(\Lambda(M)\) (right panel) for specific NS masses (colors) and the average value, considering \(M/M_{\odot}\in[1,2.2]\). The same is performed for the proton fraction in Fig. 15. The Pearson correlation is calculated as \(\text{Corr}(a,b)=\text{Cov}(a,b)/(\sigma_{a}\sigma_{b})\), where \(a\) stands for \(v_{s}^{2}\) or \(y_{p}\) and \(b\) for \(R\) or \(\Lambda\). Note, however, that this correlation measure is only sensitive to linear dependencies, and higher-order ones can be missed. These correlations have been discussed in [65].
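For completeness, such correlation maps can be reproduced with a short script; `vs2` and `R` below are hypothetical arrays holding \(v_{s}^{2}(n_{k})\) and \(R(M_{j})\) over the EoS set:

```python
import numpy as np

def pearson_map(vs2, R):
    """vs2: (n_eos, 15) values of v_s^2(n_k); R: (n_eos, n_masses) radii R(M_j).
    Returns the (15, n_masses) matrix Corr(v_s^2(n_k), R(M_j))."""
    a = (vs2 - vs2.mean(axis=0)) / vs2.std(axis=0)
    b = (R - R.mean(axis=0)) / R.std(axis=0)
    return a.T @ b / len(vs2)
```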
|
2301.12904 | Long Short-Term Memory Neural Network for Temperature Prediction in Laser Powder Bed Additive Manufacturing | In context of laser powder bed fusion (L-PBF), it is known that the properties of the final fabricated product highly depend on the temperature distribution and its gradient over the manufacturing plate. In this paper, we propose a novel means to predict the temperature gradient distributions during the printing process by making use of neural networks. This is realized by employing heat maps produced by an optimized printing protocol simulation and used for training a specifically tailored recurrent neural network in terms of a long short-term memory architecture. The aim of this is to avoid extreme and inhomogeneous temperature distribution that may occur across the plate in the course of the printing process. In order to train the neural network, we adopt a well-engineered simulation and unsupervised learning framework. To maintain a minimized average thermal gradient across the plate, a cost function is introduced as the core criteria, which is inspired and optimized by considering the well-known traveling salesman problem (TSP). As time evolves the unsupervised printing process governed by TSP produces a history of temperature heat maps that maintain minimized average thermal gradient. All in one, we propose an intelligent printing tool that provides control over the substantial printing process components for L-PBF, i.e.\ optimal nozzle trajectory deployment as well as online temperature prediction for controlling printing quality. | Ashkan Mansouri Yarahmadi, Michael Breuß, Carsten Hartmann | 2023-01-30T14:06:14Z | http://arxiv.org/abs/2301.12904v1 |
Long Short-Term Memory Neural Network for Temperature Prediction in Laser Powder Bed Additive Manufacturing
###### Abstract
In context of laser powder bed fusion (L-PBF), it is known that the properties of the final fabricated product highly depend on the temperature distribution and its gradient over the manufacturing plate. In this paper, we propose a novel means to predict the temperature gradient distributions during the printing process by making use of neural networks. This is realized by employing heat maps produced by an optimized printing protocol simulation and used for training a specifically tailored recurrent neural network in terms of a long short-term memory architecture. The aim of this is to avoid extreme and inhomogeneous temperature distribution that may occur across the plate in the course of the printing process.
In order to train the neural network, we adopt a well-engineered simulation and unsupervised learning framework. To maintain a minimized average thermal gradient across the plate, a cost function is introduced as the core criterion, inspired by and optimized via the well-known traveling salesman problem (TSP). As time evolves, the unsupervised printing process governed by the TSP produces a history of temperature heat maps that maintain a minimized average thermal gradient.
All in one, we propose an intelligent printing tool that provides control over the substantial printing process components for L-PBF, i.e. optimal nozzle trajectory deployment as well as online temperature prediction for controlling printing quality.
Keywords: Additive manufacturing, laser beam trajectory optimization, powder bed fusion printing, heat simulation, linear-quadratic control
## 1 Introduction
In contrast to traditional machining, additive manufacturing (AM) builds objects layer by layer through a joining process of materials making the fabrication of individualized components possible across different engineering fields. The laser powder bed fusion (L-PBF) technique as an AM process, that we focus
on in this study, uses a deposited powder bed which is selectively fused by a computer-controlled laser beam [17]. The extreme heating by the laser on the one hand, and the influence of the degree of homogeneity of the heat distribution on the printing quality in L-PBF on the other, make it highly challenging to conduct the printing process in an intelligent way that may guarantee high quality printing results. As explained in more detail when discussing related work, there has thus been a continuous effort to _(i)_ propose beneficial printing paths that help to avoid unbalanced heating and _(ii)_ forecast the heat distribution in order to assess the potential printing quality and terminate printing in case of foreseeable flaws.
In this paper, we propose to couple a laser beam trajectory devised on the basis of a heuristic control during the fabrication phase of L-PBF with a prediction based on neural networks. The developed novel framework addresses both of the abovementioned main issues in L-PBF and represents an intelligent printing tool that provides control over the printing process. To this end, we aim at conducting controlled laser beam simulation that approximately achieves _temperature constancy_ on a simulated melted powder bed. In addition, we opt to perform temperature rate-of-change prediction as an important factor for the microscopic structure of the final fabricated product.
The main novelty of the current paper is to adopt a _long short-term memory_ (LSTM) [8] prediction framework, introduced in Section 4, to predict the temperature distribution and its gradient during printing. This can consequently be used to avoid any overheating by taking the necessary actions in advance, namely stopping the printing process to avoid printer damage due to overheated, deformed parts of the printed product. Based on this, we conjecture that our developed pipeline may provide a highly valuable step for practical printing that provides quality control of the printed product, while being efficient with regard to energy consumption and use of material. Finally, in Section 5, we present an effective numerical test concerning the predicted temperature gradients.
In Section 3 of this paper, a simulation framework is laid out by recalling the heat transfer model together with a cost function that consists of two terms aiming to maintain an almost constant temperature with a low spatial gradient across the powder bed area. For simplicity, we confine ourselves to a 2-dimensional domain, which is still a realistic description of printing over the manufacturing plate. In Subsection 3.2, the idea of the travelling salesman problem (TSP) as a heuristic for the laser beam steering is explained; being one of the most fundamental and well-studied NP-hard problems in the field of combinatorial optimization (e.g. [4, 7]), we will use a stochastic optimization strategy (simulated annealing) to establish an optimal laser trajectory.
## 2 Related work in Laser Powder Bed Additive Manufacturing
In general, a variety of different laser beam parameters such as laser power, scan speed, building direction and laser thickness influence the final properties
of the fabricated product. Due to the intense power of the laser during additive manufacturing, the printed product can have defects, such as deviations from the target geometry or cracks caused by large temperature gradients. For example, inhomogeneous heating may lead to unmelted powder particles that can locally induce pores and microscopic cracks [6]. At the same time, the cooling process determines the microstructure of the printed workpiece and thus its material properties, such as strength or toughness, which depend on the proportion of carbon embedded in the crystal structure of the material [1].
In a broader view, machine learning approaches may be deployed to provide monitoring capabilities over the varying factors of L-PBF, namely the used metal powder and its properties, both at the initial time of spreading and during the printing process, as well as the laser beam parameters, aiming to investigate and avoid any defect generation during fabrication. See [2] for a survey.
Concerning the powder properties, different capturing technologies along with machine learning tools are used to automate the task of defect detection and avoidance during the printing process. In [19, 20], k-means clustering [14] and a convolutional neural network (CNN) [12], respectively, were used to detect and classify defects at the time of initial powder spread and their probable consequences during the entire printing phase, based on captured grey images. In [11], high-resolution temporal and evolving patterns are captured using a commercial EOS M270 system to find layer-wise heat inhomogeneities induced by the laser. In [10], an inline coherent imaging (ICI) system was used to monitor the defects and unstable process regimes concerning the morphology changes and also the stability of the melt pools. Here, the back-scattered intensities from the melt pool samples are measured as a function of their heights, called A-lines. Later, a Gaussian fitting of individual A-lines is performed to determine the centroid height and amplitude of melt pools as a function of time, corresponding to a range of stainless steel powders with different properties.
Regarding the laser beam and its parameter optimization task, one can avoid conducting expensive real experiments, in terms of material and power usage, by simulating the printing process by means of the finite element method (FEM) [5], the Lattice Boltzmann method (LBM) or the finite volume method (FVM). See [3, 18] for extensive surveys. Later, the gathered simulated data may be used in a data-driven machine learning approach within an L-PBF framework. In this context, a prediction task of thermal history was performed in [13] by adopting a recurrent neural network (RNN) structure with a Gated Recurrent Unit (GRU) in an L-PBF process. A range of different geometries is simulated by FEM while accounting for different laser movement strategies, laser power and scan speed. A three-dimensional FEM is adopted in [24] to simulate the laser beam trajectory and investigate its effects on the residual stresses of the parts. The simulation results, validated through experimental tests, show modifications of the residual stress distributions and their magnitudes as a result of varying the laser beam trajectory type. A parametric study [25] used the same FEM simulation setup as [24] with three varying factors, namely the laser beam speed, the layer thickness and the laser deposition path width. While each factor varies over its range
from low to medium to high, the hidden relations among the factors and their effects on residual stresses and part distortions are revealed.
In the context of FEM simulation with a steered heat source representing the laser movement, one can refer to the work developed in [21]. Here, the residual stresses during printing are predicted, though the laser nozzle steering rule is not revealed.
## 3 Heat transfer model and TSP formulation
As indicated, we first describe our heat simulation setting which is the framework for the TSP optimization protocol described in the second part of this section.
### Heat simulation framework
We set up a simulation environment, namely _(i)_ a moving source of heat (cf. (3)) acting as a laser beam on _(ii)_ an area \(\Omega\subset\mathbb{R}^{2}\), simulated as a deposition of aluminium metal powder, called a plate. We assume that the plate is mounted to a base plate with large thermal conductivity, which makes the choice of Dirichlet boundary conditions with constant boundary temperature appropriate; if the surrounding is an insulator, then a reflecting, i.e. zero-flux or von Neumann, boundary condition is more suitable. A sequence of laser beam movements, called a trajectory, is followed so that at each point the heat equation (1) is solved by FEM, providing us a temperature map that varies over the plate locations as time evolves.
Letting \(u\) be the temperature across an open subset \(\Omega\subset\mathbb{R}^{2}\) as time \(t\) evolves in \([0,T]\), the heat equation that governs the time evolution of \(u\) reads
\[\frac{\partial}{\partial t}u(x,y,t) =\alpha\nabla^{2}u(x,y,t)+\beta I(x,y)\,, (x,y,t)\in\Omega^{\circ}\times(0,T) \tag{1a}\] \[u(x,y,t) =\theta_{0} (x,y,t)\in\partial\Omega\times[0,T]\] (1b) \[u(x,y,0) =u_{0}(x,y) (x,y)\in\Omega \tag{1c}\]
where we denote by
\[\nabla^{2}\phi=\frac{\partial^{2}\phi}{\partial x^{2}}+\frac{\partial^{2}\phi }{\partial y^{2}} \tag{2}\]
the Laplacian of some function \(\phi\in C^{2}\), by \(\Omega^{\circ}\) we denote the interior of the domain \(\Omega\), and by \(\partial\Omega\) its piecewise smooth boundary; here \(u_{0}\) is some initial heat distribution, \(\theta_{0}\) is the constant ambient space temperature (\(20^{\circ}C\)), and we use the shorthands
\[\alpha\coloneqq\frac{\kappa}{c\rho}\quad\text{ and }\quad\beta\coloneqq\frac{1}{c\rho}\]
with \(\kappa\), \(c\) and \(\rho\) all in \(\mathbb{R}^{+}\), denoting thermal conductivity, specific heat capacity and mass density. Our power density distribution of choice to simulate a laser
beam is a Gaussian function:
\[I\left(x,y\right)=I_{0}\cdot\exp\left(-2\left[\left(\frac{x-x_{c}}{\omega}\right)^{2}+\left(\frac{y-y_{c}}{\omega}\right)^{2}\right]\right) \tag{3}\]
with an intensity constant
\[I_{0}=\frac{2P}{\pi\omega^{2}} \tag{4}\]
where \(\omega\) and \(P\) are the radius of the Gaussian beam waist and the laser power, respectively. In our study we let \(x\in[-1,+1]\), \(y\in[-1,+1]\) with \((x,y)\in\Omega^{\circ}\), \(t\in\mathbb{R}^{+}\) and also \(u\left(x,y,0\right)=0\). The aluminium thermal properties are used to simulate the metal powder spread across the manufacturing plate.
We solved (1) using [15] by setting \(P=4200\left(\mathrm{W}\right)\) and \(\omega=35\) pixels, while letting \((x_{c},y_{c})\) take all possible trajectory points such that the domain \(\Omega^{\circ}\) is always affected by five consecutive heat source moves. In this way, we simulated the heat source movement across the board.
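As an illustration of Eqs. (1)-(4), the following sketch integrates the heated plate with a simple explicit finite-difference scheme instead of the FEM solver [15] used in the paper; the grid resolution, time step, beam waist and material constants are illustrative placeholders:

```python
import numpy as np

# Illustrative constants: aluminium kappa [W/(m K)], c [J/(kg K)], rho [kg/m^3].
kappa, c, rho = 237.0, 900.0, 2700.0
alpha, beta = kappa / (c * rho), 1.0 / (c * rho)
P, omega = 4200.0, 0.1            # laser power (W) and an illustrative beam waist
theta0 = 20.0                     # ambient boundary temperature (Celsius)
N = 101                           # grid points per axis on [-1, 1]
h = 2.0 / (N - 1)
dt = 0.2 * h**2 / alpha           # explicit-scheme stability bound
xs = np.linspace(-1.0, 1.0, N)
X, Y = np.meshgrid(xs, xs, indexing="ij")

def intensity(xc, yc):
    I0 = 2.0 * P / (np.pi * omega**2)                       # Eq. (4)
    return I0 * np.exp(-2.0 * (((X - xc) / omega)**2 +      # Eq. (3)
                               ((Y - yc) / omega)**2))

def step(u, xc, yc):
    lap = np.zeros_like(u)
    lap[1:-1, 1:-1] = (u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:]
                       + u[1:-1, :-2] - 4.0 * u[1:-1, 1:-1]) / h**2
    u = u + dt * (alpha * lap + beta * intensity(xc, yc))   # Eq. (1a)
    u[0, :] = u[-1, :] = u[:, 0] = u[:, -1] = theta0        # Dirichlet, Eq. (1b)
    return u

u = np.full((N, N), theta0)       # initial field set to the ambient value
for _ in range(200):              # beam parked at the centre for illustration
    u = step(u, 0.0, 0.0)
```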
#### 3.1.1 Control objective.
By adopting the TSP-based protocol, we aim to minimize the value of a desired objective function:
\[J(m)=\frac{1}{2}\int_{0}^{T}\left(\int_{\Omega}|\nabla u_{m}(z,t)|^{2}+\left(u _{m}(z,t)-u_{g}\right)^{2}\mathrm{d}z\right)\mathrm{d}t \tag{5}\]
with
\[u_{m}=u_{m}(z,t)\,,\quad z=(x,y)\in\Omega^{\circ},\,t\in[0,T] \tag{6}\]
being the solution of the heat equation (1) on the interval \([0,T]\) under the control
\[m\colon[0,T]\to\Omega^{\circ}\,,\quad t\mapsto(x_{c}(t),y_{c}(t)) \tag{7}\]
that describes the trajectory of the center of the laser beam. Moreover, we have introduced \(u_{g}\) as the desired target temperature to be maintained over the domain \(\Omega^{\circ}\) as time \(t\) evolves.
The motivation behind (5) is to maintain a smooth temperature gradient over the entire plate for all \(t\in[0,T]\) as is achieved by minimizing the \(L^{2}\)-norm of the gradient, \(\nabla u\), while at the same time keeping the (average) temperature near a desired temperature \(u_{g}\) for any time \(t\).
We proceed by dividing the entire plate \(\Omega^{\circ}\) into \(4\times 4\) sub-domains (see Fig. 1) and investigating our objective function (5) within each sub-domain, as explained in Section 5.
### TSP-based Formulation
A common assumption among numerous variants of the TSP [4], as an NP-hard problem [7], is that a set of cities have to be visited on a shortest possible _tour_. Let \(\mathcal{C}_{n\times n}\) be a symmetric matrix specifying the distances corresponding to the paths connecting a vertex set \(\mathcal{V}=\{1,2,3,\cdots,n\}\) of cities to each other with
\(n\in\mathbb{N}\) being the number of cities. A tour over the complete undirected graph \(\mathcal{G}\left(\mathcal{V},\mathcal{C}\right)\) is defined as a cycle passing through each vertex exactly once. The traveling salesman problem seeks a tour of minimum distance.
To adopt the TSP in our context, we formulate its input \(\mathcal{V}\) as the set of all \(16\times 16\) stopping points of the heat source over the board \(\Omega^{\circ}\), and the set \(\mathcal{C}\) as a penalty matrix, with each element \(\mathcal{C}_{ij}\geq 0\) being the impact (i.e. cost) of moving the heat source from \(i\in\mathcal{V}\) to \(j\in\mathcal{V}\). For every vertex \(i\in\mathcal{V}\), the possible movements to all \(j\neq i\) with the associated cost (5) are computed and assigned to \(\mathcal{C}_{ij}\) (see below for details). With this formulation, we remind the reader that the elements of \(\mathcal{C}\) are nonnegative and obey the triangle inequality:
\[\mathcal{C}_{ij}\leq\mathcal{C}_{ik}+\mathcal{C}_{kj} \tag{8}\]
with \(i,j,k\in\mathcal{V}\).
Note that the matrix \(\mathcal{C}\) is obtained from a prior set of temperature maps produced using FEM, without enforcing any particular protocol on them.
With this general formulation at hand, let us have a closer look at the discretized form of (5) that was used in the current study to compute the elements of the penalty matrix:
\[\mathcal{C}_{ij}=\left|\sum_{l=1}^{4\times 4}\left(\left\|\boldsymbol{\Psi} \left(i,l\right)\right\|^{2}+\left(\boldsymbol{\Lambda}\left(i,l\right)-u_{g} \right)^{2}\right)-\sum_{l=1}^{4\times 4}\left(\left\|\boldsymbol{\Psi}\left(j,l \right)\right\|^{2}+\left(\boldsymbol{\Lambda}\left(j,l\right)-u_{g}\right)^{2 }\right)\right| \tag{9}\]
with \(l\) being the sub-domain index. In addition, \(\boldsymbol{\Psi}\left(\cdot,l\right)=\sum_{z\in\Omega_{l}}\sum_{t\in t_{l}} \nabla u_{m}\left(z,t\right)\) represents the temperature gradient aggregation within each sub-domain, and
Figure 1: We divide the entire domain \(\Omega^{\circ}\) containing the diffused temperature values into \(4\times 4\) sub-domains separated by white lines. Within each sub-domain, (5) is computed to reveal how the temperature gradient \(\left|\nabla u_{m}(\cdot,t)\right|\) evolves as a function of time \(t\) and how the average temperature \(\bar{u}\) is maintained near a target value of \(u_{g}\). Note that the laser beam positions are irrelevant in this image.
\(\boldsymbol{\Lambda}\left(\cdot,l\right)=\frac{1}{|\Omega_{l}|}\sum_{z\in\Omega_{l}} \sum_{t\in t_{l}}u_{m}\left(z,t\right)\) is the average temperature value of each sub-domain, with \(t_{l}\) being the time period during which the nozzle operates on \(\Omega_{l}\). Here, by \(|\Omega_{l}|\) we mean the number of discrete points in \(\Omega_{l}\subset\Omega^{\circ}\). In other words, (9) is the TSP cost of moving the nozzle from the \(i^{\text{th}}\) to the \(j^{\text{th}}\) stopping point, which depends on (a) the mean square deviation of the temperature field from constancy and (b) the mean square deviation from the global target temperature \(u_{g}\). In our simulation, the nozzle moves along the shortest (Euclidean) path connecting two successive stopping points. Thereby we assume the nozzle always adjusts its velocity so that the path between any arbitrarily chosen stopping points \(i\) and \(j\) takes the same amount of time. The motivation behind this is to avoid heating up the entire domain \(\Omega^{\circ}\), as would result from keeping the nozzle velocity constant.
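In code, once the sub-domain aggregates have been accumulated from the FEM heat maps, Eq. (9) reduces to simple bookkeeping; the sketch below assumes `Psi` holds the norms of the aggregated gradients \(\boldsymbol{\Psi}(i,l)\) and `Lam` the averages \(\boldsymbol{\Lambda}(i,l)\) for all stopping points (hypothetical names):

```python
import numpy as np

def penalty_matrix(Psi, Lam, u_g):
    """Eq. (9). Psi: (256, 16) aggregated gradient norms per sub-domain;
    Lam: (256, 16) average sub-domain temperatures; u_g: target temperature."""
    cost = (Psi**2).sum(axis=1) + ((Lam - u_g)**2).sum(axis=1)  # one scalar per stop
    return np.abs(cost[:, None] - cost[None, :])                # (256, 256) matrix C
```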
In practice no polynomial-time algorithm is known for solving the TSP [7], so we adopt a _simulated annealing algorithm_ [23], first proposed in statistical physics as a means of determining the properties of metallic alloys at given temperatures [16]. In the TSP context, we adopt [23] to look for a good (but, in general, sub-optimal) tour corresponding to the movement of the heat source leading to the minimization of (9).
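A generic simulated-annealing tour search over the penalty matrix might look as follows; this is a standard 2-opt scheme with geometric cooling, sketched under our own parameter choices rather than the exact variant of [23]:

```python
import numpy as np

def tour_cost(tour, C):
    return sum(C[tour[k], tour[k + 1]] for k in range(len(tour) - 1))

def simulated_annealing(C, T0=1.0, cooling=0.9999, n_iter=50_000, seed=0):
    rng = np.random.default_rng(seed)
    n = len(C)
    tour = list(rng.permutation(n))
    cost, T = tour_cost(tour, C), T0
    for _ in range(n_iter):
        i, j = sorted(rng.integers(0, n, size=2))
        if i == j:
            continue
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]   # 2-opt reversal
        c_cand = tour_cost(cand, C)
        # Accept improvements always, uphill moves with Boltzmann probability.
        if c_cand < cost or rng.random() < np.exp(-(c_cand - cost) / T):
            tour, cost = cand, c_cand
        T *= cooling                                           # geometric cooling
    return tour, cost
```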
In Section 5, we present our prediction results obtained by adopting the TSP-based heuristic along with the LSTM network. Before moving to the next section, let us observe a subset of temperature maps obtained based on the TSP, shown in Fig. 2.
Figure 2: A subset of heat maps produced by FEM as the solution to the heat equation (1). One clearly observes the effect of the previous laser positions on the current status of the map, in terms of diffused temperature. The TSP heuristic steers the heat source across the plate, aiming to keep the temperature constant. Note that all temperatures are in Celsius.
## 4 The LSTM Approach
Let us start the discussion of our deep learning framework by investigating its LSTM [8] cell building blocks, shown in Fig. 3(a), which are used to form a stack of three LSTM layers (see Fig. 3(b)) followed by a fully connected layer.
Here, we use the temperature gradient values of \(\mu=14\) previous (i.e. from previous time) heat maps to predict the gradient values of the current heat map. Letting \(\zeta\) be the current heat map, its history feature values formally lie in the range of \([\zeta-\mu,\zeta-1]\) heat maps with \(\zeta>\mu\). Considering that each heat map has 16 sub-domains and the same number of gradient features \(\Psi\left(\cdot,l\right)\), each corresponding to one sub-domain, we obtain in total \(\nu=\mu\times 16\) gradient feature history values that we vectorize to form the vector \(\mathcal{X}\in\mathbb{R}^{\nu}\). Our aim is to use sub-sequences from \(\mathcal{X}\) to train the stack of LSTMs and forecast the sequence of 16 gradient feature values corresponding to the sub-domains of the heat map of interest \(\zeta\).
Let us briefly discuss the weight and bias matrix dimensions of each LSTM cell. Here, we use \(q\in\mathbb{N}\) as the number of hidden units of each LSTM cell and \(n\in\mathbb{N}\) to represent the number of features that we obtain from the FEM-based heat maps and feed to the LSTM cell. More specifically, we have only one feature \(\Psi\left(\cdot,l\right)\) per sub-domain, i.e. \(n=1\). In practice, during the training process and at a particular time \(t^{\prime}\), a batch of input feature values \(\mathcal{X}\supset\mathcal{X}^{\left\langle t^{\prime}\right\rangle}\in \mathbb{R}^{b\times n}\), with \(b\in\mathbb{N}\) the batch size, is fed to the LSTM cells of the lowest stack level in Fig. 3(b). Here the LSTM learns to map each feature value in \(\mathcal{X}^{\left\langle t^{\prime}\right\rangle}\) to its next adjacent value in \(\mathcal{X}\) as its label. The labels are applied during training to the only neuron \(\mathcal{R}_{\eta}\in\mathbb{R}\) of the last fully connected layer, with \(\eta=1\).
In addition to \(\mathcal{X}^{\left\langle t^{\prime}\right\rangle}\), each LSTM cell accepts two other inputs, namely \(h^{\left\langle t^{\prime}-1\right\rangle}\in\mathbb{R}^{b\times q}\) and \(c^{\left\langle t^{\prime}-1\right\rangle}\in\mathbb{R}^{b\times q}\), the so-called _hidden state_ and _cell state_, both of which are already computed at time \(t^{\prime}-1\). Here, the cell state \(c^{\left\langle t^{\prime}-1\right\rangle}\) carries information from the intervals prior to \(t^{\prime}\).
A few remarks on how the cell state \(c^{\left\langle t^{\prime}\right\rangle}\) at time \(t^{\prime}\) is computed by formula (10) below are in order: a closer look at (10) reveals that it partially depends on the previous cell state \(c^{\left\langle t^{\prime}-1\right\rangle}\) through the term \(\Gamma_{f}\odot c^{\left\langle t^{\prime}-1\right\rangle}\), with the forget gate \(\Gamma_{f}\) defined in (13) below. The cell state satisfies
\[c^{\left\langle t^{\prime}\right\rangle}=\Gamma_{u}\odot\tilde{c}^{\left\langle t ^{\prime}\right\rangle}+\Gamma_{f}\odot c^{\left\langle t^{\prime}-1\right\rangle} \tag{10}\]
with \(\odot\) representing element-wise vector multiplication. As (10) further shows, \(c^{\left\langle t^{\prime}\right\rangle}\) also depends on \(\tilde{c}^{\left\langle t^{\prime}\right\rangle}\), which itself is computed from the feature vector \(\mathcal{X}^{\left\langle t^{\prime}\right\rangle}\) and the previous hidden state \(h^{\left\langle t^{\prime}-1\right\rangle}\) as:
\[\tilde{c}^{\left\langle t^{\prime}\right\rangle}=\tanh\left(\left[\left(h^{ \left\langle t^{\prime}-1\right\rangle}\right)_{b\times q}\right]\left( \mathcal{X}^{\left\langle t^{\prime}\right\rangle}\right)_{b\times n}\right] \times\mathcal{W}_{c}+\left(b_{c}\right)_{b\times q}\right) \tag{11}\]
with \(\times\) denoting standard matrix multiplication in this work, and \(b_{c}\) and \(\mathcal{W}_{c}\) the corresponding bias and weight matrices, respectively.
Equation (10) contains two further terms, \(\Gamma_{u}\) and \(\Gamma_{f}\), called the _update gate_ and _forget gate_ defined as
\[\Gamma_{u}=\sigma\Bigg{(}\bigg{[}\Big{(}h^{\langle t^{\prime}-1\rangle}\Big{)}_{ b\times q}\bigg{]}\left(\mathcal{X}^{\langle t^{\prime}\rangle}\right)_{b \times n}\bigg{]}\times\mathcal{W}_{u}+\left(b_{u}\right)_{b\times q}\Bigg{)} \tag{12}\]
and
\[\Gamma_{f}=\sigma\Bigg{(}\bigg{[}\Big{(}h^{\langle t^{\prime}-1\rangle}\Big{)} _{b\times q}\bigg{]}\left(\mathcal{X}^{\langle t^{\prime}\rangle}\right)_{b \times n}\bigg{]}\times\mathcal{W}_{f}+\left(b_{f}\right)_{b\times q}\Bigg{)} \tag{13}\]
that are again based on the feature vector \(\mathcal{X}^{\langle t^{\prime}\rangle}\) and the previous hidden state \(h^{\langle t^{\prime}-1\rangle}\), with \(b_{u}\), \(b_{f}\), \(\mathcal{W}_{u}\) and \(\mathcal{W}_{f}\) being the corresponding biases and weight matrices. Let us briefly conclude here that the feature vector \(\mathcal{X}^{\langle t^{\prime}\rangle}\) and the previous hidden state \(h^{\langle t^{\prime}-1\rangle}\) are the essential ingredients used to compute \(\tilde{c}^{\langle t^{\prime}\rangle}\), \(\Gamma_{u}\) and \(\Gamma_{f}\), all of which are used to update the current cell state \(c^{\langle t^{\prime}\rangle}\) in (10).
The motivation for using the _sigmoid function_ \(\sigma\) in the structure of the gates in (12) and (13) is its activation range \([0,1]\), which in extreme cases causes them to
Figure 3: (a) A graphical representation of the LSTM cell accepting the hidden state \(h^{\langle t^{\prime}-1\rangle}\) and the cell state \(c^{\langle t^{\prime}-1\rangle}\) from the previous LSTM and the feature vector \(\mathcal{X}^{\langle t^{\prime}\rangle}\) at the current time. (b) A schematic representation of the adopted stack of LSTMs, comprised of three recurrent layers processing the data. The upper LSTM layer is followed by a fully connected layer. The network performs a regression task, being trained with the half-mean-square-error loss function (16). Note that the fully connected layer is established between the output of the LSTM stack, \(h^{\langle t^{\prime}\rangle}_{3}\), and the only neuron of the last layer, \(\mathcal{R}^{\langle t^{\prime}\rangle}_{\eta}\), with \(\eta=1\).
be fully on or off, letting everything or nothing pass through them. In non-extreme cases they partially contribute the previous cell state \(c^{\langle t^{\prime}-1\rangle}\) and the on-the-fly computed value \(\tilde{c}^{\langle t^{\prime}\rangle}\) to the current cell state \(c^{\langle t^{\prime}\rangle}\), as shown in (10).
To give a bigger picture, let us visualize the role of the \(\Gamma_{u}\) and \(\Gamma_{f}\) gates concerning the cell state \(c^{\langle t^{\prime}\rangle}\). In Fig. 3a, a direct line connecting \(c^{\langle t^{\prime}-1\rangle}\) to \(c^{\langle t^{\prime}\rangle}\) carries the old data directly from time \(t^{\prime}-1\to t^{\prime}\). Here, one clearly observes that the \(\Gamma_{u}\) and \(\Gamma_{f}\) gates are both connected by the \(+\) and \(\times\) operators to this line. As shown in (12) and (13), they contribute the current feature value \(\mathcal{X}^{\langle t^{\prime}\rangle}\) and the adjacent hidden state \(h^{\langle t^{\prime}-1\rangle}\) to the update of the current cell state \(c^{\langle t^{\prime}\rangle}\). Meanwhile, \(\Gamma_{u}\) feeds its contribution to the passing line through the \(\times\) operator applied to \(\tilde{c}^{\langle t^{\prime}\rangle}\).
Finally, to compute the activation of the current LSTM cell we need the cell state value at time \(t^{\prime}\), namely \(c^{\langle t^{\prime}\rangle}\), which we obtain from (10), and also the so-called _output gate_ obtained from
\[\Gamma_{o}=\sigma\Bigg{(}\bigg{[}\Big{(}h^{\langle t^{\prime}-1\rangle}\Big{)} _{b\times q}\bigg{|}\Big{(}\mathcal{X}^{\langle t^{\prime}\rangle}\Big{)}_{b \times n}\bigg{]}\times\mathcal{W}_{o}+\left(b_{o}\right)_{b\times q}\Bigg{)} \tag{14}\]
with \(\Gamma_{o}\in[0,1]\), and \(b_{o}\) and \(\mathcal{W}_{o}\) being the corresponding bias and weight matrices. The final activated value of the LSTM cell is computed by
\[h^{\langle t^{\prime}\rangle}=\Gamma_{o}\odot\tanh\Big{(}c^{\langle t^{\prime }\rangle}\Big{)}. \tag{15}\]
Here, the obtained activated value \(h^{\langle t^{\prime}\rangle}\) from (15) will be used as the input hidden state to the next LSTM cell at the time \(t^{\prime}+1\).
Let us also mention that all the biases \(b_{c},b_{u},b_{f},b_{o}\in\mathbb{R}^{b\times q}\) and the weight matrices are further defined as
\[\mathcal{W}_{c}\coloneqq\Big{[}\left(\mathcal{W}_{ch}\right)_{q \times q}\!\!\big{|}\!\left(\mathcal{W}_{cx}\right)_{n\times q}\Big{]}^{\top}, \mathcal{W}_{u}\coloneqq\Big{[}\left(\mathcal{W}_{uh}\right)_{q \times q}\!\!\big{|}\!\left(\mathcal{W}_{ux}\right)_{n\times q}\Big{]}^{\top}\] \[\mathcal{W}_{f}\coloneqq\Big{[}\left(\mathcal{W}_{fh}\right)_{q \times q}\!\!\big{|}\!\left(\mathcal{W}_{fx}\right)_{n\times q}\Big{]}^{\top}, \mathcal{W}_{o}\coloneqq\Big{[}\left(\mathcal{W}_{oh}\right)_{q \times q}\!\!\big{|}\!\left(\mathcal{W}_{ox}\right)_{n\times q}\Big{]}^{\top}\]
leading to both \(\tilde{c}^{\langle t^{\prime}\rangle},c^{\langle t^{\prime}\rangle}\in \mathbb{R}^{b\times q}\).
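Eqs. (10)-(15) translate directly into a single forward step; the following plain NumPy sketch (not the library implementation used for training) stacks the weight blocks exactly as in the definitions above:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM forward step, Eqs. (10)-(15).

    x_t: (b, n) features; h_prev, c_prev: (b, q) previous hidden/cell states.
    W: dict of weight matrices of shape (q + n, q); b: dict of (1, q) biases.
    """
    z = np.concatenate([h_prev, x_t], axis=1)      # [h^{<t'-1>} | X^{<t'>}]
    gamma_u = sigmoid(z @ W["u"] + b["u"])         # update gate, Eq. (12)
    gamma_f = sigmoid(z @ W["f"] + b["f"])         # forget gate, Eq. (13)
    gamma_o = sigmoid(z @ W["o"] + b["o"])         # output gate, Eq. (14)
    c_tilde = np.tanh(z @ W["c"] + b["c"])         # candidate state, Eq. (11)
    c_t = gamma_u * c_tilde + gamma_f * c_prev     # cell state, Eq. (10)
    h_t = gamma_o * np.tanh(c_t)                   # activation, Eq. (15)
    return h_t, c_t
```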
Finally, we have a fully connected layer that maps the output \(h^{\langle t^{\prime}\rangle}\in\mathbb{R}^{b\times q}\) of the stacked LSTM to the only neuron of the output layer \(\mathcal{R}_{\eta}\). This is achieved during the training process, while the weight matrix \(\hat{\mathcal{W}}\in\mathbb{R}^{q\times\eta}\) and bias vector \(\hat{b}\in\mathbb{R}^{b\times\eta}\) of the fully connected layer are updated based on a loss \(\mathcal{L}\), computed with the half-mean-square error (16) between the network predictions and the target temperature gradient values obtained from the heat maps produced by FEM.
\[\mathcal{L}=\frac{1}{2\eta b}\sum_{i_{1}=1}^{b}\sum_{i_{2}=1}^{\eta}\left(p_{i _{1}i_{2}}-y_{i_{1}i_{2}}\right)^{2} \tag{16}\]
Here, \(p\) and \(y\) values represent the predicted and the target gradient temperature values, respectively.
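For completeness, a minimal sketch of the half-mean-square-error of (16), assuming `p` and `y` are NumPy arrays of shape \((b,\eta)\):

```python
import numpy as np

def half_mse(p, y):
    """Half-mean-square-error of Eq. (16): predictions p and targets y,
    both of shape (b, eta) for batch size b and output dimension eta."""
    b, eta = p.shape
    return np.sum((p - y) ** 2) / (2.0 * eta * b)
```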
## 5 Results
To begin with, we consider a set of root-mean-square-error (RMSE) measures computed between the predicted and the target gradient values corresponding to the nozzle moves, as shown in Fig. 4. More precisely, each curve value represents the RMSE computed between all \(4\times 4\) sub-domain gradient feature values of the predicted heat map \(\zeta\) and their ground-truth counterparts. Since we use a history of \(\mu=14\) previous gradient heat maps, the first prediction can be performed for the \(15^{\text{th}}\) nozzle move. Among all the measured RMSE values, we highlight four, marked in Fig. 4, that correspond to the \(25^{\text{th}}\), \(50^{\text{th}}\), \(75^{\text{th}}\) and \(100^{\text{th}}\) percentiles.
As one observes in Fig. 4, a relatively low RMSE measure is obtained across almost all nozzle moves on the horizontal axis, although some outliers exist. We further visualize the prediction results corresponding to these percentiles in Fig. 5. Specifically, consider the \(25^{\text{th}}\) RMSE percentile computed between the black curve and the overlapping part of the pink curve shown in Fig. 5(a). The black curve in Fig. 5(a) comprises the \(4\times 4\) forecasted vectorized gradient feature values of the heat map sub-domains produced by the \(54^{\text{th}}\) nozzle move, with an RMSE of \(0.009\) relative to the overlapping pink curve. In this case, we let the nozzle move number \(i\) range over \([\zeta-\mu,\zeta-1]\) to produce a history of gradient feature values corresponding to the heat map with \(\zeta=54\). Consequently, the non-overlapping part of the pink curve in Fig. 5(a) represents the vectorized
Figure 4: Each curve value represents the RMSE computed between all \(4\times 4\) sub-domain gradient feature values of the predicted heat map \(\zeta\) and their ground-truth counterparts. The RMSE computation starts from the \(15^{\text{th}}\) nozzle move onward, since we use a history of \(\mu=14\) previous gradient maps. The RMSE measures highlighted as \(\times\), in ascending order, correspond to the \(25^{\text{th}}\), \(50^{\text{th}}\), \(75^{\text{th}}\) and \(100^{\text{th}}\) percentiles, respectively.
history feature values of the \(40^{\text{th}}\) to \(53^{\text{rd}}\) heat maps, which comprise the \(\mu=14\) heat maps preceding \(\zeta=54\), each with \(4\times 4\) sub-domains. The black curves in Figs. 5b, 5c and 5d comprise the predicted gradient feature values of the heat maps \(\zeta\) equal to \(129\), \(220\) and \(244\), respectively, each forecasted based on its \(\mu\) previous heat maps.
A closer look at the four prediction samples shown in Fig. 5 reveals that even the \(100^{\text{th}}\) percentile, which marks an outlier, is accurately predicted, in the sense that the shape of the black curve tracks the pink curve (ground truth). For the other three RMSE percentile values, the synchronicity between the black and pink curves is preserved equally well, although in some parts the overlap is not complete.
Finally, regarding the parameters used during the training phase, the Adam optimizer [9] is applied to batches of size \(6\). The number of epochs is set to \(350\), which results in a meaningful reduction of the RMSE and loss measures within each batch. The initial learning rate is \(0.008\), with a drop factor of \(0.99\) applied every \(12\) epochs. To avoid overfitting, the flow of data within the network is randomly adjusted by setting the LSTM outputs to zero with a probability of \(0.25\) (dropout) [22].
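A minimal PyTorch-style sketch of this training configuration is given below; the network dimensions, the dummy data, and the module name are placeholders of our own, not the authors' code.

```python
import torch
import torch.nn as nn

n, q, eta = 16, 64, 16      # assumed feature, hidden, and output sizes

class GradientLSTM(nn.Module):
    """Stacked LSTM with dropout p=0.25 on intermediate outputs,
    followed by a fully connected head, mirroring the described network."""
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n, hidden_size=q, num_layers=2,
                            dropout=0.25, batch_first=True)
        self.fc = nn.Linear(q, eta)

    def forward(self, x):               # x: (batch, mu, n), mu = history length
        out, _ = self.lstm(x)
        return self.fc(out[:, -1, :])   # predict from the last time step

model = GradientLSTM()
optimizer = torch.optim.Adam(model.parameters(), lr=0.008)
# drop the learning rate by a factor of 0.99 every 12 epochs
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=12, gamma=0.99)

# dummy data: 60 samples, mu = 14 history steps per sample
data, targets = torch.randn(60, 14, n), torch.randn(60, eta)
loader = torch.utils.data.DataLoader(
    torch.utils.data.TensorDataset(data, targets), batch_size=6)

for epoch in range(350):
    for xb, yb in loader:
        optimizer.zero_grad()
        loss = 0.5 * torch.mean((model(xb) - yb) ** 2)   # loss of Eq. (16)
        loss.backward()
        optimizer.step()
    scheduler.step()
```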
## 6 Conclusion
We developed a novel and practical pipeline and mathematically justified its components. Our proposed model consists of two major parts: the FEM-based simulation of a laser powder bed fusion setup, and an intelligent agent based on an LSTM network that actively judges the simulation results using a proposed cost function. The FEM simulation can be applied robustly before conducting expensive real-world printing scenarios, so that the intelligent component of the pipeline can decide on early stopping of the printing process. The LSTM-based network predicts the forthcoming temperature rate of change across the simulated powder bed from the previously seen temperature history, giving us a means of control for achieving an optimal printing process, as visualized by our results.
## Acknowledgements
The current work was supported by the European Regional Development Fund, EFRE 85037495.
|
2305.08273 | Decoupled Graph Neural Networks for Large Dynamic Graphs | Real-world graphs, such as social networks, financial transactions, and
recommendation systems, often demonstrate dynamic behavior. This phenomenon,
known as graph stream, involves the dynamic changes of nodes and the emergence
and disappearance of edges. To effectively capture both the structural and
temporal aspects of these dynamic graphs, dynamic graph neural networks have
been developed. However, existing methods are usually tailored to process
either continuous-time or discrete-time dynamic graphs, and cannot be
generalized from one to the other. In this paper, we propose a decoupled graph
neural network for large dynamic graphs, including a unified dynamic
propagation that supports efficient computation for both continuous and
discrete dynamic graphs. Since graph structure-related computations are only
performed during the propagation process, the prediction process for the
downstream task can be trained separately without expensive graph computations,
and therefore any sequence model can be plugged-in and used. As a result, our
algorithm achieves exceptional scalability and expressiveness. We evaluate our
algorithm on seven real-world datasets of both continuous-time and
discrete-time dynamic graphs. The experimental results demonstrate that our
algorithm achieves state-of-the-art performance in both kinds of dynamic
graphs. Most notably, the scalability of our algorithm is well illustrated by
its successful application to large graphs with up to over a billion temporal
edges and over a hundred million nodes. | Yanping Zheng, Zhewei Wei, Jiajun Liu | 2023-05-14T23:00:10Z | http://arxiv.org/abs/2305.08273v1 | # Decoupled Graph Neural Networks for Large Dynamic Graphs
###### Abstract.
Real-world graphs, such as social networks, financial transactions, and recommendation systems, often demonstrate dynamic behavior. This phenomenon, known as graph stream, involves the dynamic changes of nodes and the emergence and disappearance of edges. To effectively capture both the structural and temporal aspects of these dynamic graphs, dynamic graph neural networks have been developed. However, existing methods are usually tailored to process either continuous-time or discrete-time dynamic graphs, and cannot be generalized from one to the other. In this paper, we propose a decoupled graph neural network for large dynamic graphs, including a unified dynamic propagation that supports efficient computation for both continuous and discrete dynamic graphs. Since graph structure-related computations are only performed during the propagation process, the prediction process for the downstream task can be trained separately without expensive graph computations, and therefore any sequence model can be plugged-in and used. As a result, our algorithm achieves exceptional scalability and expressiveness. We evaluate our algorithm on seven real-world datasets of both continuous-time and discrete-time dynamic graphs. The experimental results demonstrate that our algorithm achieves state-of-the-art performance in both kinds of dynamic graphs. Most notably, the scalability of our algorithm is well illustrated by its successful application to large graphs with up to over a billion temporal edges and over a hundred million nodes.
## 1. Introduction

In this paper, we propose generic propagation methods for both continuous-time and discrete-time dynamic graphs.
We observe that CTDG methods, such as TGN (TGN et al., 2017), keep track of nodes affected by each graph event and adjust their embeddings, avoiding relearning the embeddings of all nodes and conserving computing resources. Since each snapshot is treated as a static graph in DTDG methods, edge deletion and the simultaneous occurrence of multiple graph events are naturally handled. Our objective is to develop a novel dynamic graph neural network that combines the strengths of both CTDG and DTDG methods. To achieve this, we introduce incremental node embedding update strategies specifically designed for handling batch graph events. This allows our model to process batch events similar to DTDG methods, while also keeping track of embedding changes akin to CTDG methods. Notably, our update strategy is not limited to adding new edges but also works seamlessly for removing edges. The main contributions can be summarized as follows:
* We propose a decoupled graph neural network for large dynamic graphs, which decouples the temporal propagation and prediction processes on dynamic graphs, enabling us to achieve great scalability and generate effective representation.
* We support the processing of continuous-time and discrete-time dynamic graphs by designing a generalized dynamic feature propagation. Moreover, by configuring different propagation formulas, the model can fit various high-pass or low-pass graph filters to obtain a comprehensive temporal representation.
* Extensive experiments on seven benchmark datasets demonstrate the effectiveness of our method. Experimental results show that our model outperforms existing state-of-the-art methods. In addition, we evaluate our method on two large-scale graphs to show its excellent scalability.
## 2. Notations and Preliminary
In this section, we first introduce the necessary notations. Then we provide a concise overview of the classification of dynamic graphs and the common learning tasks associated with them.
**Notations.** A static graph is denoted as \(G=(V,E)\), where \(V\) is the set of \(n\) nodes, and \(E\) represents the set of \(m\) edges. Let \(\mathbf{A}\in\mathbb{R}^{n\times n}\) represent the adjacency matrix of \(G\), with entry \(\mathbf{A}(i,j)=w_{(i,j)}>0\) being the weight of the edge between node \(i\) and \(j\), and \(\mathbf{A}(i,j)=w_{(i,j)}=0\) indicates non-adjacency. The degree matrix \(\mathbf{D}\in\mathbb{R}^{n\times n}\) is a diagonal matrix defined by \(\mathbf{D}(i,i)=d(i)=\sum_{j\in V}w_{(i,j)}\). Each node \(i\in V\) has a \(d\)-dimensional features vector \(\mathbf{x}_{i}\), and all feature vectors form the feature matrix \(\mathbf{X}\in\mathbb{R}^{n\times d}\).
**Dynamic Graphs.** Dynamic Graphs can be summarized into two categories, CTDGs and DTDGs, depending on whether the entire timestamp is saved (Grover and Leskovec, 2015). A CTDG is composed of an initial graph and a sequence of events, denoted as \((G,S)\), where \(G\) is the initial state of the dynamic graph at time \(t_{0}\) and \(S\) is a set of observed events on the graph. Each event consists of a triplet of _(event type, event, timestamp)_, where the _event type_ can be edge additions, edge deletions, node additions, node deletions, node feature modifications, and so on. Therefore, \(G_{t}\) is the new graph generated from the initial graph \(G\) by sequentially completing the graph events of \(\{t_{1}\sim t\}\). Figure 1(a) shows an example of updating from an empty graph with only five nodes to the graph \(G_{5}\) at time \(t_{5}\), where the graph events involved are:
\[S=\{ (AddEdge,(v_{1},v_{5}),t_{1}),(AddEdge,(v_{2},v_{4}),t_{1}),\] \[(AddEdge,(v_{1},v_{4}),t_{2}),(AddEdge,(v_{3},v_{4}),t_{2}),\] \[(AddEdge,(v_{3},v_{5}),t_{3}),(DeleteEdge,(v_{3},v_{4}),t_{4}),\] \[(AddEdge,(v_{1},v_{3}),t_{5})\}\;.\]
A DTDG is represented as a sequence of snapshots, \(\{G_{0},\ldots,G_{T}\}\), which are sampled at regular time intervals. Figure 1(b) illustrates that the second snapshot, \(G_{1}\), of the DTDG can be considered as the graph snapshot captured by the CTDG in Figure 1(a) at time \(t_{5}\). However, it is important to note that the events occurring between \(t_{1}\) and \(t_{5}\) and their respective order are disregarded. Consequently, the DTDG fails to recognize the existence of the previous edge \((v_{3},v_{4})\) in the graph.
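For illustration, the two views can be stored and converted as follows; this is a toy sketch with our own data structures (not tied to any particular library), replaying the event list above.

```python
from dataclasses import dataclass

@dataclass
class Event:
    kind: str        # "AddEdge" or "DeleteEdge"
    edge: tuple      # (u, v)
    t: int           # timestamp

# CTDG: an initial (here empty) edge set plus a timestamped event stream,
# assumed sorted by timestamp
events = [
    Event("AddEdge", (1, 5), 1), Event("AddEdge", (2, 4), 1),
    Event("AddEdge", (1, 4), 2), Event("AddEdge", (3, 4), 2),
    Event("AddEdge", (3, 5), 3), Event("DeleteEdge", (3, 4), 4),
    Event("AddEdge", (1, 3), 5),
]

def snapshot(events, t):
    """Replay all events up to time t to obtain the graph G_t."""
    edges = set()
    for e in events:
        if e.t > t:
            break
        if e.kind == "AddEdge":
            edges.add(e.edge)
        elif e.kind == "DeleteEdge":
            edges.discard(e.edge)
    return edges

# A DTDG is just a list of such snapshots sampled at regular intervals;
# note that the intermediate add/delete of edge (3, 4) is invisible in G_5.
G5 = snapshot(events, 5)   # {(1,5), (2,4), (1,4), (3,5), (1,3)}
```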
**Graph Learning Tasks.** Node classification and link prediction are traditional learning tasks for static graphs. We assume that each node is tagged with a label \(\mathrm{Y}(i)\) from the label matrix \(\mathbf{Y}\), but only the labels on a subset \(V^{\prime}\subset V\) are known. The objective of the node classification problem is to infer the unknown labels on \(V\setminus V^{\prime}\). In community detection, for instance, the label assigned to each node represents the community to which it belongs. Link prediction is the classical task of graph learning. It predicts whether an edge exists between two nodes that were not initially connected, inferring missing edges in \(E\). In social networks, link prediction is also known as the friend recommendation task, predicting whether a user is interested in another.
Similarly, there are node-level and edge-level prediction tasks for dynamic graphs. Based on the historical information observed so far, we are able to accomplish dynamic node classification and future link prediction defined as follows.
Definition 1 (Dynamic Node Classification).: _For a given graph \(G_{t}=(V_{t},E_{t})\) and the incomplete label matrix \(\mathbf{Y}_{t}^{\prime}\), where \(G_{t}\) can be regarded as the graph at timestamp \(t\) in a CTDG or the \(t\)-th snapshot in a DTDG, and \(\mathbf{Y}_{t}^{\prime}\) associates a subset \(V_{t}^{\prime}\subset V_{t}\) with known class labels at timestamp/snapshot \(t\), dynamic node classification is to classify the remaining nodes with unknown labels and estimate the label matrix \(\mathbf{Y}_{t}\)._
Definition 2 (Future Link Prediction).: _For a given timestamp/snapshot \(t\) and two nodes \(i,j\in V_{t}\), future link prediction aims to predict whether edge \((i,j)\) will be generated in the next timestamp/snapshot or not, based on observations learned from all nodes and their links before timestamp \(t\), i.e. observations of \(\{G_{0},\ldots,G_{t}\}\)._
Figure 1. Two types of dynamic graphs.
## 3. Related Works
The Encoder-Decoder framework is a commonly used model in machine learning, which has been applied to various tasks such as unsupervised auto-encoder (He et al., 2017) and neural network machine translation models (Wang et al., 2017). Recently, researchers have demonstrated that the Encoder-Decoder framework can generalize most high-performing dynamic graph learning algorithms (Han et al., 2017; Wang et al., 2017).
### CTDGs Learning Methods
It is important to capture the changes in node embedding caused by every graph event when learning CTDGs. Most methods follow the training strategy that the encoder receives a sequence of graph events as input and reflects their influence in node embeddings. The decoder can therefore be a sequence learning model or a static network such as Multilayer Perceptron (MLP) or Support Vector Machine (SVM).
CTDNE (Krizhevsky et al., 2012) uses the temporal random walk as an encoder and designs three strategies for selecting the next-hop node in dynamic graphs. The introduction of temporal information reduces the uncertainty of embedding, resulting in better performance. The temporal point process is utilized by DyREP (Krizhevsky et al., 2012) to capture temporal changes at the node and graph levels; it builds embeddings of target nodes by aggregating information from neighboring nodes, where the neighbors are limited by biasing the hop-count selection of the temporal point process. These methods inherit the deficiencies of conventional graph representation learning methods, such as their inability to incorporate node attributes.
More prevalent CTDG learning encoders are based on Recurrent Neural Networks (RNNs), where the RNN generates memories from observed events associated with the target node via a memory function. The representative model TGN (Krizhevsky et al., 2012) comprises a memory component and an embedding component, where the memory component stores the historical memory of the given node. JODIE (Krizhevsky et al., 2012), DyREP (Krizhevsky et al., 2012), and TGAT (Wang et al., 2017) can be viewed as variants of TGN (Krizhevsky et al., 2012), and they differ in how they update embeddings and memories. Utilizing an asynchronous mail propagator, APAN (Krizhevsky et al., 2012) enforces that graph events are submitted to the model in timestamped order. Wang et al. (Wang et al., 2017) builds node representations using Causal Anonymous Walks (CAWs), which anonymize the node information on sampled temporal causal routes and apply attention learning to the sampled motifs. The resulting motifs are fed to RNNs to encode each walk as a representation vector. Subsequently, the representations of multiple walks are aggregated into a single vector using a self-attention process for downstream tasks.
Generally, methods specific to CTDGs efficiently learn node embeddings by tracking the impacted nodes for each graph event and updating their embeddings accordingly. However, these methods typically focus on considering the immediate neighbors linked with a graph event, such as the endpoints of an inserted edge, and few consider the impact on second-order neighbors (Beng et al., 2015). Furthermore, the effect on higher-order neighbors or the overall graph is rarely evaluated, and there is limited discussion regarding edge deletions and simultaneous arrivals of multiple events.
### DTDGs Learning Methods
In the DTDGs learning process, temporal patterns are measured by the sequential relationships between snapshots. Some works apply Kalman filtering (Krizhevsky et al., 2012; Krizhevsky et al., 2012) or stacked spatial-temporal graph convolution networks (STGCN) (Krizhevsky et al., 2012) to create dynamic graph embeddings, and then use simple MLPs as decoders to perform the prediction task. More commonly, static methods, such as GAE and VGAE (Vaswani et al., 2017), are used to generate node embeddings of each snapshot. The embeddings are then sorted by time and treated as sequential data, and a sequential decoder is applied to extract the temporal patterns from them.
To obtain embeddings of each snapshot, GraRep (Beng et al., 2015), HOPE (Krizhevsky et al., 2012), and M-NMF (Krizhevsky et al., 2012) construct encoders using matrix decomposition, while DeepWalk (Krizhevsky et al., 2012) and node2vec (Chen et al., 2015) transform the graph structure into node-level embeddings using random walk. These algorithms, however, are shallow embedding methods, meaning that they do not consider the attribute information of the graph. Also, there is no parameter sharing between nodes, which makes these methods computationally inefficient. Graph neural networks are efficient ways for learning both the structure and attribute information of a graph. GNNs follow the message-passing framework, in which each node generates embeddings by aggregating information of neighbors (Chen et al., 2015). To improve the efficiency, Graph Convolutional Network (GCN) (Huang et al., 2016) derives the layer-by-layer propagation formula from the first-order approximation of the localized spectral filters on the graph:
\[\mathbf{H}^{(\ell+1)}=\sigma\left(\mathbf{D}^{-\frac{1}{2}}\mathbf{A}\mathbf{D }^{-\frac{1}{2}}\mathbf{H}^{(\ell)}\mathbf{W}^{(\ell)}\right) \tag{1}\]
where \(\mathbf{A}\) and \(\mathbf{D}\) are the adjacency matrix and degree matrix, respectively. \(\mathbf{W}^{(\ell)}\) is the learnable parameter of layer \(\ell\), and \(\sigma\) is a nonlinear activation function such as ReLU. \(\mathbf{H}^{(\ell)}\) is the learnt node representation at the \(\ell\)-th layer, and \(\mathbf{H}^{(0)}=\mathbf{X}\). AddGraph (Krizhevsky et al., 2012) employs GCN as the encoder to analyze the structural information of each snapshot, while a sequence decoder is used to determine the relationships between snapshots. Graph Attention Network (GAT) (Chen et al., 2015) is an attention mechanism based on GCN that assigns various weights to the features of neighbors via weighted summation. DyGAT (Deng et al., 2015) employs GAT as an encoder for DTDGs learning, and node embeddings are generated by jointly computing self-attentions of neighborhood structure and time dimensions.
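To make Equation 1 concrete, a single propagation layer can be sketched as follows (a plain symmetric-normalized variant in NumPy; the self-loops that GCN usually adds to \(\mathbf{A}\) are omitted for brevity, and all names are our own):

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN layer: ReLU(D^{-1/2} A D^{-1/2} H W), cf. Equation 1;
    assumes every node has positive degree."""
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_hat = D_inv_sqrt @ A @ D_inv_sqrt    # symmetrically normalized adjacency
    return np.maximum(A_hat @ H @ W, 0.0)  # ReLU activation
```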
Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 2015) is a widely used sequence model, known for its ability to effectively capture long-term temporal dependencies and correlations. Therefore, Seo et al. (Seo et al., 2017) and Manessi et al. (Mansi et al., 2018) use LSTM as decoders, and their encoders are GCNs or their different versions. The combination structure of GNNs and LSTMs has demonstrated its efficacy in object detection (Krizhevsky et al., 2012) and pandemic forecasting (Krizhevsky et al., 2012) areas. EvolveGCN (Huang et al., 2016) uses LSTM and Gate Recurrent Unit (GRU) to update the GCN's parameters at each snapshot since it focuses on the evolution of the GCN's parameters rather than the node representation at each snapshot. These DTDG-specific methods typically treat each snapshot as a static graph, making it easy for them to address edge deletion and many simultaneous edges. However, they are unable to track the particular impact of each graph event on node embedding, and the recomputation of each snapshot is computationally expensive.
## 4. Decoupled Graph Neural Network
**Overview**. As previously indicated, we aim to design a decoupled GNN with high scalability for dynamic graphs. In addition, we
also require the model to operate multi-event arrivals simultaneously and support edge deletion while keeping tracking changes in node embedding, which incorporates the benefits of the CTDGs-specific and DTDGs-specific models. Therefore, inspired by the scalable static GNN framework (Beng et al., 2017; Wang et al., 2018), we develop a decoupled GNN for large dynamic graphs, in which the dynamic propagation of the graph is decoupled from the prediction process. To enable efficient computation on large-scale dynamic graphs, we employ dynamic propagation with strict error guarantees (as described in Section 4.1). This approach eliminates learning parameters in the propagation process, facilitating independent graph propagation for generating temporal representations of all nodes. The prediction process focuses on learning the underlying graph dynamics from the representations of nodes, which does not contain expensive graph computations, enabling the use of arbitrary learning models, as described in Section 4.2.
**Scalable GNNs.** In order to improve the scalability of GNN models, a line of research tries to decouple the propagation and prediction steps of conventional GNN layers. The idea behind them, first proposed by SGC (Wang et al., 2018), is to apply MLPs to batches of nodes without taking the graph structure into account. For implementation, the representation matrix \(\mathbf{Z}\) is generated first following this general propagation formulation:
\[\mathbf{Z}=\sum_{k=0}^{\infty}\gamma_{k}(\mathbf{D}^{-a}\mathbf{A}\mathbf{D}^{ -b})^{k}\mathbf{X}\,, \tag{2}\]
where \(\mathbf{X}\) denotes the input feature matrix, \(a\) and \(b\) are convolution coefficients, and \(\gamma_{k}(k=0,1,2,\dots)\) is the weight of the \(k\)-th step convolution. When \(a=b=\frac{1}{2}\) and \(\gamma_{k}=1\), Equation 2 can be considered a GCN with an infinite number of layers, i.e., a stack of infinitely many layers of Equation 1, although the parameters of each layer are discarded for better scalability. An MLP then takes the representation matrix \(\mathbf{Z}\) as input and is trained for downstream tasks. Mini-batch training can be easily accomplished since node representations can be viewed as distinct input samples for the neural network. Numerous models, including APPNP (Grover et al., 2017), SGC (Wang et al., 2018), and GBP (Chen et al., 2018), can be regarded as versions of Equation 2 constructed by choosing different values for \(a\), \(b\), and \(\gamma_{k}\). By varying \(\gamma_{k}\), Equation 2 can approximate any form of graph filter. For instance, Equation 2 corresponds to a low-pass graph filter when all \(\gamma_{k}(k=0,1,2,\dots)\) satisfy \(\gamma_{k}\geq 0\), and to a high-pass filter when \(\gamma_{k}\) is of the form \((-\alpha)^{k}\) with \(\alpha\in(0,1)\). For simplicity, we assume in this paper that \(a=\beta\), \(b=1-\beta\), and the sequence of \(\gamma_{k}\) is a geometric progression with a common ratio \(\gamma=\frac{\gamma_{k+1}}{\gamma_{k}}\) and \(0<|\gamma|<1\).
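As a reference point, Equation 2 with the Personalized PageRank weighting \(\gamma_{k}=\alpha(1-\alpha)^{k}\) (used later in our experiments) can be approximated by simply truncating the sum after \(K\) steps. The dense NumPy sketch below assumes positive degrees and is meant only to make the formula concrete; the paper instead relies on the approximate push-based propagation introduced next.

```python
import numpy as np

def propagate(A, X, alpha=0.1, beta=0.5, K=32):
    """Truncated version of Z = sum_k gamma_k (D^{-a} A D^{-b})^k X
    with a = beta, b = 1 - beta and gamma_k = alpha * (1 - alpha)^k."""
    d = A.sum(axis=1)
    Dl = np.diag(d ** -beta)          # D^{-a} with a = beta
    Dr = np.diag(d ** (beta - 1.0))   # D^{-b} with b = 1 - beta
    P = Dl @ A @ Dr
    Z = np.zeros_like(X)
    term = X.copy()                   # (P^0) X
    for k in range(K + 1):
        Z += alpha * (1 - alpha) ** k * term
        term = P @ term               # advance to (P^{k+1}) X
    return Z
```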
We aim to extend the previous concept to dynamic graphs. Firstly, we derive temporal representations for all nodes in the graph based on dynamic approximate propagation, which can be efficiently pre-computed. Next, we batch the structurally enhanced temporal representations of nodes and feed them into the learning model. This decoupling framework, derived from scalable static GNNs, permits the use of any sequence model while preserving high scalability.
**Approximate propagation.** The summation in Equation 2 goes to infinity, which makes it computationally infeasible. Following PPRGo (Beng et al., 2017) and AGP (Shen et al., 2017), we consider its approximate version. By representing each dimension of the feature matrix as an \(n\)-dimensional vector \(\mathbf{x}\), the feature matrix can be turned into a sequence \(\{\mathbf{x}_{0},...,\mathbf{x}_{d-1}\}\), where the propagation of each vector is conducted independently. Equation 2 can therefore be expressed in an equivalent vector form: \(\mathbf{\pi}=\sum_{k=0}^{\infty}\gamma_{k}(\mathbf{D}^{-\beta}\mathbf{A}\mathbf{D}^{\beta-1})^{k}\mathbf{x}\). As illustrated in Algorithm 1, we generalize the propagation algorithm of (Shen et al., 2017) to a weighted version to support weighted graph neural networks and relax the requirement for positive weight coefficients. We denote the approximate solution as \(\hat{\mathbf{\pi}}\), and the cumulative error of all steps as \(\mathbf{r}\). For initialization, we set \(\hat{\mathbf{\pi}}=0\) and \(\mathbf{r}=\mathbf{x}\). The propagation starts from any node whose residual exceeds the error tolerance \(r_{max}\). The node then distributes portions of its residual to its neighboring nodes according to the edge weights, and the remainder is converted into its estimate to record the amount of information already propagated by that node. The feature propagation concludes when the residuals of all graph nodes satisfy the error bound.
```
Input: Graph \(G\), weight coefficients \(\gamma_{k}\), convolutional coefficient \(\beta\), threshold \(r_{max}\), initialized \((\hat{\mathbf{\pi}},\mathbf{r})\)
1 while exist \(i\in V\) with \(|\mathbf{r}(i)|>r_{max}\cdot d(i)^{1-\beta}\) do
2   \(\hat{\mathbf{\pi}}(i)\leftarrow\hat{\mathbf{\pi}}(i)+\gamma_{0}\cdot\mathbf{r}(i)\);
3   foreach \(j\in N(i)\) do
4     \(\mathbf{r}(j)\leftarrow\mathbf{r}(j)+\frac{\gamma\cdot w_{(i,j)}\cdot\mathbf{r}(i)}{d(i)^{1-\beta}\,d(j)^{\beta}}\);
5   \(\mathbf{r}(i)\leftarrow 0\);
6 return \((\hat{\mathbf{\pi}},\mathbf{r})\);
```
**Algorithm 1**GeneralPropagation
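A direct Python transcription of Algorithm 1 might look as follows; the adjacency is stored as weighted neighbor dictionaries and the names are our own, so treat this as a sketch rather than an optimized implementation.

```python
def general_propagation(neighbors, degree, gamma0, gamma, beta, r_max, pi_hat, r):
    """Push-style propagation of Algorithm 1. `neighbors[i]` maps each
    neighbor j of node i to the edge weight w_(i,j); `degree[i]` is the
    weighted degree d(i); gamma0 and gamma are the first weight and the
    common ratio of the geometric weight sequence."""
    frontier = [i for i in r if abs(r[i]) > r_max * degree[i] ** (1 - beta)]
    while frontier:
        i = frontier.pop()
        if abs(r[i]) <= r_max * degree[i] ** (1 - beta):
            continue                     # residual already small enough
        pi_hat[i] = pi_hat.get(i, 0.0) + gamma0 * r[i]
        push = gamma * r[i] / degree[i] ** (1 - beta)
        for j, w in neighbors[i].items():
            r[j] = r.get(j, 0.0) + push * w / degree[j] ** beta
            if abs(r[j]) > r_max * degree[j] ** (1 - beta):
                frontier.append(j)       # j's residual now exceeds the bound
        r[i] = 0.0
    return pi_hat, r
```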
The neural network model receives the structurally enhanced feature matrix \(\hat{\mathbf{Z}}=(\hat{\mathbf{\pi}}_{0},...,\hat{\mathbf{\pi}}_{d-1})\) as input and is trained to obtain the final node representations for the downstream task. For instance, the multi-label node classification task typically uses \(\mathbf{Y}=softmax(MLP(\hat{\mathbf{Z}}))\). By decoupling propagation and prediction, the model training complexity is independent of the graph topology, which enhances training efficiency while enabling the use of sophisticated prediction networks.
### Dynamic Propagation
We consider a dynamic graph \(\mathcal{G}=\{G_{0},G_{1},...,G_{T}\}\), where each \(G_{t}(t\in[0,T])\) is the graph derived from the initial graph in a CTDG after finishing the graph events before timestamp \(t\), or the \(t\)-th snapshot in a DTDG. That is, \(G_{t}\) refers to the \(t\)-th observed status of the dynamic graph. We are not concerned with how \(G_{t}\) is obtained, i.e. how the dynamic graph \(\mathcal{G}\) is stored. The overall update procedure is summarized in Algorithm 2. We obtain the feature propagation matrix for each \(G_{t}\), and \(\mathcal{I}_{t}\) is derived iteratively from \(\mathcal{I}_{t-1}\), as in lines 9-19. The estimated vector \(\hat{\mathbf{\pi}}\) and residual vector \(\mathbf{r}\) inherit the propagation results from the previous time step and make the necessary updates based on the current graph structure.
To construct sequential representations for all nodes in the dynamic graph, it is necessary to comprehend how to quantify the impact that changes to the network have on every node. Therefore, each node should have its individual observation perspective, with a unique comprehension of each graph modification. To improve the computational efficiency, we propose to incrementally compute the node representation when the graph changes. We start with the
following theorem on invariant properties. Due to the page limit, we defer the proof to the technical report (Bartos et al., 2017).
Theorem 1 (The Invariant Property).: _Suppose \(\hat{\mathbf{\pi}}(i)\) is the estimate of node \(i\), \(\mathbf{r}(i)\) is its residual, and \(\mathbf{x}(i)\) is its input feature. Then, for each node \(i\in V\), \(\hat{\mathbf{\pi}}(i)\) and \(\mathbf{r}(i)\) satisfy the following invariant property:_
\[\hat{\mathbf{\pi}}(i)+\gamma_{0}\,\mathbf{r}(i)=\gamma_{0}\,\mathbf{x}(i)+\sum_{j\in N(i)}\frac{\gamma\cdot w_{(i,j)}\cdot\hat{\mathbf{\pi}}(j)}{d(i)^{\beta}d(j)^{1-\beta}}. \tag{3}\]
**Generalized update rules.** Without loss of generality, we assume that an edge \((u,v)\) with weight \(w_{(u,v)}\) is inserted into the graph. According to Equation 3, the set of affected nodes is \(V_{A}=\{u\}\cup N(u)\). For node \(u\), the increment caused by the insertion can be quantified as
\[\Big(\hat{\mathbf{\pi}}(u)+\gamma_{0}\mathbf{r}(u)-\gamma_{0}\mathbf{x}(u)\Big)\,\frac{d(u)^{\beta}-(d(u)+w_{(u,v)})^{\beta}}{\gamma_{0}\,(d(u)+w_{(u,v)})^{\beta}}+\frac{\gamma\cdot w_{(u,v)}\cdot\hat{\mathbf{\pi}}(v)}{\gamma_{0}\,(d(u)+w_{(u,v)})^{\beta}\,d(v)^{1-\beta}},\]
since the degree is updated to \(d(u)+w_{(u,v)}\) and a new neighbor \(v\) appears. According to the meaning of the estimate and the residual, we add this increment to the residual of node \(u\). Similarly, for each node \(w\in N(u)\), the term \(\frac{\hat{\mathbf{\pi}}(u)}{d(u)^{1-\beta}}\) in its equation will be updated to \(\frac{\hat{\mathbf{\pi}}(u)}{(d(u)+w_{(u,v)})^{1-\beta}}\) as a result of the change in node \(u\)'s degree. To guarantee that the update time complexity of each insertion is \(O(1)\), the following updates are performed to prevent alterations at node \(u\)'s neighbors:
* \(\hat{\mathbf{\pi}}(u)=\frac{d(u)^{1-\beta}}{(d(u)+w_{(u,v)})^{1-\beta}}\cdot\hat{ \mathbf{\pi}}(u)\);
* \(\mathbf{r}(u)=\mathbf{r}(u)+\frac{\hat{\mathbf{\pi}}(u)}{\gamma_{0}}\cdot\left(\frac{d(u)^{1-\beta}}{(d(u)+w_{(u,v)})^{1-\beta}}-1\right)\).
The detailed calculation process of the update and its batched version can also be found in the technical report (Bartos et al., 2017). Since none of the variables involved in the equations of the other nodes have changed, the increment induced by the insertion of the edge \((u,v)\) is zero from the perspective of any node \(i\in V\) with \(i\neq u\) and \(i\notin N(u)\). Algorithm 1 is then used to propagate this increment from node \(u\) to its neighbors, informing the other nodes of the change in the graph.
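Putting the increment and the two rescaling updates together, the \(O(1)\) per-edge update can be sketched as follows. The helper names are hypothetical, `x` holds the input feature entries, the remaining state matches `general_propagation` above, and we apply the two bullet updates using the pre-update value of \(\hat{\mathbf{\pi}}(u)\); the symmetric update of node \(v\) for an undirected edge is omitted for brevity.

```python
def insert_edge(u, v, w_uv, neighbors, degree, x, gamma0, gamma, beta, pi_hat, r):
    """Incrementally update (pi_hat, r) for an inserted edge (u, v) so that
    the invariant of Theorem 1 keeps holding; only node u's state changes."""
    d_old, d_new = degree[u], degree[u] + w_uv
    # increment of u's residual from the degree change and the new neighbor v
    inc = (pi_hat.get(u, 0.0) + gamma0 * r.get(u, 0.0) - gamma0 * x.get(u, 0.0)) \
          * (d_old ** beta - d_new ** beta) / (gamma0 * d_new ** beta) \
          + gamma * w_uv * pi_hat.get(v, 0.0) \
          / (gamma0 * d_new ** beta * degree[v] ** (1 - beta))
    r[u] = r.get(u, 0.0) + inc
    # rescale pi_hat(u) so the invariants at u's neighbors stay untouched,
    # and compensate in u's own residual (the two bullet updates above)
    ratio = d_old ** (1 - beta) / d_new ** (1 - beta)
    r[u] += pi_hat.get(u, 0.0) / gamma0 * (ratio - 1.0)
    pi_hat[u] = pi_hat.get(u, 0.0) * ratio
    # bookkeeping for the graph structure itself
    degree[u] = d_new
    neighbors.setdefault(u, {})[v] = w_uv
    neighbors.setdefault(v, {})[u] = w_uv
    # for an undirected edge, degree[v] grows as well; node v would be
    # updated by mirroring the steps above
```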
The preceding procedure can be easily generalized to the case of deleting the edge \((u,v)\) with weight \(w_{(u,v)}\) by simply replacing \((d(u)+w_{(u,v)})\) with \((d(u)-w_{(u,v)})\). Therefore, it is unnecessary to recalculate the feature propagation when the graph changes; instead, the current propagation matrix is obtained incrementally from the previous calculation results. In addition, Theorem 2 guarantees the propagation error on each \(G_{t}\).
Theorem 2 (Error Analysis).: _Suppose \(\hat{\mathbf{\pi}}_{t}(i)\) is the estimate of node \(i\) at time \(t\), \(\mathbf{\pi}_{t}(i)\) is its ground-truth estimate at time \(t\), \(d(i)_{t}\) is its degree at time \(t\), and \(r_{max}\) is the error threshold, for each node \(i\in V\), we have \(|\mathbf{\pi}_{t}(i)-\hat{\mathbf{\pi}}_{t}(i)|\leq r_{max}\cdot d(i)_{t}^{1-\beta}\) holds for \(\forall t\in\{0,1,\ldots,T\}\)._
**Handle CTDGs.** We can utilize the aforementioned update strategy to handle incoming graph events accompanied by either inserting or removing edges. For each arriving edge \((u,v)\), we can ensure that Equation 3 holds at all nodes by updating only the estimate and residual at node \(u\), and the time complexity of the update is \(O(1)\). Therefore, the above update strategy can be well adapted to sequences of frequently arriving graph events in CTDGs.
**Handle DTDGs.** For two successive snapshots \(G_{t-1}\) and \(G_{t}\) in a DTDG, we regard the changes between the two snapshots as graph events arriving simultaneously, and it is easy to statistically extract \(V_{A}\). We can then compute exactly the increment between the two snapshots by substituting \(w_{(u,v)}\) with the degree change \(\Delta d(u)=d(u)_{t}-d(u)_{t-1}\) for each affected node \(u\in V_{A}\). Similarly, Algorithm 1 transmits information about the changes in the graph to other nodes. Therefore, when a new snapshot \(G_{t}\) arrives, we incrementally update the feature propagation matrix based on the feature propagation results of snapshot \(G_{t-1}\). Since batches of graph updates can be efficiently processed, our method naturally supports DTDGs and maintains tracking of the underlying node embeddings.
**Remark.** In comparison to CTDG-specific methods, our approach updates the dynamic graph based on timestamps rather than relying solely on the order of each edge. Since it is common for multiple graph events to occur simultaneously at a single time step in real-world scenarios, handling each event individually would be suboptimal. In contrast to DTDG-specific methods, where each graph snapshot is treated as a static graph, we adopt an incremental
update approach based on the differences between two successive snapshots. With this strategy, we neither recalculate the underlying node embeddings for each snapshot nor disregard the changes to them.
### Prediction
In this section, we provide illustrations of the prediction phase by considering dynamic node classification and future link prediction as examples. These two tasks are defined in detail in Section 2.
**Dynamic node classification.** We can incrementally obtain the feature propagation matrix at each time \(t\) using Algorithm 2. Each row of the feature propagation matrix \(\hat{\mathbf{Z}}_{t}\), denoted as a \(d\)-dimensional vector \(\mathbf{z}_{t,i}\), is the structurally enhanced representation vector of node \(i\in V\) at time \(t\), under the error tolerance control described in Theorem 2. Since the aggregation of node features based on the graph structure has already been completed during the propagation process, the node representation vectors \(\mathbf{z}_{t,i}(t\!=\!1,\ldots,T)\) for each node \(i\!\in\!V\) can be regarded as standard input vectors for neural networks at this stage. For instance, we use a two-layer MLP to predict the label of node \(i\) at time \(t\) as \(\mathbf{Y}_{t}(i)=softmax(MLP(\mathbf{z}_{t,i}))\).
**Future link prediction.** In this task, we aim to learn the temporal pattern of each node to forecast whether two given nodes will be linked at a given time. The changes in a node's representation over time can be regarded as a time series, and the temporal information contained within it can be captured by a common temporal model such as LSTM. Notice that the sequence \(\{\mathbf{z}_{1,i},\ldots,\mathbf{z}_{T,i}\}\) describes the dynamic network from node \(i\)'s own, subjective perspective. As a result, when changes in the graph have a large influence on node \(i\), the representation vector \(\mathbf{z}_{t,i}\) changes significantly with respect to the previous moment \(\mathbf{z}_{t-1,i}\); when changes in the graph have little influence on node \(i\), the vector changes little from the previous state \(\mathbf{z}_{t-1,i}\). Note that the degree of influence is related to the final feature propagation matrix generated by Algorithm 2; node \(i\)'s perception of the degree of graph change is thus influenced by the descriptions of its neighboring nodes through the propagation process. The future link prediction task involves the following three steps.
* **Firstly**, we calculate the difference between each node's states in two consecutive graph states as \(\mathbf{\delta}_{t,i}=g(\mathbf{z}_{t,i},\mathbf{z}_{t-1,i})\), where \(g(\cdot)\) is a distance measure function. We implement \(g(\cdot)\) as a simple first-order distance, although it could also be an \(\ell_{2}\)-norm, cosine similarity, or another more sophisticated design. Based on the above, we interpret \(\mathbf{\delta}_{t,i}(s)\) as the score of graph changes from the perspective of node \(i\) in the \(s\)-th feature dimension.
* **Secondly**, a sequence model, such as LSTM, directly takes the sequence \(\{\mathbf{\delta}_{1,i},\ldots,\mathbf{\delta}_{t,i}\}\) as input to capture the temporal patterns for node \(i\). Since the graph structure information is already included in \(\mathbf{z}_{t,i}\) and \(\mathbf{\delta}_{t,i}\), the sequence model can be employed more effectively by focusing solely on temporal patterns. The predicted state at time \(t\) is denoted as \(\mathbf{h}_{t}=\mathcal{M}(\mathbf{h}_{t-1},\mathbf{\delta}_{t})\), where \(\mathcal{M}\) is the chosen sequence learning model, \(\mathbf{\delta}_{t}\) is the current input vector, and \(\mathbf{h}_{t-1}\) is the learned prior state. The standard LSTM cell is defined by the following formulas: \[\begin{split}\mathbf{i}_{t}&=\sigma(\mathbf{W}_{i}\mathbf{h}_{t-1}+\mathbf{U}_{i}\mathbf{\delta}_{t}+\mathbf{b}_{i}),\\ \mathbf{f}_{t}&=\sigma(\mathbf{W}_{f}\mathbf{h}_{t-1}+\mathbf{U}_{f}\mathbf{\delta}_{t}+\mathbf{b}_{f}),\\ \mathbf{o}_{t}&=\sigma(\mathbf{W}_{o}\mathbf{h}_{t-1}+\mathbf{U}_{o}\mathbf{\delta}_{t}+\mathbf{b}_{o}),\\ \tilde{\mathbf{c}}_{t}&=\tanh(\mathbf{W}_{c}\mathbf{h}_{t-1}+\mathbf{U}_{c}\mathbf{\delta}_{t}+\mathbf{b}_{c}),\\ \mathbf{c}_{t}&=\mathbf{f}_{t}\odot\mathbf{c}_{t-1}+\mathbf{i}_{t}\odot\tilde{\mathbf{c}}_{t},\\ \mathbf{h}_{t}&=\mathbf{o}_{t}\odot\tanh(\mathbf{c}_{t}),\end{split} \tag{4}\] where \(\sigma\) is the sigmoid activation function, \(\odot\) denotes the element-wise (Hadamard) product, and \(\mathbf{i}_{t}\), \(\mathbf{f}_{t}\) and \(\mathbf{o}_{t}\) represent the degree parameters of the input gate, forget gate and output gate of the LSTM cell at time \(t\). \(\{\mathbf{W}_{i},\mathbf{U}_{i},\mathbf{b}_{i}\}\), \(\{\mathbf{W}_{f},\mathbf{U}_{f},\mathbf{b}_{f}\}\), and \(\{\mathbf{W}_{o},\mathbf{U}_{o},\mathbf{b}_{o}\}\) are their corresponding network parameters, respectively. \(\tilde{\mathbf{c}}_{t}\) denotes the candidate state used to update the cell state, and \(\{\mathbf{W}_{c},\mathbf{U}_{c},\mathbf{b}_{c}\}\) are the parameters for generating candidate memories. The output vector \(\mathbf{h}_{t}\) at the current time \(t\) is formed from \(\mathbf{c}_{t}\) after the output gate has discarded some information. Note that the LSTM cell could be replaced by a GRU cell or a Transformer cell, as \(\mathcal{M}\) is free from graph-related computations.
* **Finally**, we combine the pair of hidden states of nodes \(i\) and \(j\) as \(\mathbf{\varphi}_{t}(i,j)=f(\mathbf{h}_{t,i},\mathbf{h}_{t,j})\), where \(f(\cdot)\) is the combine function; following previous work (Wang et al., 2019; Wang et al., 2019), we use concatenation in our experiments. The probability score of edge \((i,j)\)'s existence at time \(t\) is then given by \(\mathbf{Y}_{t}(i,j)=\sigma(MLP(\mathbf{\varphi}_{t}(i,j)))\). The whole pipeline is sketched below.
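A condensed PyTorch-style sketch of these three steps; the dimensions, module names, and toy tensors are placeholders of our own, with \(g\) as the first-order distance and \(f\) as concatenation.

```python
import torch
import torch.nn as nn

d, hidden = 128, 64
lstm = nn.LSTM(input_size=d, hidden_size=hidden, batch_first=True)
scorer = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(),
                       nn.Linear(hidden, 1), nn.Sigmoid())

def link_probability(z_i, z_j):
    """z_i, z_j: (T, d) temporal representations of nodes i and j
    obtained from the dynamic propagation of Algorithm 2."""
    # Step 1: first-order differences delta_t = z_t - z_{t-1}
    delta_i = (z_i[1:] - z_i[:-1]).unsqueeze(0)   # (1, T-1, d)
    delta_j = (z_j[1:] - z_j[:-1]).unsqueeze(0)
    # Step 2: run the sequence model over the difference sequences
    h_i, _ = lstm(delta_i)
    h_j, _ = lstm(delta_j)
    # Step 3: combine the last hidden states and score the edge (i, j)
    phi = torch.cat([h_i[0, -1], h_j[0, -1]], dim=-1)
    return scorer(phi)                            # probability in (0, 1)

# toy usage with random temporal representations over T = 10 time steps
p = link_probability(torch.randn(10, d), torch.randn(10, d))
```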
## 5. Experiments
In this section, we evaluate the effectiveness of our method on two representative tasks, future link prediction and dynamic node classification, on both CTDGs and DTDGs. Furthermore, we conduct experiments on two large-scale dynamic graphs to demonstrate the scalability of our method.
**Datasets.** We conducted experiments on seven real-world datasets, including Wikipedia (Han et al., 2017), Reddit (Han et al., 2017), UCI-MSG (Rao et al., 2018), Bitcoin-OTC (Rao et al., 2018; Wang et al., 2019), Bitcoin-Alpha (Wang et al., 2019; Wang et al., 2019), GDELT (Rao et al., 2018) and MAG (Wang et al., 2019; Wang et al., 2019). The statistics of datasets are presented in Table 1. In all graphs, the weight of an edge is determined by its frequency of occurrence. More details about the datasets can be found in the technical report (Bahdan et al., 2018).
**Baseline methods.** We compare our method to state-of-the-art dynamic graph neural networks, including TGN (Wang et al., 2019), CAW-Ns (Wang et al., 2019) for CTDGs and ROLAND (Wang et al., 2019) for DTDGs. In the two CTDG datasets, Wikipedia and Reddit, we strictly inherit the baseline results from their papers and follow the experimental setting of TGN (Wang et al., 2019). In the three DTDG datasets, UCI-Message, Bitcoin-Alpha, and Bitcoin-OTC, our experimental setting is closely related to those of EvolveGCN (Wang et al., 2019) and ROLAND (Wang et al., 2019), and we adopt the original paper's stated results. To provide a fair comparison, we employ the same data processing and partitioning techniques as TGN (Wang et al., 2019) and ROLAND (Wang et al., 2019). For the two large-scale datasets GDELT and MAG, we utilize the results reported by TGL (Rao et al., 2018). Other baseline methods are described in Section 3.
### Experiments on CTDGs
**Experimental Setting.** We conduct experiments on the Wikipedia and Reddit datasets in both transductive and inductive settings, following (Wang et al., 2019). In both settings, the first 70% of edges are used as
the training set, 15% are used as the validation set, and the remaining 15% are used as the test set. In the transductive setting, we predict future links for observed nodes in the training set. In the inductive setting, the future linking status of nodes that do not present in the training set is predicted. We formulate the prediction of future links between two nodes as a binary classification problem. More specifically, we assign a label of 1 to indicate that the two nodes will be connected in the future, while a label of 0 signifies that there will be no link between them. The time span of the prediction is one time step. The popular classification metric Average Precision (AP) is employed to evaluate the algorithm's performance on both future link prediction and dynamic node classification tasks. In order to maintain the balance of the data, we generate one negative sample for each test edge or node when computing AP, following the experimental setting in TGN (Zhou et al., 2017) and TGL (Zhou et al., 2017). For our method, we set the weight coefficients \(\gamma_{k}=\alpha(1-\alpha)^{k}\), which is known as the Personalized PageRank weights with a hyperparameter \(\alpha\in(0,1)\). The standard LSTM is utilized as the sequence model to learn the temporal patterns present in the node representation. Since no node features are provided, we use a randomly generated 172-dimensional vector as the initial node feature vector.
**Results.** The results of future link prediction in both transductive and inductive settings are shown in Table 2; the presented results are averages over 10 runs. Our method outperforms the baseline methods in both settings. Interestingly, we did not use the provided edge features and still achieved comparable or even better performance. This may be strongly related to the experimental setting and the datasets. For the current future link prediction task, we simply need to forecast whether a connection will be created between two given nodes in the future; the specifics of that link are practically of no concern. The publicly accessible edge features of Wikipedia and Reddit are derived from the textual content of each edit or post on the respective web page and sub-reddit. The learning objective is to detect whether a user will edit a certain page or post on a given sub-reddit in the future, without predicting the edit or post content. It is possible that the semantic information of the textual material is superfluous: the historical interaction data already contains sufficient information to reveal users' preferences for particular pages and sub-reddits. Our hypothesis is also supported by the results of our method on graphs that lack semantic information.
Table 3 shows the experimental results for dynamic node classification. For node classification, we always use the most recent node representation based on the history observed so far, and a three-layer MLP is employed as the classifier. The results in Table 3 show that our method effectively captures the temporal changes of the nodes in time, thus enabling their correct classification.
### Experiments on DTDGs
**Experimental Setting.** We use three datasets in this experiment: Bitcoin-OTC, Bitcoin-Alpha and UCI-Message. To ensure a fair comparison, we partition the dataset and calculate evaluation measures in the same manner as ROLAND (Zhou et al., 2017). Since node features and edge features are not provided in these three datasets, we generate the 128-dimensional random vector to serve as the initial node feature. The ranking metric, Mean Reciprocal Rank (MRR), is employed to evaluate performance. We collect 1000 negative samples for each positive sample and then record the ranking of positive samples according to predicted probabilities. MRR is calculated independently for each snapshot in the test set, and the average of all snapshots is reported. For our method, we combine the node temporal representations obtained under settings \(\gamma_{k}=\alpha(1-\alpha)^{k}\) and \(\gamma_{k}=\alpha(\alpha-1)^{k}\) to approximate the low-pass and high-pass filters on the graph and
| Dataset | #nodes | #edges | max(\(t\)) | #classes | #node features | #edge features |
|---|---|---|---|---|---|---|
| Wikipedia | 9,227 | 157,474 | 152,757 | 2 | 172 (random) | 172 |
| Reddit | 11,000 | 672,447 | 669,065 | 2 | 172 (random) | 172 |
| UCI-Message | 1,899 | 59,835 | 87 | – | 128 (random) | – |
| Bitcoin-OTC | 5,881 | 35,592 | 138 | – | 128 (random) | 1 |
| Bitcoin-Alpha | 3,783 | 24,186 | 138 | – | 128 (random) | 1 |
| GDELT | 16,682 | 191,290,882 | 170,522 | 81 | 413 | 186 |
| MAG | 121,751,665 | 1,297,748,926 | 120 | 152 | 768 | – |

Table 1. Statistics of the datasets.
| | Wikipedia | Reddit |
|---|---|---|
| Jodie | 81.37 | **70.91** |
| DySAT | 86.30 | 61.70 |
| TGAT | 85.18 | 60.61 |
| TGN | 88.33 | 63.78 |
| APAN | 82.54 | 62.00 |
| **ours** | **89.81** | 67.53 |

Table 3. Dynamic node classification on CTDGs. ROC AUCs (%) are exhibited.
| | Wikipedia Transductive | Wikipedia Inductive | Reddit Transductive | Reddit Inductive |
|---|---|---|---|---|
| GAE | 91.44 ± 0.1 | – | 93.23 ± 0.3 | – |
| VGAE | 91.34 ± 0.3 | – | 92.92 ± 0.2 | – |
| DeepWalk | 90.71 ± 0.6 | – | 83.10 ± 0.5 | – |
| Node2Vec | 91.48 ± 0.3 | – | 84.58 ± 0.5 | – |
| GAT | 94.73 ± 0.2 | 91.27 ± 0.4 | 97.33 ± 0.2 | 95.37 ± 1.1 |
| GraphSAGE | 93.56 ± 0.2 | 91.09 ± 0.3 | 97.65 ± 0.2 | 96.27 ± 0.2 |
| CTDNE | 92.17 ± 0.5 | – | 91.41 ± 0.3 | – |
| Jodie | 94.62 ± 0.5 | 93.11 ± 0.4 | 97.11 ± 0.3 | 94.36 ± 1.1 |
| TGAT | 95.34 ± 0.1 | 93.99 ± 0.3 | 98.12 ± 0.2 | 96.62 ± 0.3 |
| DyRep | 94.59 ± 0.2 | 92.05 ± 0.3 | 97.98 ± 0.1 | 95.68 ± 0.2 |
| TGN | 98.46 ± 0.1 | 97.81 ± 0.1 | 98.70 ± 0.1 | 97.55 ± 0.1 |
| CAW-N-mean | 98.28 ± 0.1 | 98.28 ± 0.1 | 98.72 ± 0.1 | 98.74 ± 0.1 |
| CAW-N-attn | 98.84 ± 0.1 | 98.31 ± 0.1 | 98.80 ± 0.1 | 98.77 ± 0.1 |
| **ours** | **99.16 ± 0.3** | **98.54 ± 0.2** | **99.51 ± 0.5** | **98.81 ± 0.6** |

Table 2. Future link prediction on CTDGs. AP (%) ± standard deviations computed over 10 random seeds are exhibited.
introduce low-frequency and high-frequency information, respectively. Three traditional sequence models, LSTM (He et al., 2017), GRU (He et al., 2017) and Transformer (Zhu et al., 2017), are used to finish the future link prediction task.
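For reference, the per-snapshot MRR protocol described above (ranking each positive edge against its 1000 sampled negatives and averaging the reciprocal ranks) can be sketched as follows; tensor names are our own.

```python
import torch

def snapshot_mrr(pos_scores, neg_scores):
    """pos_scores: (P,) predicted probabilities of the positive test edges;
    neg_scores: (P, 1000) scores of the negatives sampled per positive."""
    # rank of each positive among its own 1000 negatives (1 = best)
    ranks = 1 + (neg_scores >= pos_scores.unsqueeze(1)).sum(dim=1)
    return (1.0 / ranks.float()).mean()

# the reported MRR is the average of snapshot_mrr over all test snapshots
```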
**Results.** The results in Table 4 demonstrate the state-of-the-art performance of our method. On the Bitcoin-OTC dataset, our method outperforms the second-best method, ROLAND, by 41%. The results show that the LSTM model consistently outperforms the GRU model, possibly because the LSTM has more parameters. The Transformer model tends to achieve a higher MRR, which could be attributed to its holistic view of a node's temporal sequence and its reduced reliance on previous hidden states. Additionally, we conducted an ablation study in the technical report (He et al., 2017) to validate the necessity of introducing high-frequency information.
### Experiments on Large Graphs
**Experimental Setting.** To demonstrate the scalability of our method, we conduct experiments on two large-scale real-world graphs, GDELT and MAG. We exclude EvolveGCN (Pareja et al., 2020), ROLAND (You et al., 2022) and CAW-Ns (Wang et al., 2021) from the experiments on the GDELT and MAG datasets, since they run into out-of-memory issues on both. Note that the scalability of the baseline methods JODIE (Kumar et al., 2019), DySAT (Sankar et al., 2020), TGAT (Xu et al., 2020), TGN (Rossi et al., 2020), and APAN (Wang et al., 2021) was not taken into account in their original papers, and their original versions cannot be trained on these two large-scale dynamic graphs. TGL (Zhou et al., 2022) has successfully applied these methods to large-scale graphs by developing a distributed dynamic graph neural network training framework. In contrast, our method can learn large-scale graphs directly. Our method exhibits greater scalability due to the elimination of parameters in the propagation process, allowing the training on these two large-scale graphs to be completed on a single machine. We evaluate all methods on the dynamic node classification task and compare them using the multiple-class classification metric F1-Micro. To guarantee a fair comparison, we ensure that the training, validation and test sets are consistent with the settings in TGL. For our method, we set \(\gamma_{k}=\alpha(1-\alpha)^{k}\) to obtain the temporal representation of each node, and a three-layer MLP is utilized to complete the training for the classification task.
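As a rough sketch of this pre-computation (the truncated power-iteration form and all names are our assumptions; the incremental update of Equation 3 is not reproduced here), the weighted propagation with \(\gamma_{k}=\alpha(1-\alpha)^{k}\) can be written as:

```python
import numpy as np

def temporal_representation(A_norm, X, alpha=0.15, K=10):
    # Truncated propagation  r = sum_{k=0}^{K} alpha * (1 - alpha)^k * A^k X,
    # a low-pass graph filter. No trainable parameters are involved, so the
    # result can be pre-computed once and fed to the downstream three-layer MLP.
    rep = np.zeros_like(X, dtype=float)
    prop = X.astype(float)
    for k in range(K + 1):
        rep += alpha * (1.0 - alpha) ** k * prop
        prop = A_norm @ prop  # one more hop over the (normalized) adjacency
    return rep
```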
**Results.** Table 5 shows the dynamic node classification results. Compared to the baseline methods, we achieve significant performance improvements on both datasets. Specifically, our method improves F1-Micro by 13.6 points on the GDELT dataset and by 9.68 points on the MAG dataset. This indicates that our method can effectively capture the dynamic changes in node representations by precisely locating the directly affected nodes via Equation 3 and quantifying the degree of graph change. The propagation process that immediately follows broadcasts the change from the affected nodes to their surroundings, so that higher-order neighbors can also naturally perceive the change on the graph. However, from a practical application standpoint, the performance of all methods on GDELT is not adequate. We note that this is due to considerable noise in the labeled data of GDELT: participants can join events held around the world remotely (e.g., online), so some nodes may simultaneously belong to many classes.
## 6. Conclusion
This paper proposes a general graph neural network for dynamic graphs that can extract the structural and attribute information of the graph, as well as its temporal information. Our algorithm is based on the framework of decoupled GNNs, which pre-computes temporal propagation on dynamic graphs and then trains for downstream tasks based on the nodes' temporal representations. We devised a unified dynamic propagation method to support learning on both continuous-time and discrete-time dynamic graphs. Empirical studies on continuous-time and discrete-time dynamic graphs at various scales demonstrate the scalability and state-of-the-art performance of our algorithm.
###### Acknowledgements.
This research was supported in part by National Key R&D Program of China (2022ZD0114802), by National Natural Science Foundation of China (No. U2241212, No. 61972401, No. 61932001, No. 61832017),
\begin{table}
\begin{tabular}{c c|c c c} \hline \hline & & UCI-Message & Bitcoin-Alpha & Bitcoin-OTC \\ \hline \multicolumn{2}{c|}{GCN} & 0.1141 & 0.0031 & 0.0025 \\ \multicolumn{2}{c|}{DynGEM} & 0.1055 & 0.1287 & 0.0921 \\ \multicolumn{2}{c|}{dyngraph2vecAE} & 0.0540 & 0.1478 & 0.0916 \\ \multicolumn{2}{c|}{dyngraph2vecAERNN} & 0.0713 & 0.1945 & 0.1268 \\ \multicolumn{2}{c|}{EvolveGCN-H} & 0.0899 & 0.1104 & 0.0690 \\ \multicolumn{2}{c|}{EvolveGCN-O} & 0.1379 & 0.1185 & 0.0968 \\ \multicolumn{2}{c|}{ROLAND Moving Average} & 0.0649 \(\pm\) 0.0049 & 0.1399 \(\pm\) 0.0107 & 0.0468 \(\pm\) 0.0022 \\ \multicolumn{2}{c|}{ROLAND MLP} & 0.0875 \(\pm\) 0.0110 & 0.1561 \(\pm\) 0.0114 & 0.0778 \(\pm\) 0.0024 \\ \multicolumn{2}{c|}{ROLAND GRU} & 0.2289 \(\pm\) 0.0618 & 0.2885 \(\pm\) 0.0123 & 0.2203 \(\pm\) 0.0167 \\ \hline & GRU & 0.2024 \(\pm\) 0.0010 & 0.3289 \(\pm\) 0.0070 & 0.2985 \(\pm\) 0.0121 \\ **ours** & LSTM & 0.2140 \(\pm\) 0.0034 & **0.3405 \(\pm\) 0.0133** & 0.3102 \(\pm\) 0.0046 \\ & Transformer & **0.2314 \(\pm\) 0.0048** & 0.3173 \(\pm\) 0.0135 & **0.3110 \(\pm\) 0.0049** \\ \hline \hline \end{tabular}
\end{table}
Table 4. Future link prediction on DTDGs. MRR \(\pm\) standard deviations computed of 3 random seeds are exhibited.
\begin{table}
\begin{tabular}{c|c c} \hline \hline & GDELT & MAG \\ \hline Jodie & 11.25 & 43.94 \\ DySAT & 10.05 & 50.42 \\ TGAT & 10.04 & 51.72 \\ TGN & 11.89 & 49.20 \\ APAN & 10.03 & - \\
**ours** & **25.49** & **61.40** \\ \hline \hline \end{tabular}
\end{table}
Table 5. Dynamic node classification on large graphs. F1-Micros (%) are exhibited.
by the major key project of PCL (PCL2021A12), by Beijing Natural Science Foundation (No. 4222028), by Beijing Outstanding Young Scientist Program No.BJJWZYJH012019100020098, by Alibaba Group through Alibaba Innovative Research Program, and by Huawei-Renmin University joint program on Information Retrieval. Jiajun Liu was supported in part by CSIRO's Science Leader project R-91559. We also wish to acknowledge the support provided by Engineering Research Center of Next-Generation Intelligent Search and Recommendation, Ministry of Education. Additionally, we acknowledge the support from Intelligent Social Governance Interdisciplinary Platform, Major Innovation & Planning Interdisciplinary Platform for the "Double-First Class" Initiative, Public Policy and Decision-making Research Lab, Public Computing Cloud, Renmin University of China.
|
2302.00878 | The Contextual Lasso: Sparse Linear Models via Deep Neural Networks | Sparse linear models are one of several core tools for interpretable machine
learning, a field of emerging importance as predictive models permeate
decision-making in many domains. Unfortunately, sparse linear models are far
less flexible as functions of their input features than black-box models like
deep neural networks. With this capability gap in mind, we study a not-uncommon
situation where the input features dichotomize into two groups: explanatory
features, which are candidates for inclusion as variables in an interpretable
model, and contextual features, which select from the candidate variables and
determine their effects. This dichotomy leads us to the contextual lasso, a new
statistical estimator that fits a sparse linear model to the explanatory
features such that the sparsity pattern and coefficients vary as a function of
the contextual features. The fitting process learns this function
nonparametrically via a deep neural network. To attain sparse coefficients, we
train the network with a novel lasso regularizer in the form of a projection
layer that maps the network's output onto the space of $\ell_1$-constrained
linear models. An extensive suite of experiments on real and synthetic data
suggests that the learned models, which remain highly transparent, can be
sparser than the regular lasso without sacrificing the predictive power of a
standard deep neural network. | Ryan Thompson, Amir Dezfouli, Robert Kohn | 2023-02-02T05:00:29Z | http://arxiv.org/abs/2302.00878v4 | # The Contextual Lasso: Sparse Linear Models via Deep Neural Networks
###### Abstract
Sparse linear models are a gold standard tool for interpretable machine learning, a field of emerging importance as predictive models permeate decision-making in many domains. Unfortunately, sparse linear models are far less flexible as functions of their input features than black-box models like deep neural networks. With this capability gap in mind, we study a not-uncommon situation where the input features dichotomize into two groups: explanatory features, which we wish to explain the model's predictions, and contextual features, which we wish to determine the model's explanations. This dichotomy leads us to propose the contextual lasso, a new statistical estimator that fits a sparse linear model whose sparsity pattern and coefficients can vary with the contextual features. The fitting process involves learning a nonparametric map, realized via a deep neural network, from contextual feature vector to sparse coefficient vector. To attain sparse coefficients, we train the network with a novel lasso regularizer in the form of a projection layer that maps the network's output onto the space of \(\ell_{1}\)-constrained linear models. Extensive experiments on real and synthetic data suggest that the learned models, which remain highly transparent, can be sparser than the regular lasso without sacrificing the predictive power of a standard deep neural network.
## 1 Introduction
Sparse linear models--linear predictive functions in a small subset of features--have a long and rich history in statistics, dating back at least to the 1960s (Garside, 1965). Nowadays, against the backdrop of elaborate, black-box models such as deep neural networks, the appeal of sparse linear models is largely their transparency and intelligibility. These qualities are highly sought in decision-making settings (e.g., consumer finance and criminal justice) and constitute the foundation of interpretable machine learning, a topic that has recently received significant attention (Murdoch et al., 2019; Marcinkevics and Vogt, 2020; Molnar et al., 2020; Rudin et al., 2022). Interpretability, however, comes at a price when the underlying phenomenon cannot be predicted accurately without a more expressive model capable of approximating complex functions well, such as a neural network. Unfortunately, one must forgo direct interpretation of expressive models and instead resort to post hoc explanations (Ribeiro et al., 2016; Lundberg and Lee, 2017), which have flaws of their own (Laugel et al., 2019; Rudin, 2019).
Motivated by a desire for interpretability and expressivity, this paper focuses on a statistical learning setting where sparse linear models and neural networks can collaborate together. The setting is characterized by a not-uncommon situation where the input features naturally dichotomize into two groups, which we call explanatory features and contextual features. Explanatory features are features whose effects are of primary interest. They should be modeled via a low-complexity function such as a sparse linear model for interpretability. On the other hand, contextual features describe the broader predictive context, e.g., the location of the prediction in time or space, as in the house pricing example below. These inform which explanatory features are relevant and, for those that are, the sign and magnitude of their linear effects. Given this crucial role, contextual features are best modeled via an expressive function class.
The explanatory-contextual feature dichotomy described above leads to the seemingly previously unstudied contextually sparse linear model:
\[g\left(\operatorname{E}[y\,|\,\mathbf{x},\mathbf{z}]\right)=\sum_{j\in S(\mathbf{z})}x_{j}\beta_{j}(\mathbf{z}). \tag{1}\]
To parse the notation, \(y\in\mathbb{R}\) is a response variable, \(\mathbf{x}=(x_{1},\ldots,x_{p})^{\top}\in\mathbb{R}^{p}\) are explanatory features, \(\mathbf{z}=(z_{1},\ldots,z_{m})^{\top}\in\mathbb{R}^{m}\) are contextual features, and \(g\) is a link function (e.g., identity for regression or logit for classification).1 Via the contextual features, the set-valued function \(S(\mathbf{z})\) encodes the indices of the relevant explanatory features (typically, a small set of \(j\)'s), while the coefficient functions \(\beta_{j}(\mathbf{z})\) encode the effects of those relevant features. The
model (1) draws inspiration from the varying-coefficient model (Hastie and Tibshirani, 1993; Fan and Zhang, 2008; Park et al., 2015), a special case that assumes all explanatory features are always relevant, i.e., \(S(\mathbf{z})=\{1,\ldots,p\}\) for all \(\mathbf{z}\in\mathbb{R}^{m}\). We show throughout the paper that this new model is powerful in various decision-making settings, including energy forecasting and news optimization. For these tasks, sparsity patterns can be strongly context-dependent.
The main contribution of our paper is a new statistical estimator for (1) called the contextual lasso. The new estimator is inspired by the lasso (Tibshirani, 1996), a classic sparse learning tool with excellent properties (Hastie et al., 2015). Whereas the lasso fits a sparse linear model that fixes the relevant features and coefficients once and for all (i.e., \(S(\mathbf{z})\) and \(\beta_{j}(\mathbf{z})\) are constant), the contextual lasso fits a contextually sparse linear model that allows the relevant explanatory features and coefficients to change according to the prediction context. To learn the map from contextual feature vector to sparse coefficient vector, we use the expressive power of neural networks. Specifically, we train a feedforward neural network to output a vector of linear model coefficients sparsified via a novel lasso regularizer. In contrast to the lasso, which constrains the coefficient's \(\ell_{1}\)-norm, our regularizer constrains the _expectation_ of the coefficient's \(\ell_{1}\)-norm with respect to \(\mathbf{z}\). To implement this new regularizer, we include a novel projection layer at the bottom of the network that maps the network's output onto the space of \(\ell_{1}\)-constrained linear models by solving a constrained quadratic program.
To briefly illustrate our proposal, we consider data on property sales in Beijing, China, studied in Zhou and Hooker (2022). We use the contextual lasso to learn a pricing model with longitude and latitude as contextual features. The response is price per square meter. Figure 1 plots the fitted model with five property attributes (explanatory features) and an intercept. The relevance and effect of these attributes can vary greatly with location. The elevator indicator, e.g., is irrelevant throughout inner Beijing, where buildings tend to be older and typically do not have elevators. The absence of elevators also makes higher floors undesirable, hence the negative effect of floor on price. Beyond the inner city, the floor is irrelevant. Naturally, renovations are valuable everywhere, but more so for older buildings in the inner city than elsewhere. Meanwhile, the numbers of living rooms and bathrooms are not predictive of price per square meter anywhere, so they remain inactive. The flexibility of the contextual lasso to add or remove attributes by location equips sellers with personalized, readily interpretable linear models containing only the attributes most relevant to them.
The rest of the paper is organized as follows. Section 2 introduces the contextual lasso and describes techniques for its computation. Section 3 discusses connections with earlier work. Section 4 reports extensive experimental analyses on synthetic data. Section 5 presents two applications to real-world data. Section 6 concludes the paper.
## 2 Contextual Lasso
This section describes our estimator. To facilitate exposition, we first rewrite the contextually sparse linear model (1) more concisely as
\[g\left(\mathrm{E}[y\,|\,\mathbf{x},\mathbf{z}]\right)=\mathbf{x}^{\top} \boldsymbol{\beta}(\mathbf{z}).\]
The notation \(\boldsymbol{\beta}(\mathbf{z}):=\left(\beta_{1}(\mathbf{z}),\ldots,\beta_{p}(\mathbf{z})\right)^{\top}\) represents a vector coefficient function which is sparse over its domain. That is, for different values of \(\mathbf{z}\), the output of \(\boldsymbol{\beta}(\mathbf{z})\) contains zeros at different positions. The function \(S(\mathbf{z})\), which encodes the set of active explanatory features in (1), is recoverable as \(S(\mathbf{z}):=\{j:\beta_{j}(\mathbf{z})\neq 0\}\).

Figure 1: Coefficients as a function of longitude and latitude for the estimated house pricing model. Colored points indicate values of coefficients at different locations. Grey points indicate locations where coefficients are zero.
### Problem Formulation
The contextual lasso, at the population level, comprises a minimization of the expectation of a loss function subject to an inequality on the expectation of a constraint function:
\[\begin{split}\min_{\mathbf{\beta}\in\mathcal{F}}& \quad\mathrm{E}\left[l\left(\mathbf{x}^{\top}\mathbf{\beta}(\mathbf{z}),y \right)\right]\\ \mathrm{s.\,t.}&\quad\mathrm{E}\left[\|\mathbf{\beta}( \mathbf{z})\|_{1}\right]\leq\lambda,\end{split} \tag{2}\]
where the set \(\mathcal{F}\) is a class of functions that constitute feasible solutions and \(l:\mathbb{R}^{2}\rightarrow\mathbb{R}\) is the loss function, e.g., square loss \(l(z,y)=(y-z)^{2}\) for regression or logistic loss \(l(z,y)=-y\log(z)-(1-y)\log(1-z)\) for classification. Here, the expectations are taken with respect to the random variables \(y\), \(\mathbf{x}\), and \(\mathbf{z}\). The parameter \(\lambda>0\) controls the level of regularization. Smaller values of \(\lambda\) encourage \(\mathbf{\beta}(\mathbf{z})\) to contain more zeros over its domain. Larger values have the opposite effect. The contextual lasso thus differs from the regular lasso, which learns \(\mathbf{\beta}(\mathbf{z})\) as a constant function:
\[\begin{split}\min_{\mathbf{\beta}}&\quad\mathrm{E} \left[l\left(\mathbf{x}^{\top}\mathbf{\beta},y\right)\right]\\ \mathrm{s.\,t.}&\quad\|\mathbf{\beta}\|_{1}\leq\lambda. \end{split}\]
To reiterate the difference: the lasso coaxes _the parameter \(\mathbf{\beta}\)_ towards zero, while the contextual lasso coaxes the _expectation of the function \(\mathbf{\beta}(\mathbf{z})\)_ towards zero. The result for the latter is coefficients that can change in value and sparsity with \(\mathbf{z}\).
Given a sample \((y_{i},\mathbf{x}_{i},\mathbf{z}_{i})_{i=1}^{n}\), the data version of the population problem (2) replaces the unknown expectations with their sample counterparts:
\[\begin{split}\min_{\mathbf{\beta}\in\mathcal{F}}& \quad\frac{1}{n}\sum_{i=1}^{n}l\left(\mathbf{x}_{i}^{\top}\mathbf{ \beta}(\mathbf{z}_{i}),y_{i}\right)\\ \mathrm{s.\,t.}&\quad\frac{1}{n}\sum_{i=1}^{n}\| \mathbf{\beta}(\mathbf{z}_{i})\|_{1}\leq\lambda.\end{split} \tag{3}\]
The set of feasible solutions to optimization problem (3) are coefficient functions that lie in the \(\ell_{1}\)-ball of radius \(\lambda\) when averaged over the observed data.2 To operationalize this estimator, we take the function class \(\mathcal{F}\) to be the family of neural networks parameterized by weights \(\mathbf{w}\), denoted \(\mathbf{\beta}_{\mathbf{w}}\). This choice leads to our core proposal:
Footnote 2: The \(\ell_{1}\)-ball is the convex compact set \(\{\mathbf{x}\in\mathbb{R}^{p}:\|\mathbf{x}\|_{1}\leq\lambda\}\).
\[\begin{split}\min_{\mathbf{w}}&\quad\frac{1}{n}\sum_ {i=1}^{n}l\left(\mathbf{x}_{i}^{\top}\mathbf{\beta}_{\mathbf{w}}(\mathbf{z}_{i}),y _{i}\right)\\ \mathrm{s.\,t.}&\quad\frac{1}{n}\sum_{i=1}^{n}\| \mathbf{\beta}_{\mathbf{w}}(\mathbf{z}_{i})\|_{1}\leq\lambda.\end{split} \tag{4}\]
Training a neural network such that its outputs satisfy the \(\ell_{1}\)-constraint is not trivial. We introduce a novel network architecture that addresses this challenge.
### Network Architecture
The neural network architecture--depicted in Figure 2--involves two key components. The first and most straightforward component is a feedforward network \(\mathbf{\eta}(\mathbf{z}):=\left(\eta_{1}(\mathbf{z}),\ldots,\eta_{p}(\mathbf{z} )\right)^{\top}\) comprised of \(p\) fully-connected subnetworks. These subnetworks each tend to the coefficient function of a single explanatory feature. The purpose of the subnetworks is to capture the nonlinear effects of the contextual features on the explanatory features. Since these subnetworks involve only hidden layers with standard affine transformations and nonlinear maps (e.g., rectified linear activations), the coefficients they produce generally do not satisfy the contextual lasso constraint and hence are not sparse. To enforce the constraint, we employ a novel projection layer as the second component of our network.
The projection layer takes the dense coefficients \(\mathbf{\eta}(\mathbf{z})\) from the subnetworks and maps them to sparse coefficients \(\mathbf{\beta}(\mathbf{z}):=\left(\beta_{1}(\mathbf{z}),\ldots,\beta_{p}(\mathbf{ z})\right)^{\top}\) by performing an orthogonal projection onto the \(\ell_{1}\)-ball. Because the contextual lasso does not constrain each coefficient vector to the \(\ell_{1}\)-ball, but rather constrains the _average_ coefficient vector, we project all \(n\) coefficient vectors \(\mathbf{\eta}(\mathbf{z}_{1}),\ldots,\mathbf{\eta}(\mathbf{z}_{n})\) together. That is, we take the final sparse coefficients \(\mathbf{\beta}(\mathbf{z}_{1}),\ldots,\mathbf{\beta}(\mathbf{z}_{n})\) as the minimizing arguments of a constrained quadratic program:
\[\mathbf{\beta}(\mathbf{z}_{1}),\ldots,\mathbf{\beta}(\mathbf{z}_{n}):=\underset{\mathbf{\beta}_{1},\ldots,\mathbf{\beta}_{n}:\,\frac{1}{n}\sum_{i=1}^{n}\|\mathbf{\beta}_{i}\|_{1}\leq\lambda}{\arg\min}\quad\frac{1}{n}\sum_{i=1}^{n}\|\mathbf{\eta}(\mathbf{z}_{i})-\mathbf{\beta}_{i}\|_{2}^{2}. \tag{5}\]
The minimizers of this optimization problem are typically sparse thanks to the geometry of the \(\ell_{1}\)-ball. The idea of including optimization as a layer in a neural network is explored in previous works (Amos and Kolter, 2017; Agrawal et al., 2019). Yet, to our knowledge, no previous work has studied optimization layers for inducing sparsity.
The program (5) does not admit an analytical solution, though it is solvable by general purpose convex optimization algorithms (see, e.g., Boyd and Vandenberghe, 2004). However, because (5) is a highly structured problem, it is also amenable to more specialized algorithms. Such algorithms facilitate the type of scalable computation necessary for deep learning. Duchi et al. (2008) provide a low-complexity algorithm for solving (5) when \(n=1\). Algorithm 1 below is an extension to \(n\geq 1\). The algorithm consists of two main steps: (1) computing a thresholding parameter \(\theta\) and (2) soft-thresholding the inputs using the computed \(\theta\). Critically, the operations comprising Algorithm 1 are scalable on a GPU. Moreover, we can differentiate through these
operations, allowing end-to-end training of the model.
Computation of the thresholding parameter is performed only during training. For inference, the estimate \(\hat{\theta}\) from the training set is used for soft-thresholding. That is, rather than using Algorithm 1 as an activation function when performing inference, we use \(T(x):=\operatorname{sign}(x)\max(|x|-\hat{\theta},0)\). The reason for using the estimate \(\hat{\theta}\) rather than recomputing \(\theta\) via Algorithm 1 is that the \(\ell_{1}\)-constraint applies to the _expected_ coefficient vector. It need not be the case that every coefficient vector produced at inference time lies in the \(\ell_{1}\)-ball, which would occur if Algorithm 1 were rerun.
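To make the projection step concrete, here is a minimal NumPy sketch of (5), assuming Algorithm 1 follows the sort-and-threshold scheme of Duchi et al. (2008); the key simplification is that the averaged constraint over \(n\) vectors is equivalent to projecting their concatenation onto a single \(\ell_{1}\)-ball of radius \(n\lambda\). All names are ours.

```python
import numpy as np

def project_avg_l1(eta, lam):
    # eta: (n, p) dense coefficients from the subnetworks; returns (beta, theta)
    # with (1/n) * sum_i ||beta_i||_1 <= lam. Assumes lam > 0.
    n = eta.shape[0]
    z = n * lam                        # equivalent radius for the concatenation
    a = np.abs(eta).ravel()
    if a.sum() <= z:
        return eta.copy(), 0.0         # already feasible: no shrinkage needed
    v = np.sort(a)[::-1]               # magnitudes in decreasing order
    cssv = np.cumsum(v) - z
    rho = np.nonzero(v * np.arange(1, a.size + 1) > cssv)[0][-1]
    theta = cssv[rho] / (rho + 1.0)    # step 1: thresholding parameter
    beta = np.sign(eta) * np.maximum(np.abs(eta) - theta, 0.0)  # step 2
    return beta, theta
```

Every operation here is a sort, cumulative sum, or elementwise map, which is why the layer is GPU-friendly and differentiable almost everywhere.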
### Side Constraints
Besides the contextual lasso constraint, our architecture accommodates side constraints on \(\mathbf{\beta}(\mathbf{z})\) via modifications to Algorithm 1. For instance, we follow Zhou and Hooker (2022) in the housing example (Figure 1) and constrain the coefficients on the elevator, renovation, living room, and bathroom features to be nonnegative. Such sign constraints reflect domain knowledge that these features should not impact price negatively. Appendix A describes this extension.
### Pathwise Optimization
The lasso regularization parameter \(\lambda\) controlling the size of the \(\ell_{1}\)-ball and thus the sparsity of the coefficient vectors is typically treated as a tuning parameter. For this reason, algorithms for the lasso typically do not return a model for a single value of \(\lambda\) but instead return multiple models with varying \(\lambda\), which can then be compared, e.g., using cross-validation (Friedman et al., 2010). Towards this end, it can be computationally efficient to compute multiple models pathwise by sequentially warm-starting the optimizer. As Friedman et al. (2007) point out, pathwise computation for many values of \(\lambda\) can be as fast as for a single \(\lambda\).
The contextual lasso benefits from pathwise optimization in even more ways than the lasso. Warm starts reduce overall runtime compared with initializing at random weights (running for a sequence of \(\lambda\) is of the same order as for a single \(\lambda\)). More importantly, however, pathwise optimization improves the training quality. This last advantage, which is irrelevant for the lasso, is a consequence of the network's nonconvex optimization surface. Building up a sophisticated network from a simple one helps the optimizer navigate this surface. Lemhadri et al. (2021) note a similar benefit from pathwise optimization with their regularized neural networks.
In a spirit similar to Friedman et al. (2007), we take the sequence of regularization parameters \(\{\lambda^{(t)}\}_{t=1}^{T}\) as a grid of values that yields a path between the unregularized model (no sparsity) and the fully regularized model (all coefficients zero). Specifically, we set \(\lambda^{(1)}\) such that the contextual lasso regularizer does not impart any regularization, i.e., \(\lambda^{(1)}=n^{-1}\sum_{i=1}^{n}\|\mathbf{\beta}_{\hat{\mathbf{w}}^{(1)}}( \mathbf{z}_{i})\|_{1}\), where the weights \(\hat{\mathbf{w}}^{(1)}\) are a solution to (4) from setting \(\lambda=\infty\). We then construct the sequence as a grid of linearly spaced values between \(\lambda^{(1)}\) and \(\lambda^{(T)}=0\), the latter forcing all coefficients to zero. It is critical here the sequence of \(\lambda^{(t)}\) is decreasing so the optimizer can build on networks that increase in sparsity.
Algorithm 2 summarizes the complete pathwise optimization process outlined above, with gradient descent employed as the optimizer. To parse the notation used in the algorithm, \(L(\mathbf{w};\lambda)=n^{-1}\sum_{i=1}^{n}l(\mathbf{x}_{i}^{\top}\mathbf{\beta}_{ \mathbf{w}}(\mathbf{z}_{i}),y_{i})\) represents
the loss as a function of the network's weights \(\mathbf{w}\) given \(\lambda\), and \(\nabla_{\mathbf{w}}L(\mathbf{w};\lambda)\) represents the associated gradient.
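A schematic of this pathwise procedure might look as follows; `grad_L`, `avg_l1_norm` (computing \(n^{-1}\sum_{i}\|\mathbf{\beta}_{\mathbf{w}}(\mathbf{z}_{i})\|_{1}\)), and the fixed step count per \(\lambda\) are our assumptions, standing in for the details of Algorithm 2.

```python
import numpy as np

def pathwise_gradient_descent(grad_L, avg_l1_norm, w_init, T=20,
                              lr=1e-3, steps_per_lambda=100):
    # Fit once without regularization to set lambda^(1), then warm-start down
    # a linearly spaced, decreasing grid ending at lambda^(T) = 0.
    w = w_init
    for _ in range(steps_per_lambda):
        w = w - lr * grad_L(w, np.inf)        # unregularized fit
    lambdas = np.linspace(avg_l1_norm(w), 0.0, T)
    path = []
    for lam in lambdas:                        # warm start from previous lambda
        for _ in range(steps_per_lambda):
            w = w - lr * grad_L(w, lam)
        path.append((lam, w))
    return path                                # models to compare by validation
```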
### Relaxed Fit
A possible drawback to the contextual lasso, and indeed all lasso estimators, is bias of the linear model coefficients towards zero. This bias, which is a consequence of shrinkage from the \(\ell_{1}\)-norm, can help or hinder depending on the data. Typically, bias is beneficial when the number of samples is low or the level of noise is high, while the opposite is true in the converse situation (see, e.g., Hastie et al., 2020). This consideration motivates us to consider a relaxation of the contextual lasso that unwinds some, or all, of the bias imparted by the \(\ell_{1}\)-norm. We present here an approach that extends the proposal of Hastie et al. (2020) for relaxing the lasso. Their relaxation, which simplifies an earlier proposal by Meinshausen (2007), involves a convex combination of the lasso's coefficients and "polished" coefficients from an unregularized least squares fit on the lasso's selected features. Our proposal extends this idea from the lasso's fixed coefficients to the contextual lasso's varying coefficients.
Denote by \(\hat{\boldsymbol{\beta}}_{\lambda}(\mathbf{z})\) a contextual lasso network fit with regularization parameter \(\lambda\). To unwind bias in \(\hat{\boldsymbol{\beta}}_{\lambda}(\mathbf{z})\), we train a polished network \(\boldsymbol{\beta}_{\lambda}^{p}(\mathbf{z})\) that selects the same explanatory features but does not impose any shrinkage. For this task, we introduce the function \(\hat{\mathbf{s}}_{\lambda}(\mathbf{z}):\mathbb{R}^{m}\rightarrow\{0,1\}^{p}\) that outputs a vector with elements equal to one wherever \(\hat{\boldsymbol{\beta}}_{\lambda}(\mathbf{z})\) is nonzero and zero elsewhere. We then fit the polished network as \(\boldsymbol{\beta}_{\lambda}^{p}(\mathbf{z})=\boldsymbol{\eta}(\mathbf{z})\circ\hat{\mathbf{s}}_{\lambda}(\mathbf{z})\), where \(\circ\) means element-wise multiplication and \(\boldsymbol{\eta}(\mathbf{z})\) is the same architecture as used for the original contextual lasso network before the projection layer. The effect of including \(\hat{\mathbf{s}}_{\lambda}(\mathbf{z})\), which is fixed when training \(\boldsymbol{\beta}_{\lambda}^{p}(\mathbf{z})\), is twofold. First, it guarantees the coefficients from the polished network are nonzero in the same positions as the original network, i.e., the same features are selected. Second, it ensures explanatory features only contribute to gradients for samples in which they are active, i.e., \(x_{ij}\) does not contribute if the \(j\)th component of \(\hat{\mathbf{s}}_{\lambda}(\mathbf{z}_{i})\) is zero. Because the polished network does not project onto an \(\ell_{1}\)-ball, its coefficients are not shrunk.
To arrive at the relaxed contextual lasso fit, we convexly combine \(\hat{\boldsymbol{\beta}}_{\lambda}(\mathbf{z})\) and the fitted polished network \(\hat{\boldsymbol{\beta}}_{\lambda}^{p}(\mathbf{z})\):
\[\hat{\boldsymbol{\beta}}_{\lambda,\gamma}(\mathbf{z}):=(1-\gamma)\hat{ \boldsymbol{\beta}}_{\lambda}(\mathbf{z})+\gamma\hat{\boldsymbol{\beta}}_{ \lambda}^{p}(\mathbf{z}),\quad 0\leq\gamma\leq 1. \tag{6}\]
When \(\gamma=0\), we recover the original biased coefficients, and when \(\gamma=1\), we attain the unbiased polished coefficients. Between these extremes lies a continuum of relaxed coefficients with varying degrees of bias. Since the original and polished networks need only be computed once, we may consider any relaxation on this continuum at virtually no additional computational expense. In practice, we choose among the possibilities by tuning \(\gamma\) on a validation set.
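In code, the masked polished network and the relaxation (6) are one-liners; `eta_net` and `support_mask` are hypothetical callables standing for \(\boldsymbol{\eta}(\mathbf{z})\) and the fixed pattern \(\hat{\mathbf{s}}_{\lambda}(\mathbf{z})\).

```python
def polished_coefficients(eta_net, support_mask, z):
    # beta^p(z) = eta(z) ∘ s_hat(z): same support as the lasso fit, no shrinkage
    return eta_net(z) * support_mask(z)

def relaxed_coefficients(beta_lasso, beta_polished, gamma):
    # Convex combination (6): gamma = 0 keeps full shrinkage, gamma = 1 removes
    # it; gamma is tuned on a validation set at negligible extra cost.
    return (1.0 - gamma) * beta_lasso + gamma * beta_polished
```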
### Package
We implement the contextual lasso and its optimization strategy as described in this section in the Julia(Bezanson et al., 2017) package ContextualLasso. For training the neural network, we use the deep learning library Flux(Innes et al., 2018). Though the experiments throughout this paper involve square or logistic loss functions, our package supports _any_ differentiable loss function, e.g., those used throughout the entire family of generalized linear models (Nelder and Wedderburn, 1972). ContextualLasso will be available open source on GitHub shortly.
## 3 Related Work
Contextual explanation networks (Al-Shedivat et al., 2020) can be considered a cousin of the contextual lasso. These neural networks input contextual features and output an interpretable model in explanatory features. They include non-sparse contextual linear models. Al-Shedivat et al. (2020) implement the contextual linear model as a weighted combination of finitely many individual linear models. Though sparsity is not the focus of their work, they add a small amount of \(\ell_{1}\)-regularization to the individual models to prevent overfitting. This type of regularization is fundamentally different from that studied here since no mechanism encourages the network to combine these sparse models such that the combined model remains sparse. In contrast, the contextual lasso guides the network towards sparse models by directly regularizing the sparsity of the models it produces.
The contextual lasso is also related to several estimators that allow for sparsity patterns that vary by sample. Yamada et al. (2017) devised the first of these estimators--the localized lasso--which fits a linear model with a different coefficient vector for each sample. The coefficients are sparsified using
a lasso regularizer that relies on the availability of graph information to link the samples. Yang et al. (2022) and Yoshikawa & Iwata (2022) followed with neural networks that produce linear models with varying sparsity patterns via gating mechanisms. These approaches are quite distinct from our own, however. First, the sparsity patterns vary with every feature, severely restricting the interpretability of the output linear models. Second, the sparsity level or nonzero coefficients are fixed across samples, making them unsuitable for the contextual setting where both may vary.
More broadly, our work advances the literature at the intersection of feature sparsity and neural networks, an area that has gained momentum over the last few years. See, e.g., the lassonet of Lemhadri et al. (2021a;b) which selects features in a residual neural network using an \(\ell_{1}\)-regularizer on the skip connection. This regularizer is combined with constraints that force a feature's weights on the first hidden layer to zero whenever its skip connection is zero. See also Scardapane et al. (2017) and Feng & Simon (2019) for earlier ideas based on the group lasso, and Chen et al. (2021) for other approaches. Though related, these methods differ from the contextual lasso in that they involve uninterpretable neural networks with fixed sparsity patterns. The underlying optimization problems also differ--whereas these methods regularize the network's weights, ours regularizes its output.
## 4 Experimental Analysis
The properties of the contextual lasso are evaluated here via experimentation on synthetic data. As benchmark methods, we consider a nonsparse contextual linear model (i.e., no projection layer) and a deep neural network as a function of all contextual and explanatory features. We also compare against the lasso and a pairwise lasso with main effects plus pairwise interactions between the explanatory and contextual features. Appendix B contains implementation details.
### Data Generation
The explanatory features \(\mathbf{x}_{1},\ldots,\mathbf{x}_{n}\) are generated iid as \(p\)-dimensional \(N(\mathbf{0},\boldsymbol{\Sigma})\) random variables, where the covariance matrix \(\boldsymbol{\Sigma}\) has elements \(\Sigma_{ij}=0.5^{|i-j|}\). The contextual features \(\mathbf{z}_{1},\ldots,\mathbf{z}_{n}\) are generated iid as \(m\)-dimensional random variables uniform on \([-1,1]^{m}\), independent of the \(\mathbf{x}_{i}\). With the features drawn, we simulate a regression response:
\[y_{i}\sim N(\mu_{i},1),\quad\mu_{i}=\kappa\cdot\mathbf{x}_{i}^{\top} \boldsymbol{\beta}(\mathbf{z}_{i}),\]
or a classification response via a logistic function:
\[y_{i}\sim\mathrm{Bernoulli}(p_{i}),\quad p_{i}=\frac{1}{1+\exp\left(-\kappa \cdot\mathbf{x}_{i}^{\top}\boldsymbol{\beta}(\mathbf{z}_{i})\right)},\]
for \(i=1,\ldots,n\). Here, \(\kappa>0\) is a parameter controlling the signal strength vis-a-vis the variance of \(\kappa\cdot\mathbf{x}_{i}^{\top}\boldsymbol{\beta}(\mathbf{z}_{i})\). We first estimate the variance of \(\mathbf{x}_{i}^{\top}\boldsymbol{\beta}(\mathbf{z}_{i})\) on the training set and then set \(\kappa\) so the variance of the signal is five. The coefficient function \(\boldsymbol{\beta}(\mathbf{z}_{i}):=\left(\beta_{1}(\mathbf{z}_{i}),\ldots, \beta_{p}(\mathbf{z}_{i})\right)^{\top}\) is constructed such that \(\beta_{j}(\mathbf{z}_{i})\) maps to a nonzero value whenever \(\mathbf{z}_{i}\) lies within a hypersphere of radius \(r_{j}\) centered at \(\mathbf{c}_{j}\):
\[\beta_{j}(\mathbf{z}_{i})=\begin{cases}1-\frac{1}{2r_{j}}\|\mathbf{z}_{i}- \mathbf{c}_{j}\|_{2}&\text{if }\|\mathbf{z}_{i}-\mathbf{c}_{j}\|_{2}\leq r_{j}\\ 0&\text{otherwise}\end{cases}. \tag{7}\]
This function attains the maximal value one when \(\mathbf{z}_{i}=\mathbf{c}_{j}\) and the minimal value zero when \(\|\mathbf{z}_{i}-\mathbf{c}_{j}\|_{2}>r_{j}\). The centers \(\mathbf{c}_{1},\ldots,\mathbf{c}_{p}\) are generated with uniform probability on \([-1,1]^{m}\), and the radii \(r_{1},\ldots,r_{p}\) are chosen to achieve sparsity levels that vary between 0.05 and 0.15 (average 0.10 across all features). Figure 3 provides a visual illustration.
This coefficient function is inspired by the house pricing example in Figure 1, where the coefficients are typically nonzero in one central region of the contextual feature space.
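A compact sketch of this generator (regression case; all function and variable names are ours) could read:

```python
import numpy as np

def generate_data(n, p, m, centers, radii, rng=None):
    # centers: (p, m) sphere centers c_j; radii: (p,) radii r_j.
    rng = np.random.default_rng() if rng is None else rng
    Sigma = 0.5 ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
    X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)   # explanatory
    Z = rng.uniform(-1.0, 1.0, size=(n, m))                   # contextual
    dist = np.linalg.norm(Z[:, None, :] - centers[None, :, :], axis=2)  # (n, p)
    beta = np.where(dist <= radii, 1.0 - dist / (2.0 * radii), 0.0)     # eq. (7)
    signal = np.einsum("ij,ij->i", X, beta)
    kappa = np.sqrt(5.0 / signal.var())    # scale the signal variance to five
    y = kappa * signal + rng.standard_normal(n)               # regression case
    return X, Z, y, beta
```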
### Statistical Performance
As a prediction metric, we report the square or logistic loss relative to the intercept-only model. As an interpretability metric, we report the proportion of nonzero features. As a selection metric, we report the F1-score of the selected features; a value of one indicates perfect feature selection (all true nonzeros selected and no false positives).3 All three metrics are evaluated on a testing set with hyperparameters tuned on a validation set, both constructed by drawing \(n\) samples independently and identically to the training set.
Footnote 3: The \(\mathrm{F1}\)-score \(:=2\,\mathrm{TP}\,/(2\,\mathrm{TP}+\mathrm{FP}+\mathrm{FN})\), where \(\mathrm{TP}\), \(\mathrm{FP}\), and \(\mathrm{FN}\) are the number of true positive, false positive, and false negative selections.
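Per footnote 3, the selection F1-score over supports can be computed as follows (a minimal sketch; the set-valued inputs are our convention):

```python
def selection_f1(selected, truth):
    # F1 = 2TP / (2TP + FP + FN) comparing selected vs. true feature supports.
    tp = len(selected & truth)
    fp = len(selected - truth)
    fn = len(truth - selected)
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom > 0 else 1.0
```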
We consider three different settings of increasing complexity: (1) \(p=10\) and \(m=2\), (2) \(p=50\) and \(m=2\), and (3) \(p=50\) and \(m=5\). Within each setting, the sample size ranges from \(n=10^{2}\) to \(n=10^{5}\). Figure 4 reports regression results for these settings over 10 independent replications. Appendix C reports the classification results.
Figure 3: Illustration of coefficient function (7) for \(p=3\) explanatory features and \(m=2\) contextual features. The centers \(\mathbf{c}_{j}\) correspond to the dark red in the middle of each sphere.

Due to its regularizer, the contextual lasso performs comparably with the lasso, pairwise lasso, and deep neural network
when the sample size is small. On the other hand, the contextual linear model (the contextual lasso's unregularized counterpart) can perform poorly here. As \(n\) increases, the contextual lasso begins to outperform other methods in prediction, interpretability, and selection. Eventually, it learns the correct map from contextual features to relevant explanatory features, recovering only the true nonzeros. Though its unregularized counterpart performs nearly as well in terms of prediction for large \(n\), it remains much less interpretable, using all explanatory features. In contrast, the contextual lasso uses just 10% of the explanatory features on average.
Unlike the contextual lasso, the deep neural network performs about as well as the regular lasso for most \(n\). Only for large sample sizes does it begin to approach the prediction performance of the contextual lasso. The two methods should predict equally well for large enough \(n\), though the function learned by the deep neural network remains opaque. The regular lasso makes small gains with increasing sample size. Adding pairwise interactions between the explanatory and contextual features yields a modest improvement to prediction accuracy. Nonetheless, the lasso lacks the expressive power of the contextual lasso needed to adapt to the complex sparsity pattern underlying the true model.
## 5 Data Analyses
The contextual lasso is now applied to model two real datasets. Appendix D has links to the datasets.
### Energy Consumption Regression Dataset
The first dataset contains measurements of energy use over five months for a low-energy home in Mons, Belgium (Candanedo et al., 2017). Besides this continuous response feature, the dataset also contains \(p=27\) explanatory features in the form of temperature and humidity readings in different rooms of the house and local weather data. We define several contextual features from the time stamp to capture seasonality: month of year, day of week, hour of day, and an indicator for the weekend. To reflect their cyclical nature, the first three contextual features are transformed using sine and cosine functions, leading to \(m=7\) contextual features.
The dataset, containing \(n=19,375\) samples, is randomly split into training, validation, and testing sets in 0.6-0.2-0.2 proportions. We repeat this random split of the data 10 times, each time recording performance on the testing set, and report the aggregate results in Table 1. As performance metrics, we consider the relative loss as in Section 4 and the mean number of nonzero features. Among all methods, the contextual lasso leads to the lowest test loss, outperforming even the deep neural network. Importantly, this excellent prediction performance is achieved while maintaining a high level of interpretability. In contrast to the deep neural network and the contextual linear model, which use all available explanatory features, the predictions from the contextual lasso arise from linear models containing just 2.4 explanatory features on average! These linear models are
also much simpler than those from the regular lasso, which typically involve more than four times as many features.

Figure 4: Comparisons of methods for regression over 10 synthetic datasets. Solid points represent averages and error bars denote standard errors. Dashed horizontal lines in the middle row of plots indicate the true sparsity level. Since relative loss for the contextual linear model can be large for small \(n\), we omit it from the plots in some cases to maintain the aspect ratio.
The good predictive performance of the contextual lasso suggests a seasonal pattern of sparsity. To investigate this phenomenon, we apply the fitted model to a randomly sampled testing set and plot the resulting sparsity levels as a function of the hour of day in Figure 5.
The model is typically highly sparse in the late evening and early morning. Between 11 pm and 5 am, the median sparsity level is no more than 5%. There is likely little or no activity inside the house at these times, so sensor readings from within the house--which constitute the majority of the explanatory features--are irrelevant. The number of active explanatory features rises later in the day, reaching a peak at around 6 pm. Overall, a major benefit of the contextual lasso, besides its good predictions, is the ability to identify a parsimonious set of factors driving energy use at any given time of day.
### News Popularity Classification Dataset
The second dataset consists of articles posted to the news platform Mashable (Fernandes et al., 2015). The task is to predict if an article will be popular, defined in Fernandes et al. (2015) as more than 1400 shares. In addition to the zero-one response feature for popularity, the dataset has predictive features that quantify the articles (e.g., number of total words, positive words, and images). The data channel feature, which identifies the category of the article (lifestyle, entertainment, business, social media, technology, world, or viral), is taken as the contextual feature. It is expressed as a sequence of indicator variables yielding \(m=6\) contextual features. There remain \(p=51\) explanatory features.
Table 2 reports the results over 10 random splits of the dataset (\(n=39,643\)) into training, validation, and testing sets in the same proportions as before.
In contrast to the previous dataset, all the methods predict similarly well here. The deep neural network performs marginally best overall, while the lasso performs marginally worst. Though predicting neither best nor worst, the contextual lasso retains a significant lead in terms of sparsity, being twice as sparse as the next best method. Sparsity is crucial for this task as it allows the author to focus on a small number of changes necessary to improve the article's likelihood of success. Neither the uninterpretable deep neural network nor the fully dense contextual linear model is nearly as useful here.
## 6 Concluding Remarks
Contextual sparsity is an important extension of the classical notion of feature sparsity. Rather than fix the relevant features once and for all, contextual sparsity allows feature relevance to depend on the prediction context. To tackle this intricate statistical learning problem, we devise the contextual lasso. This new estimator utilizes the expressive power of deep neural networks to learn interpretable sparse linear models with sparsity patterns that vary with the contextual features. The optimization problem of the contextual lasso is solvable at scale using modern deep learning frameworks, and we make our implementation open source. An extensive experimental analysis of the new estimator illustrates its good prediction, interpretation, and selection properties. To the best of our knowledge, the contextual lasso is the only available tool for handling the contextually sparse setting.
One direction that continues this line of work is to combine contextual sparsity with classic feature sparsity. This blended sparsity would be suitable for removing explanatory features that are not predictive in any context. Another direction is to extend the notion of contextual sparsity beyond the lasso to other sparsity-inducing regularizers. Other regularizers can, however, give rise to computational difficulties beyond those considered here, e.g., points of discontinuity.
\begin{table}
\begin{tabular}{l l l} \hline \hline & Relative loss & Mean nonzeros \\ \hline Deep neural network & \(0.406\pm 0.004\) & - \\ Contextual linear model & \(0.353\pm 0.006\) & \(25.0\pm 0.0\) \\ Lasso & \(0.694\pm 0.004\) & \(10.8\pm 0.4\) \\ Pairwise lasso & \(0.589\pm 0.003\) & \(24.5\pm 0.2\) \\ Contextual lasso & \(0.330\pm 0.005\) & \(2.4\pm 0.2\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparisons of methods for modeling energy consumption. Metrics are aggregated over 10 random splits of the data. Averages and standard errors are reported.
\begin{table}
\begin{tabular}{l l l} \hline \hline & Relative loss & Mean nonzeros \\ \hline Deep neural network & \(0.899\pm 0.002\) & - \\ Contextual linear model & \(0.908\pm 0.003\) & \(51.0\pm 0.0\) \\ Lasso & \(0.916\pm 0.002\) & \(21.2\pm 0.9\) \\ Pairwise lasso & \(0.910\pm 0.002\) & \(29.3\pm 0.5\) \\ Contextual lasso & \(0.910\pm 0.002\) & \(9.4\pm 0.6\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Comparisons of methods for modeling news popularity. Metrics are aggregated over 10 random splits of the data. Averages and standard errors are reported.
Figure 5: Explanatory feature sparsity as a function of hour of day for the estimated energy consumption model. The sparsity level varies within each hour because the other contextual features vary. |
2304.08172 | Pointwise convergence of Fourier series and deep neural network for the
indicator function of d-dimensional ball | In this paper, we clarify the crucial difference between a deep neural
network and the Fourier series. For the multiple Fourier series of
periodization of some radial functions on $\mathbb{R}^d$, Kuratsubo (2010)
investigated the behavior of the spherical partial sum and discovered the third
phenomenon other than the well-known Gibbs-Wilbraham and Pinsky phenomena. In
particular, the third one exhibits prevention of pointwise convergence. In
contrast to it, we give a specific deep neural network and prove pointwise
convergence. | Ryota Kawasumi, Tsuyoshi Yoneda | 2023-04-17T11:38:22Z | http://arxiv.org/abs/2304.08172v5 | # Pointwise convergence theorem of generalized mini-batch gradient descent in deep neural network
###### Abstract.
The theoretical structure of deep neural networks (DNN) has been clarified gradually. Imaizumi-Fukumizu (2019) and Suzuki (2019) clarified that the learning ability of DNN is superior to the previous theories when the target functions are non-smooth. However, as far as the author is aware, none of the numerous works to date attempted to mathematically investigate what kind of DNN architectures really induce pointwise convergence of gradient descent (without any statistical argument), and this attempt seems to be closer to the practical DNNs. In this paper we restrict target functions to non-smooth indicator functions, and construct a deep neural network inducing pointwise convergence under the mini-batch gradient descent process in a ReLU-DNN.
Key words and phrases: deep neural network, ReLU function, gradient descent, pointwise convergence. 2020 Mathematics Subject Classification: Primary 68T27; Secondary 68T07; Tertiary 41A29
## 1. Introduction
Recently, deep learning has been a successful tool for various tasks of data analysis (see [10, 9, 18, 21] for example). Also, the theoretical structure of deep neural networks (DNN) has been clarified gradually. In particular, Amari [1] gave a simple observation showing that any target function is in a sufficiently small neighborhood of any randomly connected DNN with a sufficiently large number of neurons in a layer (see also Kawaguchi-Huang-Kaelbling [14] and references therein). Keeping these celebrated results in mind, our next task would be clarifying the precise convergence structure of DNN even if the initial data are already close to the target function. Imaizumi-Fukumizu [11] examined learning of non-smooth functions, which was not covered by the previous theory, and clarified that, comparing the DNN with the previous theories (such as the kernel methods), the convergence rates are almost optimal for non-smooth functions, while some of the popular models do not attain this optimal rate. Suzuki [23] (see also references therein) clarified that the learning ability of ReLU-DNN is superior to the linear method when the target function is in the supercritical Besov spaces \(B_{p,q}^{s}\) with \(p<2\) and \(s<d/2\) (\(d\) is the dimension; note that the case \(s=d/2\) is called "critical"), which indicates spatial inhomogeneity of the shape of the target function, including non-smooth functions. Thus, with the aid of these results, we can conclude that ReLU-DNN is suitable for recognizing the jump discontinuities of non-smooth functions.
We now briefly explain the key idea of [23]. To show the approximation error theorems, he first applied the wavelet expansion to the target functions and then
approximated each wavelet basis function (composed of spline functions) by a ReLU-DNN (see [28]). More specifically, let \(g:[0,1]\to[0,1]\) be the tent function such that
\[g(x)=\begin{cases}2x\quad(x<1/2),\\ 2(1-x)\quad(x\geq 1/2)\end{cases}\]
and let \(g_{s}\) be the \(s\)-fold composition of \(g\), and \(f_{m}\) be a function approximating the second-order polynomial \(x^{2}\), such that
\[g_{s}(x)=\underbrace{g\circ g\circ\cdots\circ g}_{s}(x)\quad\text{and}\quad f _{m}(x)=x-\sum_{s=1}^{m}\frac{g_{s}(x)}{2^{2s}}.\]
Note that \(f_{m}(x)\to x^{2}\) (\(m\to\infty\)) uniformly. For deriving the multi-dimensional polynomials, it suffices to apply the following formula:
\[xy=\frac{1}{2}((x+y)^{2}-x^{2}-y^{2}),\]
and then we can easily approximate multi-dimensional spline functions by ReLU-DNN. The other idea (for variance) is applying the statistical argument in [22] combined with a covering number evaluation.
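To illustrate this construction, note that the tent function \(g\) is exactly representable with three ReLU units, so \(f_{m}\) is a genuine ReLU network; below is a minimal sketch (the names and the \([0,1]\)-restricted product are our choices).

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def tent(x):
    # g on [0, 1] written with ReLU units: 2x on [0, 1/2], 2(1 - x) on [1/2, 1]
    return 2.0 * relu(x) - 4.0 * relu(x - 0.5) + 2.0 * relu(x - 1.0)

def f_m(x, m):
    # f_m(x) = x - sum_{s=1}^{m} g_s(x) / 2^{2s}, converging uniformly to x^2
    out = np.asarray(x, dtype=float).copy()
    g_s = np.asarray(x, dtype=float)
    for s in range(1, m + 1):
        g_s = tent(g_s)          # composition: g_s = g o g_{s-1}
        out -= g_s / 4.0 ** s
    return out

def approx_prod(x, y, m=10):
    # xy = ((x + y)^2 - x^2 - y^2) / 2, rescaled so every input of f_m stays
    # in [0, 1]; assumes x, y in [0, 1].
    return 0.5 * (4.0 * f_m((x + y) / 2.0, m) - f_m(x, m) - f_m(y, m))
```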
However, as far as the author is aware, none of the numerous works to date attempted to mathematically investigate what kind of DNN architectures really induce pointwise convergence of gradient descent (without any statistical argument) even if the initial data are already close to the target function, and this attempt seems to be closer to the practical DNNs. In what follows, we investigate this problem.
Before going any further, we point out that employing supercritical function spaces may not be enough to capture the discontinuity structure of the target functions. This means that we may need to directly analyze each DNNs, if the target function is bounded and discontinuous (i.e. in a critical function space). The flavor of this insight seems similar to the recent mathematical studies on the incompressible inviscid flows. See [2, 3, 6, 7, 19] for example. More precisely, these have been directly looking into the behavior of inviscid fluids in the critical function spaces (to show ill-posedness), and the argument seems quite different from the previous studies focusing on well-posedness in subcritical type of function spaces. See [4, 12, 13, 20, 27] for example. To show the well-posedness, the structure of function spaces, more precisely, commutator estimates are crucially indispensable.
This paper is organized as follows: In the next section, we construct target functions and the corresponding estimators. In Section 3, we investigate the pointwise convergence of gradient descent in terms of ReLU-DNN if initial data are already close to the target function. In the last section, we give the key lemma and its proof.
## 2. Target functions and the corresponding estimators
In this section, we define a set of target functions and the corresponding estimators, which form a typical function class assuring pointwise convergence. For \((y_{j},\tau_{j})\in[-1,1)^{d}\times\mathbb{S}^{d-1}\) (\(j=1,2,\cdots\)), let us define half spaces \(H^{\circ}\) and \(H^{\circ}_{\epsilon}\) as follows:
\[H^{\circ}(y_{j},\tau_{j}) :=\{x\in[-1,1)^{d}:x\cdot\tau_{j}-y_{j}\cdot\tau_{j}<0\},\] \[H^{\circ}_{\epsilon}(y_{j},\tau_{j}) :=\{x\in[-1,1)^{d}:x\cdot\tau_{j}-y_{j}\cdot\tau_{j}<-\epsilon\}.\]
We employ a set of indicator functions \(\{\chi_{\Omega},\Omega\in\mathcal{M}\}\) as the set of target functions, where \(\mathcal{M}\) is a set of convex smooth manifolds (with internal filling) as follows:
\[\mathcal{M}:= \bigg{\{}\Omega\subset[-1,1)^{d}:\partial\Omega\text{ is smooth, and the following three conditions hold:}\] \[\text{There exists }\{(y_{j},\tau_{j})\}_{j=1}^{\infty}\subset \partial\Omega\times\mathbb{S}^{d-1}\text{ such that }\bigcap_{j=1}^{\infty}H^{\circ}(y_{j},\tau_{j})=\Omega.\] \[\text{For each }j\text{ and any }N\in\mathbb{N},\] \[\max_{j^{\prime}\neq j,\ 1\leq j^{\prime}\leq N}\text{dist }\left(y_{j}-N^{-\frac{2}{d-1}}\tau_{j},\partial H^{\circ}_{N^{-\frac{2}{d-1} }}(y_{j^{\prime}},\tau_{j^{\prime}})\right)\lesssim N^{-\frac{1}{d-1}}.\] \[\text{For each }j,\text{ there is a set of points}\] \[\{c_{ji}\}_{i=1}^{d}\subset(\Omega\cap\partial H^{\circ}_{N^{-\frac{2}{d- 1}}}(y_{j},\tau_{j}))\text{ which are linearly independent.}\bigg{\}}\]
The first condition is nothing more than expressing the convexity. The second one is needed for the estimate of the difference between the target function and the corresponding estimator (see (2)). Note that we choose \(\bigcap_{j^{\prime}=1}^{N}\partial H^{\circ}_{\epsilon}(y_{j^{\prime}},\tau_{j^{\prime}})\) as a regular polytope, and by the dimensional analysis, the power \(-\frac{1}{d-1}\) naturally appears. The points \(\{c_{ji}\}_{i=1}^{d}\) in the third one are needed for the construction of training samples for mini-batch (see the next section).
**Remark 1**.: An interesting question naturally arises: for \(\Omega\in\mathcal{M}\), whether or not \(\partial\Omega\) is a manifold isometric to the sphere. We leave it as an open question (c.f. Tsukamoto [25, 26]).
**Definition 1**.: (Definition of estimator.) For a target function \(f^{\circ}=\chi_{\Omega}\ (\Omega\in\mathcal{M})\), we define the corresponding estimator \(f^{\circ}_{N}\) as follows:
\[f^{\circ}_{N}:=\chi_{\Omega^{\circ}_{N}},\quad\Omega^{\circ}_{N}:=\bigcap_{j=1}^{N}H^{\circ}_{N^{-\frac{2}{d-1}}}(y_{j},\tau_{j}).\]
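For intuition, evaluating \(f_{N}^{\circ}\) at a point is just a finite intersection test; below is a sketch in our own notation, where `Y` stacks the \(y_{j}\) and `Tau` the \(\tau_{j}\) row-wise.

```python
import numpy as np

def estimator_indicator(x, Y, Tau, N):
    # f_N°(x) = 1 iff x lies in every shrunken half-space H°_eps(y_j, tau_j),
    # i.e. tau_j · (x - y_j) < -eps for j = 1, ..., N, with eps = N^{-2/(d-1)}.
    d = Y.shape[1]
    eps = float(N) ** (-2.0 / (d - 1))
    margins = (Tau[:N] * (x - Y[:N])).sum(axis=1)
    return 1.0 if np.all(margins < -eps) else 0.0
```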
**Lemma 1**.: _We have_
\[f^{\circ}_{N}(x)\to f^{\circ}(x)\quad(N\to\infty)\quad\text{for any}\quad x\in[- 1,1)^{d}. \tag{1}\]
_Moreover we have the following convergence rate:_
\[\|f^{\circ}_{N}-f^{\circ}\|_{L^{r}}^{r}\lesssim_{d}N^{-\frac{2}{d-1}}\quad \text{for}\quad 1\leq r<\infty. \tag{2}\]
Proof.: By applying a diagonal argument, we immediately have (1). To show (2), let us choose a set of \(\{\tau_{ji}^{\perp}\}_{i=1}^{d-1}\subset\mathbb{S}^{d-1}\) satisfying \(\tau_{ji}^{\perp}\cdot\tau_{j}=0\ (i=1,2,\cdots,d-1)\) and \(\tau_{ji}^{\perp}\cdot\tau_{ji^{\prime}}^{\perp}=0\ (i\neq i^{\prime})\). Then by using a standard local coordinate system, we have
\[\left\{y_{j}-N^{-\frac{2}{d-1}}\tau_{j}+\sum_{i=1}^{d-1}s_{i}\tau_{ji}^{\perp}: s_{i}\in\mathbb{R},\ |s|\lesssim N^{-\frac{1}{d-1}}\right\}\subset\partial H^{\circ}_{N^{-\frac{2}{d- 1}}}(y_{j},\tau_{j})\]
and
\[\left\{y_{j}+\sum_{i=1}^{d-1}s_{i}\tau_{ji}^{\perp}+g(s)\tau_{j}:s_{i}\in \mathbb{R},\ |s|\lesssim N^{-\frac{1}{d-1}}\right\}\subset\partial\Omega^{\circ},\]
where \(g(s)=c_{1}s_{1}^{2}+\cdots+c_{d-1}s_{d-1}^{2}+O(|s|^{3})\) for some positive constants \(c_{i}>0\) (independent of \(N\)). Thus we have
\[|\Omega_{N}^{\circ}\setminus\Omega^{\circ}|\lesssim(N^{-\frac{2}{d-1}}+c_{1}s_{1}^{2}+\cdots+c_{d-1}s_{d-1}^{2})|\partial\Omega^{\circ}|\lesssim_{d}N^{-\frac{2}{d-1}}.\]
Therefore
\[\|f_{N}^{\circ}-f^{\circ}\|_{L^{r}}^{r}\lesssim_{d}N^{-\frac{2}{d-1}}\quad \text{for}\quad 1\leq r<\infty.\]
## 3. Pointwise convergence of gradient descent
In what follows we mathematically investigate the pointwise convergence of gradient descent in terms of ReLU-DNNs, a setting closer to practical ReLU-DNNs. To do so, we first formulate mini-batch gradient descent in pure mathematics. Let \(f^{\circ}\) be a target function and \(\{f_{N}(W^{t})\}_{t=0}^{\infty}\) (\(t\in\mathbb{Z}_{\geq 0}\)) be a sequence of functions generated by the following gradient descent:
\[E(W^{t}):=\frac{1}{2}\int_{\mathcal{D}}|f_{N}(W^{t},x)-f^{\circ}(x)|^{2}dx,\]
\[W^{t+1}=W^{t}-\epsilon\frac{1}{|\mathcal{D}|}\nabla_{W^{t}}E(W^{t}),\]
where \(f_{N}\) is a prescribed neural network with \(N\) nodes, \(\{W^{t}\}_{t}\) is a set of weights and biases, \(\epsilon\in\mathbb{R}_{>0}\) is a learning coefficient and \(\mathcal{D}\subset[-1,1)^{d}\) is a set of training samples for the mini-batch. Note that, since the gradient is normalized by \(1/|\mathcal{D}|\), \(\mathcal{D}\) can be replaced by a non-zero measure set or a set of lines. Let \(\{f_{N}^{\circ}\}_{N}\) be a sequence of estimators such that
\[f_{N}^{\circ}(x):=\lim_{t\to\infty}f_{N}(W^{t},x).\]
Our specific purpose is to find neural networks \(f_{N}\), suitable \(\mathcal{D}\) and \(\epsilon\) assuring pointwise convergence to the corresponding estimator \(f_{N}^{\circ}\), which is already given in the last section.
**Remark 2**.: This problem setting clarifies the crucial difference between shallow and deep neural networks, as follows. Since the \(\sin\) and \(\cos\) functions are continuous, we can recover them from linear combinations of activation functions (see [5] for example). Thus the mathematical analysis of a shallow neural network can be replaced by that of a linear combination of \(\sin\) and \(\cos\) functions, which is nothing more than the Fourier series. For \(x\in[-1,1)^{d}\), we set the target function \(f^{\circ}\) as the indicator function of the \(d\)-dimensional ball such that
\[f^{\circ}(x)=\begin{cases}1,&|x|\leq 1/2,\\ 0,&|x|>1/2,\end{cases}\]
and let \(f_{N}\) be a Fourier series with spherical partial sum:
\[f_{N}(W^{t},x):=\sum_{|k|<N}c_{k}^{t}e^{ik\cdot x}\in\mathbb{R},\quad W^{t}: =\{c_{k}^{t}\}_{k\in\mathbb{Z}^{d}}\subset\mathbb{C},\ c_{-k}^{t}=\bar{c}_{k}^ {t},\ k\in\mathbb{Z}^{d}.\]
Let \(\mathcal{D}=[-1,1)^{d}\) (\(t=0,1,2,\cdots\)); then, by Parseval's identity, we immediately have the following estimator (of course, different from the one given in the last section):
\[f_{N}^{\circ}(x)=\sum_{|k|<N}\tilde{c}_{k}e^{ik\cdot x}\quad\text{for}\quad \tilde{c}_{k}=\int_{[-1,1)^{d}}f^{\circ}(x)e^{-ik\cdot x}dx.\]
Then we obtain the following counterexample, which clarifies the crucial difference between the shallow and deep neural networks.
Counterexample. Let \(d\geq 5\) and \(\mathcal{D}=[-1,1)^{d}\). Then, for any \(x\in\mathbb{Q}^{d}\cap[-1,1)^{d}\),
\[f_{N}^{\circ}(x)-f^{\circ}(x)\quad\text{diverges as}\quad N\to\infty.\]
The proof is a direct consequence of Kuratsubo [15] (see also [16, 17]), so we omit the details.
In contrast with the Fourier series case (shallow neural network), we will show pointwise convergence to the \(f_{N}^{\circ}\) already given in the last section. Let \(N=2^{n}\) (\(n\in\mathbb{N}\)) and let us now construct a deep neural network \(f_{N}\). For the initial layer, we define
\[z^{1}:=h(w^{1}x+b^{1}):=\begin{pmatrix}h(w_{1}^{1}\cdot x+b_{1}^{1})\\ \vdots\\ h(w_{2^{n}}^{1}\cdot x+b_{2^{n}}^{1})\end{pmatrix}\]
for \(x\in[-1,1)^{d}\), \(w^{1}:=\{w_{j}^{1}\}_{j=1}^{2^{n}}:=\{w_{ji}^{1}\}_{ji}\in\mathbb{R}^{2^{n} \times d}\), \(b^{1},z^{1}\in\mathbb{R}^{2^{n}}\). Recall that \(w\) is the weight and \(b\) is the bias. For the \(2k\)-th layer, we set
\[z^{2k}:=h(w^{2k}z^{2k-1}+b^{2k})\]
for \(w^{2k}\in\mathbb{R}^{3\cdot 2^{n-k}\times 2^{n-k+1}}\), \(b^{2k},z^{2k}\in\mathbb{R}^{3\cdot 2^{n-k}}\). Moreover, we impose the following sparsity condition: for \(J=1,2,\cdots,2^{n-k}\) and \(1\leq k\leq n\),
\[\begin{split} z_{3J-2}^{2k}&=h(w_{3J-2,2J-1}^{2k}z_{ 2J-1}^{2k-1}+w_{3J-2,2J}^{2k}z_{2J}^{2k-1}),\\ z_{3J-1}^{2k}&=h(w_{3J-1,2J-1}^{2k}z_{2J-1}^{2k-1}+w_{3J-1,2J}^ {2k}z_{2J}^{2k-1}),\\ z_{3J}^{2k}&=h(w_{3J,2J-1}^{2k}z_{2J-1}^{2k-1}+w_{3J,2J}^ {2k}z_{2J}^{2k-1}),\end{split} \tag{3}\]
where \(z^{2k}=\{z_{j}^{2k}\}_{j}\), \(b^{2k}=\{b_{j}^{2k}\}_{j}\), and \(w^{2k}=\{w_{ji}^{2k}\}_{ji}\), and also impose the following restriction:
\[\begin{split} w_{3J-2,2J-1}^{2k}&=w_{3J-1,2J-1}^{2k} =-w_{3J,2J-1}^{2k},\\ w_{3J-2,2J}^{2k}&=-w_{3J-1,2J}^{2k}=w_{3J,2J}^{2k}. \end{split} \tag{4}\]
For the \((2k+1)\)-th layer, we set
\[z^{2k+1}=w^{2k+1}z^{2k},\]
\(w^{2k+1}\in\mathbb{R}^{2^{n-k}\times 3\cdot 2^{n-k}}\), \(z^{2k+1}\in\mathbb{R}^{2^{n-k}}\). In this layer, we impose the following restriction: for \(J=1,2,\cdots 2^{n-k}\),
\[z_{J}^{2k+1}=z_{3J-2}^{2k}-z_{3J-1}^{2k}-z_{3J}^{2k}.\]
Then we see that, in the \((2n+1)\)-th layer, \(z^{2n+1}\) becomes a real number. In the final layer, we apply the following clipping:

\[f_{N}:=z^{2n+2} =\min\{z^{2n+1},1\}\] \[=1-h(1-z^{2n+1}).\]
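The sparsity conditions (3)-(4) have a transparent meaning: on nonnegative inputs \(a,b\) (as produced by the preceding ReLU layer), each \(2k\)/\((2k+1)\) layer pair computes \(h(m^{1}a+m^{0}b)-h(m^{1}a-m^{0}b)-h(-m^{1}a+m^{0}b)=2\min(m^{1}a,m^{0}b)\), so the whole depth-\((2n+2)\) network realizes a clipped minimum over the \(2^{n}\) first-layer units -- exactly the mechanism needed to capture the convex intersection \(\bigcap_{j}H^{\circ}\). Below is a minimal runnable sketch of this forward pass; the weights, biases and the choice \(m^{1}=m^{0}=1/2\) are illustrative assumptions, not the trained parameters of Theorem 2.

```python
import numpy as np

def relu(u):
    return np.maximum(u, 0.0)

def pair_block(a, b, m1=0.5, m0=0.5):
    # One 2k/(2k+1) layer pair under the sparsity/weight-sharing conditions
    # (3)-(4); on nonnegative inputs it equals 2*min(m1*a, m0*b).
    return relu(m1*a + m0*b) - relu(m1*a - m0*b) - relu(-m1*a + m0*b)

def f_N(x, w, b):
    # Forward pass for N = 2**n first-layer nodes (the m-coefficients are
    # all 1/2 here, so each pairing computes an exact minimum).
    z = relu(w @ x + b)                      # first layer, shape (N,)
    while z.size > 1:
        z = pair_block(z[0::2], z[1::2])     # pairs (z1,z2), (z3,z4), ...
    return min(z[0], 1.0)                    # final clipping: 1 - h(1 - z)

# Sanity check in d = 2: four steep half-plane units whose intersection is
# the square {|x1| < 1/2, |x2| < 1/2}.
w = 10.0 * np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
b = np.full(4, 5.0)
for x in ([0.0, 0.0], [0.9, 0.0], [2.0, 0.0]):
    print(x, f_N(np.array(x), w, b))         # ~1 inside, 0 outside
```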
**Remark 3**.: In this paper we employ the ReLU function as the activation function, for simplicity. Of course, the sigmoid function case is also an attractive problem.
Then the main theorem is as follows:
**Theorem 2**.: _Assume that the initial function \(f_{N}(W^{t=0})\) is already close to the target function \(f^{\circ}\), namely, \(W^{t=0}\) satisfies the initial conditions (8) and (9). Let \(\epsilon=\gamma^{2}\) (\(\gamma\) is given in Proposition 3). Then, by choosing \(\mathcal{D}\) appropriately, and by a suitable change of variables \(W^{t}\mapsto(\alpha^{t},\beta^{t})\), \(f_{N}(\alpha^{t},\beta^{t})\) converges to \(f^{\circ}_{N}\) pointwise (as \(t\to\infty\)). The change of variables is explicitly written as_
\[\alpha_{j}:=m_{j}^{2}|w_{j}^{1}|^{2}\quad\text{and}\quad\beta_{j}=m_{j}(w_{j}^ {1}\cdot c_{ji}+b_{j}^{1}),\]
_where the definition of \(m_{j}\) is given in (5). Moreover we have the following convergence rate:_
\[\|f_{N}(\alpha^{t},\beta^{t})-f^{\circ}_{N}\|_{L^{r}}^{r}\lesssim_{d}t^{-1/3} \quad\text{for}\quad 1\leq r<\infty.\]
**Remark 4**.: It is an open question whether or not the original coefficient \(W^{t}\) case is also converging to the same estimator \(f^{\circ}_{N}\).
**Remark 5**.: The initial conditions (8) and (9) are imposed only for technical reasons; they can be relaxed further.
_Proof._ First we consider a triple of the \((2k-1)\)-th, \(2k\)-th and \((2k+1)\)-th layers. Let us rewrite (3) in the following simpler form:
\[\begin{cases}z_{3J-2}^{2k}=h(m^{1}z_{2J-1}^{2k-1}+m^{0}z_{2J}^{2k-1}),\\ z_{3J-1}^{2k}=h(m^{1}z_{2J-1}^{2k-1}-m^{0}z_{2J}^{2k-1}),\\ z_{3J}^{2k}=h(-m^{1}z_{2J-1}^{2k-1}+m^{0}z_{2J}^{2k-1}),\end{cases}\]
where
\[m_{k,J}^{1}=m^{1} :=w_{3J-2,2J-1}^{2k}=w_{3J-1,2J-1}^{2k}=-w_{3J,2J-1}^{2k},\] \[m_{k,J}^{0}=m^{0} :=w_{3J-2,2J}^{2k}=-w_{3J-1,2J}^{2k}=w_{3J,2J}^{2k}.\]
Recall that
\[z_{J}^{2k+1}=z_{3J-2}^{2k}-z_{3J-1}^{2k}-z_{3J}^{2k}.\]
Taking a derivative, we have
\[\partial_{z_{2J-1}^{2k-1}}z_{J}^{2k+1}= m^{1}\partial h(m^{1}z_{2J-1}^{2k-1}+m^{0}z_{2J}^{2k-1})\] \[-m^{1}\partial h(m^{1}z_{2J-1}^{2k-1}-m^{0}z_{2J}^{2k-1})\] \[+m^{1}\partial h(-m^{1}z_{2J-1}^{2k-1}+m^{0}z_{2J}^{2k-1}).\]
Due to the cancellation of Heaviside functions in the following domain,
\[D_{k,J}^{0}:=\left\{x:m^{0}z_{2J}^{2k-1}<m^{1}z_{2J-1}^{2k-1}\right\},\]
we have
\[\partial_{z_{2J-1}^{2k-1}}z_{J}^{2k+1}=0\quad\text{for}\quad x\in D_{k,J}^{0}.\]
Note that, rigorously saying, \(z^{2k-1}:=z^{2k-1}\circ z^{2k}\circ\cdots\circ z^{1}\). To the contrary, there is no cancellation of Heaviside functions in the following domain:
\[D_{k,J}^{1}:=\{x:m^{0}z_{2J}^{2k-1}>m^{1}z_{2J-1}^{2k-1}\}.\]
In other words,
\[\partial_{z_{2J-1}^{2k-1}}z_{J}^{2k+1}=2m^{1}\quad\text{for}\quad x\in D_{k,J} ^{1}.\]
The same argument goes through also in the case \(\partial_{z^{2k-1}_{2J}}z^{2k+1}_{J}\) (omit its detail). In this case, we have
\[\partial_{z^{2k-1}_{2J}}z^{2k+1}_{J} =2m^{0}\quad\text{for}\quad x\in D^{0}_{k,J},\] \[\partial_{z^{2k-1}_{2J}}z^{2k+1}_{J} =0\quad\text{for}\quad x\in D^{1}_{k,J}.\]
We apply this property inductively in the reverse direction (as in backpropagation), and we divide the non-zero region \(\{x:f_{N}(W^{t},x)>0\}\) into several parts appropriately. To do that, we suitably rewrite the natural number \(j\in\{1,2,\cdots,2^{n}\}\) as follows:
\[j=\delta^{j}_{1}+2\delta^{j}_{2}+2^{2}\delta^{j}_{3}+\cdots+2^{n-1}\delta^{j} _{n},\]
where \(\delta^{j}_{k}\in\{0,1\}\). Let
\[D_{j}:=\bigcap_{k=1}^{n}D^{\delta^{j}_{k}}_{k,J^{j}_{k}}\quad\text{for}\quad J ^{j}_{k}:=\sum_{\ell=k}^{n}2^{\ell-k}\delta^{j}_{\ell}.\]
By using this \(D_{j}\), the derivative formula becomes much simpler:
\[\partial_{x}z^{2n+1}(x)=m_{j}w^{1}_{j}\quad\text{for}\quad x\in D_{j},\quad \text{where}\quad m_{j}:=\prod_{k=1}^{n}\left(2m^{\delta^{j}_{k}}_{k,J^{j}_{k} }\right). \tag{5}\]
By the construction of \(D_{j}\), we observe that
\[\{x:f_{N}=0\}\cap\partial D_{j}\subset\{x:w^{1}_{j}\cdot x+b^{1}_{j}=0\},\]
then, by the fundamental theorem of calculus, we have
\[z^{2n+1}(x)=\sum_{j=1}^{2^{n}}\left(h(m_{j}w^{1}_{j}\cdot x+m_{j}b^{1}_{j}) \chi_{D_{j}}(x)\right).\]
Therefore we obtain the following explicit formula:
\[f_{N}(x)=\min\left\{\sum_{j=1}^{2^{n}}\left(h(\tilde{w}^{1}_{j}\cdot x+\tilde{b}^{1}_{j})\chi_{D_{j}}(x)\right),1\right\}, \tag{6}\]
where \(\tilde{w}^{1}_{j}:=m_{j}w^{1}_{j}\) and \(\tilde{b}^{1}_{j}:=m_{j}b^{1}_{j}\). Then we can apply Lemma 4 in the next section, and complete the proof.
## 4. Key lemma for pointwise convergence
In this section we give several assumptions and a geometric a-priori region, just for providing a much simpler argument. First let us assume
\[\tilde{w}^{t=0}_{j}\perp\partial H^{\circ}_{N^{-\frac{2}{d-1}}}(y_{j},\tau_{j }). \tag{7}\]
To give the geometric a-priori region, we use parametrized hyperplanes. For \(r=\{r_{ji}\}_{ji}\) with \(r_{ji}\in(-1,1)\) (\(i=1,\cdots,d\), \(j=1,\cdots,2^{n}\)), let \(h_{j}(r)\) be the unique hyperplane determined by the following points, which are linearly independent (due to the assumption (7)):
\[\{\tilde{c}_{ji}(r)\}_{i=1}^{d}:=\left\{2\frac{\tilde{w}_{j}}{|\tilde{w}_{j}| ^{2}}r_{ji}+c_{ji}\right\}_{i=1}^{d}.\]
To be more precise, by the Gram-Schmidt process there is \(\tilde{\tau}_{j}\in\mathbb{S}^{d-1}\) such that
\[(\tilde{c}_{ji}(r)-\tilde{c}_{ji^{\prime}}(r))\cdot\tilde{\tau}_{j}=0\quad(i \neq i^{\prime}),\]
and then we define the hyperplane \(h_{j}(r)\) as follows:
\[h_{j}(r):=\{x:(x-\tilde{c}_{j1}(r))\cdot\tilde{\tau}_{j}=0\}.\]
By using this \(h_{j}(r)\), we now define the a-priori region \(\mathcal{L}_{j}\) as follows:
\[\mathcal{L}_{j}:=\bigcup_{r_{j1}\in(-1,1)}\bigcup_{r_{j2}\in(-1,1)}\cdots \bigcup_{r_{jd}\in(-1,1)}h_{j}(r).\]
Before we state the key lemma, we need the following proposition.
**Proposition 3**.: _Assume_
\[\{c_{ji}\}_{i=1}^{d}\subset D_{j}\quad\text{and}\quad\tilde{w}_{j}\perp \partial H^{\circ}_{N^{-\frac{2}{d-1}}}(y_{j},\tau_{j}). \tag{8}\]
_Then there exists \(\gamma>0\) such that if_
\[|\tilde{w}_{j}|>\gamma, \tag{9}\]
_then_
\[\ell_{ji}(s):=\tilde{w}_{j}^{1}s+c_{ji}\in D_{j}\setminus(\cup_{j\neq j^{ \prime}}\mathcal{L}_{j^{\prime}})\quad\text{for}\quad s\in[-2|\tilde{w}_{j}|^ {-2},2|\tilde{w}_{j}|^{-2}]. \tag{10}\]
We need (10) for providing the simple induction argument (see the proof of Lemma 4). This means that, by a more careful computation, we may be able to relax (10) further.
Proof.: The case \(|\tilde{w}_{j}|=\infty\) automatically satisfies (10), and then we just apply a continuity argument.
**Lemma 4**.: _Assume that the initial weight and bias \(W^{t=0}\) satisfy (8) and (9). Let \(\epsilon=\gamma^{2}\). Then, by choosing \(\mathcal{D}\subset[-1,1)^{d}\) appropriately, and by a suitable change of variables \(W^{t}\mapsto(\alpha^{t},\beta^{t})\), \(f_{N}(\alpha^{t},\beta^{t})\) converges to \(f_{N}^{\circ}\) pointwise (as \(t\to\infty\)). The change of variables is explicitly written as_
\[\alpha_{j}:=m_{j}^{2}|w_{j}^{1}|^{2}\quad\text{and}\quad\beta_{j}=m_{j}(w_{j} ^{1}\cdot c_{ji}+b_{j}^{1}),\]
_where the definition of \(m_{j}\) is given in (5). Also we have the following convergence rate:_
\[\|f_{N}(\alpha^{t},\beta^{t})-f_{N}^{\circ}\|_{L^{r}}^{r}\lesssim t^{-1/3}.\]
**Remark 6**.: Formally, the coefficients of \(f_{N}^{\circ}\) include infinity. But this is rather reasonable, since we need to express a discontinuity by using finitely many compositions of the ReLU function.
Proof of Lemma 4.: At the \(t\)-th step of gradient descent, we choose \(2^{n}\cdot d\) straight lines passing through \(\{c_{ji}\}_{ji}\), and we denote them by \(\ell_{ji}^{t}\) (\(i=1,2,\cdots,d\), \(j=1,\cdots,2^{n}\)):
\[\ell_{ji}^{t}(s)=\tilde{w}_{j}^{1,t}s+c_{ji}^{t}.\]
Rigorously, this \(\tilde{w}_{j}^{1,t}\) is frozen. More precisely, when we take derivatives in \(w_{j}^{1}\) or \(b_{j}^{1}\), we regard this \(\tilde{w}_{j}^{1,t}\) as a constant, not a variable. Let \(\mathcal{D}\) be such that
\[\mathcal{D}:=\bigcup_{i=1}^{d}\bigcup_{j=1}^{2^{n}}\bigcup_{s=-2\gamma^{-2}}^{2\gamma^{-2}}\ell_{ji}(s).\]
First we show that this \(\mathcal{D}\) is independent of \(t\). We plug these lines into (6), and introduce the new variables \(\alpha_{j}\), \(\beta_{j}\):
\[(\tilde{w}_{j}^{1}\cdot\ell_{ji}(s)+\tilde{b}_{j}^{1}) =|\tilde{w}_{j}^{1}|^{2}s+(\tilde{w}_{j}^{1}\cdot c_{ji}+\tilde{b}_{j}^{1})\] \[=:\alpha_{j}s+\beta_{j}. \tag{11}\]
Note that \(\tilde{w}_{j}^{1}\cdot c_{ji}\) is independent of \(i\). This means that,
\[\text{if}\quad\tilde{w}_{j}^{1,t}\perp\partial H^{\circ}_{N^{-\frac{2}{d-1}}} (y_{j},\tau_{j}),\quad\text{then}\quad\tilde{w}_{j}^{1,t+1}\perp\partial H^{ \circ}_{N^{-\frac{2}{d-1}}}(y_{j},\tau_{j}).\]
Now we rewrite the error function \(E\) as follows:
\[E(\alpha,\beta) :=\frac{d}{2|\mathcal{D}|}\sum_{j=1}^{2^{n}}E_{j}(\alpha_{j},\beta_{j})\] \[:=\frac{d}{2|\mathcal{D}|}\sum_{j=1}^{2^{n}}\left(\int_{-2\gamma^{-2}}^{0}(\alpha_{j}s+\beta_{j})^{2}ds+\int_{0}^{2\gamma^{-2}}(1-(\alpha_{j}s+\beta_{j}))^{2}ds\right)\] \[=\frac{d}{2|\mathcal{D}|}\sum_{j=1}^{2^{n}}\left(\int_{-\frac{\beta_{j}}{\alpha_{j}}}^{0}(\alpha_{j}s+\beta_{j})^{2}ds+\int_{0}^{\frac{1-\beta_{j}}{\alpha_{j}}}(1-(\alpha_{j}s+\beta_{j}))^{2}ds\right),\]
where, in the second line, the integrand is understood as the error of the clipped ReLU output, which vanishes for \(s<-\beta_{j}/\alpha_{j}\) and saturates to \(1\) (matching the target) for \(s>(1-\beta_{j})/\alpha_{j}\); this justifies truncating the integration intervals in the last line.
Direct calculations yield \(|\mathcal{D}|=4d2^{n}/\gamma^{2}\) and
\[E_{j}(\alpha_{j},\beta_{j})=\frac{1-3\beta_{j}+3\beta_{j}^{2}}{3\alpha_{j}}.\]
Then we have
\[\partial_{\alpha_{j}}E_{j}=-\frac{1}{3\alpha_{j}^{2}}\left(1-3\beta_{j}+3\beta_{j}^{2}\right)\quad\text{and}\quad\partial_{\beta_{j}}E_{j}=-\frac{1}{\alpha_{j}}(1-2\beta_{j}).\]
Since \(1-3\beta_{j}+3\beta_{j}^{2}\geq 1/4>0\) for any \(\beta_{j}\in\mathbb{R}\), we always have \(\partial_{\alpha_{j}}E<0\) and
\[\alpha_{j}^{t+1}=\alpha_{j}^{t}-\epsilon\partial_{\alpha_{j}}E\geq\alpha_{j}^ {t}+\frac{\gamma^{2}}{24|\mathcal{D}|(\alpha_{j}^{t})^{2}}.\]
Thus \(\alpha^{t+1}>\alpha^{t}\) and then we have \(|\tilde{w}_{j}^{1,t+1}|>\gamma\) inductively. By directly solving the ODE: \(\frac{d}{dt}g(t)=1/g(t)^{2}\), applying the mean-value theorem and the comparison principle, we have \(\alpha_{j}^{t}\gtrsim t^{1/3}\). Next we consider \(\beta_{j}\). First we show \(0<\beta_{j}<1\). By
\[\partial_{\beta_{j}}E=-\frac{1}{\alpha_{j}^{t}}(1-2\beta_{j}^{t})\in(0,1/ \alpha_{j}^{t})\quad\text{for}\quad 1/2<\beta_{j}<1,\]
and \(\epsilon=\gamma^{2}<\alpha_{j}^{t}\), we have \(\beta_{j}^{t}>\beta_{j}^{t+1}>0\). Conversely, if \(\beta_{j}^{t}\in(0,1/2)\), then \(\beta_{j}^{t}<\beta_{j}^{t+1}<1\). Thus \(\beta_{j}^{t}\in(0,1)\). This means that \(\{c_{ji}\}_{i=1}^{d}\subset D_{j}^{t+1}\) inductively. Thus the next step \((\alpha^{t+1},\beta^{t+1})\) also satisfies (10) inductively. Moreover, since \(|\beta_{j}^{t+1}-\beta_{j}^{t}|\to 0\), \(\beta_{j}^{t}\) converges. Since \(\beta_{j}^{t+1}-\beta_{j}^{t}=0\) if and only if \(\beta_{j}^{t}=1/2\), \(\beta_{j}^{t}\) converges to \(1/2\). Therefore \(f_{N}(\alpha^{t},\beta^{t})\) converges to \(f_{N}^{\circ}\) pointwise. Moreover, we immediately have the following estimate:
\[\|f_{N}(\alpha^{t},\beta^{t})-f_{N}^{\circ}\|_{L^{r}}^{r}\lesssim t^{-\frac{1} {3}}|\partial\Omega_{N}^{\circ}|\lesssim t^{-1/3}.\]
This is the desired estimate.
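The explicit gradients above make the convergence mechanism easy to probe numerically. The following sketch iterates plain gradient descent on a single \(E_{j}\); the prefactor \(d/(2|\mathcal{D}|)\) and the learning coefficient are absorbed into one effective step size, a simplification made only for illustration. It confirms \(\alpha_{j}^{t}\asymp t^{1/3}\) and \(\beta_{j}^{t}\to 1/2\).

```python
import numpy as np

# Gradient descent on E_j(alpha, beta) = (1 - 3*beta + 3*beta**2) / (3*alpha),
# using the closed-form gradients derived in the proof of Lemma 4.
eps = 0.05                      # effective step size (assumption for the demo)
alpha, beta = 2.0, 0.9          # initial values with large alpha, beta in (0, 1)
for t in range(1, 200_001):
    g_alpha = -(1.0 - 3.0 * beta + 3.0 * beta**2) / (3.0 * alpha**2)
    g_beta = -(1.0 - 2.0 * beta) / alpha
    alpha -= eps * g_alpha      # g_alpha < 0 always, so alpha increases
    beta -= eps * g_beta        # beta is driven monotonically towards 1/2
    if t in (10**3, 10**4, 10**5, 2 * 10**5):
        # alpha_t ~ t^{1/3}: the ratio alpha / t^{1/3} should stabilise
        print(f"t={t:>7d}  alpha={alpha:9.3f}  "
              f"alpha/t^(1/3)={alpha / t**(1/3):.4f}  beta={beta:.6f}")
```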
## 5. Conclusion
In previous approximation error analyses, ReLU deep neural networks had been crucially applied for constructing one-dimensional polynomials (spline functions), which are needed for wavelet expansions. In contrast with these, in this paper we found a ReLU DNN architecture which is suitable for capturing the convex shape of discontinuities of indicator functions (target functions), accompanied by pointwise convergence. Our next question would be what kind of ReLU-DNN architectures really attain pointwise convergence (or not) for mixed concave and convex discontinuities, and this is our future work.
**Acknowledgments.** I am grateful to Professors Masaharu Nagayama, Eiichi Nakai, Kengo Nakai, Yoshitaka Saiki and Yuzuru Sato for valuable comments. Research of TY was partly supported by the JSPS Grants-in-Aid for Scientific Research 20H01819 and 21K03304. This paper was part of the lecture notes for the class Mathematical Analysis I (spring semester 2023) for undergraduate/graduate courses at Hitotsubashi University.
id: 2305.19424
title: Quantifying Overfitting: Evaluating Neural Network Performance through Analysis of Null Space
authors: Hossein Rezaei, Mohammad Sabokrou
published: 2023-05-30T21:31:24Z
link: http://arxiv.org/abs/2305.19424v1
abstract: Machine learning models that are overfitted/overtrained are more vulnerable to knowledge leakage, which poses a risk to privacy. Suppose we download or receive a model from a third-party collaborator without knowing its training accuracy. How can we determine if it has been overfitted or overtrained on its training data? It's possible that the model was intentionally over-trained to make it vulnerable during testing. While an overfitted or overtrained model may perform well on testing data and even some generalization tests, we can't be sure it's not over-fitted. Conducting a comprehensive generalization test is also expensive. The goal of this paper is to address these issues and ensure the privacy and generalization of our method using only testing data. To achieve this, we analyze the null space in the last layer of neural networks, which enables us to quantify overfitting without access to training data or knowledge of the accuracy of those data. We evaluated our approach on various architectures and datasets and observed a distinct pattern in the angle of null space when models are overfitted. Furthermore, we show that models with poor generalization exhibit specific characteristics in this space. Our work represents the first attempt to quantify overfitting without access to training data or knowing any knowledge about the training samples.

# Quantifying Overfitting: Evaluating Neural Network Performance through Analysis of Null Space
###### Abstract
Machine learning models that are overfitted/overtrained are more vulnerable to knowledge leakage, which poses a risk to privacy. Suppose we download or receive a model from a third-party collaborator without knowing its training accuracy. How can we determine if it has been overfitted or overtrained on its training data? It's possible that the model was intentionally over-trained to make it vulnerable during testing. While an overfitted or overtrained model may perform well on testing data and even some generalization tests, we can't be sure it's not over-fitted. Conducting a comprehensive generalization test is also expensive. The goal of this paper is to address these issues and ensure the privacy and generalization of our method using only testing data. To achieve this, we analyze the null space in the last layer of neural networks, which enables us to quantify overfitting without access to training data or knowledge of the accuracy of those data. We evaluated our approach on various architectures and datasets and observed a distinct pattern in the angle of null space when models are overfitted. Furthermore, we show that models with poor generalization exhibit specific characteristics in this space. Our work represents the first attempt to quantify overfitting without access to training data or knowing any knowledge about the training samples. 1
Footnote 1: The source code will be available after the review.
## 1 Introduction
Deep learning models have been very successful in many applications such as computer vision, natural language processing, and speech recognition. These models are trained on large amounts of data and have demonstrated outstanding performance in tasks such as image classification, object detection, and language translation [1; 2]. However, despite their effectiveness, ensuring the privacy and trustworthiness of deep learning models remains a significant challenge [3; 4; 5].
In today's data-driven world, accessing pre-trained models has become increasingly common, whether obtained from the internet or delivered by third-party companies. However, it is crucial to ensure that these models uphold privacy standards and do not possess knowledge leakages. A key factor in determining the vulnerability of a model to membership inference attacks is the presence of overfitting. Generally, the more overfitted a model is, the more susceptible it becomes to such attacks. However, assessing this characteristic becomes challenging when we lack information about the
model's training accuracy or training data. In this paper, we aim to address this critical question and explore potential solutions for evaluating model vulnerability in situations where these crucial details are unavailable (see Fig. 1).
Generally, one of the key concerns in deep learning is that the models often memorize the training data [6; 7]. This means that the models may overfit the training data, resulting in poor generalization to new data. Additionally, the models may memorize sensitive information from the training data, which can pose a risk to privacy [8]. For instance, if a model has a privacy leakage, attackers may be able to extract sensitive information from the model during inference [9; 10]. If an attacker gains access to such information, it can have severe consequences for individuals or organizations.
A primary factor that makes a deep learning model vulnerable to privacy breaches is overfitting [9]. Researchers have proposed various methods to address this issue and enhance the privacy and trustworthiness of deep learning models. For example, [11; 12] leveraged differentially private training, [13; 14] exploited gradient clipping, and [15; 16] utilized machine unlearning to improve privacy and prevent knowledge leakage.
One of the simplest ways to detect overfitting is by comparing the accuracy of the model on the training and testing datasets. If the model achieves high accuracy on the training data but low accuracy on the testing data (low bias and high variance), it may be overfitting [7; 17]. However, obtaining the accuracy of the training data requires access to the training dataset, which may not always be feasible or ethical. Some papers, like [18; 10; 9], attempt to measure forgetting and memorization by conducting attacks while relying on training data to accomplish this task.
Another approach to detecting overfitting is by performing a generalization/robustness evaluation [19; 20; 21]. This test evaluates how well the model can generalize to new data by measuring its performance on a separate dataset that it has not seen before. If the model performs well on the generalization test, it is less likely to suffer from overfitting. However, this approach has some drawbacks. Firstly, it can be costly to implement, as it requires collecting several separate datasets for the generalization test and it takes too much time to perform multiple inferences from the model. Secondly, since generalization tests are widely known, an attacker could overtrain/overfit the model on those tests, making the model robust to those specific tests and potentially opening the door for privacy breaches.
In addition to the aforementioned methods, another simple approach to investigating the issue of overfitting is to examine the uncertainty of the model by analyzing the soft-max output or logits [22]. The idea is that there is presumably a direct relationship between the model's uncertainty and overfitting. However, in Section 4.2, we demonstrate that this argument does not always hold true.
To address these challenges, we propose a novel method to detect overfitting and ensure the privacy and generalization of deep learning models using only a small amount of test data. The proposed method involves analyzing the null space in the last layer of neural networks, which enables us to quantify overfitting without access to the training data or knowledge of its accuracy. The null space is the set of all vectors that the neural network maps to zero. Interestingly, we find that analyzing the angle between the null space of the weights and the representation can guide us in detecting overfitting and determining the generalization performance of the model. The proposed method has been evaluated on various architectures and datasets, and the results show that there is a distinct pattern in the angle of the null space when models are overfitted. Furthermore, we illustrate that
Figure 1: We possess two downloaded models with no information regarding their training data or training accuracy. Additionally, we only have access to a limited subset of test data. Despite model 1 exhibiting higher test accuracy, it appears to suffer from overfitting, and existing current methods are unable to effectively discern this issue.
models exhibiting poor generalization display specific characteristics within this space. _The proposed method represents one of the first attempts to quantify overfitting without access to training data or any knowledge about the training samples_. The method is easy to implement and can be applied to various architectures and datasets, making it a promising tool to enhance the privacy of deep learning models.
## 2 Related Work
In this section, we provide a review delving into the literature attempting to measure the overfitting and generalization capability of machine learning models. Furthermore, we explore several works that leverage null space across various applications using neural networks.
### Overfitting & Generalization
Overfitting arises when a model becomes too complex and memorizes the training data instead of learning the representative patterns, resulting in failure to generalize well to unseen datasets. To address this issue, Werbachowski et al. [23] introduce a non-intrusive statistical test using adversarial examples to detect test set overfitting in machine learning models. Yet, one notable challenge they highlighted is accurately measuring test set overfitting due to shifts in data distribution. Moreover, Jagielski et al. [9] explored memorization, forgetting, and their impact on overfitting via a privacy attack method. They train two models with extra data to measure forgetting, using the success rate to identify retained sensitive information and discarded irrelevant or noisy information. Carlini et al. [10] consider a testing approach that evaluates the level of risk associated with generative sequence models inadvertently memorizing infrequent or distinct training data sequences. To assess the level of overfitting in convolutional neural networks (CNNs), PHOM [24] employs trained network weights to create clique complexes on CNN layers. By examining co-adaptations among neurons via one-dimensional persistent homology (PH), it detects overfitting without relying on training data. PHOM differs from our work in terms of efficiency and complexity.
Generalization and Out-Of-Distribution (OOD) generalization refer to our model's ability to adapt appropriately to new, previously unseen data drawn from either the same distribution or a different distribution as the training data, respectively. To address this issue, Neyshabur et al. [25] connect sharpness to PAC-Bayes theory and show that expected sharpness, a measure of network output change with input change, along with weight norms, can capture neural network generalization behavior effectively. Some other works attempt to evaluate the generalization of deep networks by defining bounds. For instance, Liang et al. [26] introduce the Fisher-Rao norm, an invariant measure based on information geometry. It quantifies the local inner product on positive probability density functions (PDFs) and relates the loss function to the negative logarithm of conditional probability, with Fisher information as the gradient. Kuang et al. [27], and Shen et al. [28] use the average accuracy to measure OOD generalization. While Duchi et al. [29], and Esfahani et al. [30] measure the OOD generalization using worst-case accuracy.
Unlike current approaches, we propose to measure overfitting and generalization without access to the training data or training accuracy, utilizing only a small subset of the test set to determine the degree of overfitting and generalization capability.
### Null Space
The concept of the null space is important across diverse domains of mathematics, such as linear algebra, differential equations, and control theory. In the context of neural networks, the null space of a weight matrix can be used for various applications. Most research work is focused on analyzing null space for out-of-distribution detection.
In novelty detection, Bodesheim et al. [31] use null space for detecting samples from unknown classes in object recognition by mapping training samples to a single point, enabling joint treatment of multiple classes and novelty detection in one model. IKNDA [32] addressed the demanding computational burden caused by kernel matrix eigendecomposition in this method and performed novelty detection by extracting new information from newly-added samples, integrating it with the existing model, and updating the null space basis to add a single point to the subspace. For outlier detection, Null Space Analysis (NuSA) [33] is proposed to detect outliers in neural networks using
weight matrix null spaces in each layer. It provides competency awareness in ANNs and tackles adversarial data points by controlling null space projection. Likewise, Wang et al. [34] utilize null space to measure out-of-distribution (OOD) degree. by decomposing feature vectors, generating confident outlier images, and subsequently calculating angle-based OOD score. Additionally, Idnani et al. [35] explore the null space's impact on OOD generalization, introducing null space occupancy as a failure mode in neural networks. They optimize network weights using orthogonal gradient descent to reduce null space occupancy, which enhances generalization.
Drawing inspiration from the application of null space in out-of-distribution detection, we employ null space properties to assess both the degree of overfitting and the generalization capacity of machine learning models.
## 3 Proposed Method
The aim of this study is to explore how to detect overfitting in machine learning models without prior knowledge of the training samples or accuracy. We discovered a close correlation between the weights associated with each class and the representation. If the weight for each class is orthogonal to the representation, it means that the input does not belong to that class. Conversely, if the weight is in the same direction as the representation, the angle between them is close to zero, indicating that the input belongs to that class. Although the main concepts of different classes are different, there are some common/shared characteristics between them. Therefore, the angle between the representation and the weight of the targeted class should be close to zero, but there should be some gap/angles that reflect the relationship with other classes. We found that when a model is over-fitted or over-trained, it loses its relationship with the other classes (the angle between the target class weight and the representation becomes very close to zero) and generalizes less well. To apply these findings, we propose monitoring the angle between the weights and the representation during model training. Our approach provides a simple and effective method for detecting overfitting in machine learning models. By using our proposed method, we can detect models that generalize well to new data and avoid overfitting, even without prior knowledge of the training data or accuracy.
Formally speaking, we investigate the concept of overfitting from a null space perspective. As mentioned, the goal is to determine whether models are overfitted or not and analyze their generalization capability. To accomplish this, suppose \(\mathcal{M}\) is a set of models \(\mathcal{M}=\{m_{1},m_{2},..,m_{k}\}\), and we have access to the test data (or validation samples) \(\mathcal{X}=\{x_{1},x_{2},..,x_{n}\}\). \(\mathcal{X}\) is fed into the network to obtain its representations, i.e., \(\mathcal{R}=\{r_{1},r_{2},..,r_{n}\}\) (\(r_{i}\) corresponds to the sample \(x_{i}\)). Then we leverage the angle between \(r_{i}\) and the null space of the weights that are not associated with the ground truth, as well as the angle between \(r_{i}\) and the weights associated with the ground truth, to establish two scores for measuring overfitting and generalization. These scores are defined as follows:
\[\mathcal{O}=\alpha+\beta,\]
\[\mathcal{G}=\frac{\alpha}{max(\alpha)}+\frac{|\beta|}{max(|\beta|)},\]
where \(\mathcal{O}\) denotes the degree of overfitting, while \(\mathcal{G}\) represents the amount of generalization capability. \(\alpha\) denotes the average of the angles between \(\mathcal{R}\) and the weight vectors of the target classes, while \(\beta\) denotes the average of the angles between \(\mathcal{R}\) and the null space of the weight vectors (column space) of the false classes.
### Null Space & Column Space
In linear algebra, the null space and column space are two fundamental subspaces associated with a matrix. The null space is sometimes called the kernel of the matrix, while the column space is sometimes called the range of the matrix. The null space of an m x n matrix A is a subspace of \(\mathcal{R}^{n}\), written as \(Nul(A)\), and defined by:
\[Nul(A)=\{x\in\mathcal{R}^{n}\,|\,Ax=0\}\]
Where A refers to a linear mapping. Geometrically, the null space represents all the directions in which the matrix A "collapses" to zero. The column space of an m x n matrix A, written as \(Col(A)\)
is a subspace of \(\mathcal{R}^{m}\), and is the set of all linear combinations of the columns of A. In other words, The column space of a matrix A is the span of its columns. Geometrically, the column space represents the "shadow" of the matrix A, as cast onto a lower-dimensional subspace. If \(A=[a_{1},...,a_{n}]\), then
\[Col(A)=Span\{a_{1},...,a_{n}\}\]
In fact, the left null space and the column space of a matrix are orthogonal complements of each other. This means that any vector in the left null space is orthogonal to any vector in the column space, and vice versa. To see why this is true, see Appendix A.
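As a quick numerical sanity check of this orthogonality (a sketch using SciPy's `null_space` helper; not part of the method itself):

```python
import numpy as np
from scipy.linalg import null_space

# Check that the left null space Nul(A^T) is orthogonal to the column space Col(A).
rng = np.random.default_rng(0)
A = rng.normal(size=(5, 3))               # an m x n matrix with m > n
Q_col, _ = np.linalg.qr(A)                # (5, 3) orthonormal basis of Col(A)
Q_lns = null_space(A.T)                   # (5, 2) orthonormal basis of Nul(A^T)
print(np.allclose(Q_col.T @ Q_lns, 0.0))  # True
```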
### Over-fitting & Generalization Measurement
To analyze overfitting, we split the weight vectors into two groups: The first group for the weights vector of the true class (target class), and the second group for the weights vectors of the false classes. We then analyze the behavior of the representation vector toward these two groups.
Null Space Angle. In deep learning models, we use the inner product. Specifically, we use the following formula:
\[y=w^{T}\cdot x\]
where \(w^{T}\cdot x\) denotes the inner product between the transpose of the weight vectors and the representation vector \(x\), and \(y\) represents the logits. Since the vectors in the second group represent false classes, their logits should have low values and should not significantly influence the output decision. Therefore, the angle between the weight vectors of the false classes (group 2) and \(x\) should be close to 90 degrees. Furthermore, as discussed in [34], the dimension of the representation vector is typically larger than the dimension of the logits. This can result in some information loss when the representation vector is fed into the MLP layers. By leveraging the null space, and the behavior of the representation vector toward this space, we can potentially recover some of this lost information, which may be useful for analyzing overfitting and generalization.
The space spanned by the vectors in the second group is known as the column space. As previously mentioned, the null space is orthogonal to this space. For the aforementioned reasons, to analyze
Figure 2: Our method is based on a simple framework. We randomly select a subset of the test data to compare the degree of overfitting of the two models. The weights vector of the target class is denoted by \(W_{1}\), while the Null Space plane corresponds to the Null Space associated with the weights vectors of the false classes (\(W_{2}\&W_{3}\)). To compute the degree of overfitting, we first pass the test samples through the encoder to obtain their representation vectors. Next, we measure the angle between the representation vector and the null space plane (\(\beta\)) and the angle between the representation vector and the true class weight vector (\(\alpha\)) ( we perform this process for all samples and ultimately calculate the average). Finally, we compute the sum of \(\alpha\) and \(\beta\), which serves as a quantitative measure of the degree of overfitting.
the relationship between the representation vector and the vectors in the second group, we utilize the concept of null space. In other words, we measured the angle between the null space and the representation vector. In this way, We found that this angle provides us with useful information for analyzing overfitting. In Fig. 2, \(\beta\) represents this angle.
True Angle. As previously discussed in the Null Space Angle section, deep learning models use the inner product to predict the output. Since the output of the network depends on the argmax of the Logits/SoftMax, the logit corresponding to the inner product of the representation vector and the vector from group 1 (the weight vector of the target class) should have the maximum value among all logits. To ensure this, the angle between the representation vector and the vector from group 1 should be close to zero.
We analyzed this angle and found that it provides us with some information about overfitting and generalization. Therefore, we measured it to determine the degree of overfitting and generalization capability. In Fig. 2, \(\alpha\) represents this angle.
Overfitting. We have observed that when the network is not overfitted (i.e., it exhibits healthy forgetting rather than memorization), it tends to optimize two things simultaneously. Firstly, it minimizes the angle between the representation vector and the correct class weight vector (i.e., the vector associated with group 1) to ensure that the representation vector is, to some extent, aligned with the correct class. Secondly, it maximizes the angle between the representation vector and the null space.
In other words, the network is trained to adjust its learnable weights and parameters in a way that moves the representation vector away from the null space while simultaneously maximizing its projection onto the vector of group 1. This process leads to a decrease in the value of \(\alpha\) and an increase in the absolute value of \(\beta\) (or a decrease in \(\beta\) itself), resulting in an overall decrease in the sum of \(\alpha\) and \(\beta\). \(\alpha\) represents to what extent the network correctly predicts the label, while \(\beta\) indicates how likely the network considers the representation vector to be similar to other classes.
Therefore, the sum of \(\alpha\) and \(\beta\) serves as an indicator of the degree of overfitting. The lower this value is, the less overfitting the model is.
**Important note:** In fact, the angle between the representation vector and the weights vector of the target class (denoted as \(\alpha\) in Fig. 2) indicates the relationship between the input image and the target class. Meanwhile, the angle between the representation vector and the null space (denoted as \(\beta\) in Fig. 2) reflects the average behavior of the input image towards false classes. For instance, let's consider an example using CIFAR10, where our input sample is a cat. In this case, the representation vector should be close to the weights vector of the cat target class. Furthermore, since there is another class called "dog" in CIFAR10, the representation vector of the cat should be slightly closer to the weights vector of the "dog" class (since the cat and dog bear some resemblance to each other; slightly less than 90 degrees).
On the other hand, classes like "ship" and "truck" have no similarity to the "cat" class. Hence, the angle between the representation vector and the weight vectors of these classes should be slightly greater than 90 degrees. It is worth noting that the angle between the representation vector and the null space (denoted as \(\beta\) in Fig. 2) represents the average behavior of the input image towards the false classes. Therefore, on average, the angle between the representation vector and this space should be more than 90 degrees. For this reason, when this angle exceeds 90 degrees, the representation vector falls on the far side of the space, and we take the angle to be negative.
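Putting the two measurements together, the following NumPy sketch computes the per-model averages of \(\alpha\) and the signed \(\beta\) from a batch of representations. The projection and sign conventions here are our reading of the description above (the authors' implementation was announced for release after review), so treat this as an illustrative assumption rather than the exact original code.

```python
import numpy as np

def overfitting_angles(R, W, y):
    """Average alpha (angle between each representation and its true-class
    weight vector) and signed beta (angle between the representation and the
    null space of the false-class weight vectors). R: (n, D) representations,
    W: (C, D) last-layer weight rows, y: (n,) integer labels."""
    alphas, betas = [], []
    for r, c in zip(R, y):
        w_true = W[c]
        cos_a = (r @ w_true) / (np.linalg.norm(r) * np.linalg.norm(w_true))
        alphas.append(np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))))
        W_false = np.delete(W, c, axis=0)        # (C-1, D) false-class weights
        Q, _ = np.linalg.qr(W_false.T)           # basis of their column space
        r_null = r - Q @ (Q.T @ r)               # component in the null space
        cos_b = np.linalg.norm(r_null) / np.linalg.norm(r)
        beta = np.degrees(np.arccos(np.clip(cos_b, -1.0, 1.0)))
        if (W_false @ r).mean() < 0:             # average angle > 90 degrees
            beta = -beta
        betas.append(beta)
    return float(np.mean(alphas)), float(np.mean(betas))

# Degree of overfitting of one model:      O = alpha + beta.
# Generalization score over a model set:   G_k = alpha_k / max_k alpha_k
#                                              + |beta_k| / max_k |beta_k|.
```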
Generalization. We have discovered that when the network possesses good generalization capabilities, it attempts to reduce the projection of the representation vector onto both the null space and the weight vector of the target class. This resembles the behavior of a non-overfitted network, which strives to move the representation vector away from the null space. However, in terms of moving the representation vector away from the weight vector of the target class (true class), it opposes the behavior associated with low overfitting. It is important to note, as demonstrated by the results under various corruptions (as shown in Fig. 4), that a model with less overfitting does not necessarily exhibit superior generalization ability. Hence, we have come to understand that if a model aims to possess both high generalization ability and reduced overfitting, then, while increasing the angle of the representation vector with the null space, it should establish a balance between a small and a large angle between the representation vector and
the weights vector of the target class. Essentially, the projection should neither be excessively high nor too low.
## 4 Experiments
We evaluate the performance of our method on several widely-used convolutional neural network architectures, including ResNet18, ResNet34, ResNet50, DenseNet121, VGG19, and MobileNetV2, using three different datasets: CIFAR10, SVHN, and CIFAR100. To conserve space, we present the results for the ResNet18 architecture on CIFAR10 in the main text, with additional experiments provided in Appendix B.
### Setup
CIFAR10. The CIFAR10 dataset is a widely-used image classification dataset comprising 60,000 32x32 color images across 10 classes. To evaluate the performance of our method, we trained 11 different ResNet18 models, with and without data augmentation and dropout, on this dataset (for different numbers of epochs). This allowed us to obtain models with varying generalization and overfitting capabilities. For instance, in Model 8 (as shown in Fig. 3 and Table 1), the first two layers of ResNet18 were utilized with data augmentation and dropout techniques (obtained at epoch 148).
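A minimal sketch of how such variants can be produced with torchvision follows; the hyperparameters are illustrative assumptions, not the exact settings used for Models 1-11 (some of which also vary dropout and network depth).

```python
import torch
import torchvision
import torchvision.transforms as T

# Toggling augmentation and the stopping epoch is enough to obtain models
# with very different degrees of overfitting on CIFAR10.
augment = T.Compose([T.RandomCrop(32, padding=4),
                     T.RandomHorizontalFlip(),
                     T.ToTensor()])
plain = T.ToTensor()                          # drop augmentation to overfit faster
train_set = torchvision.datasets.CIFAR10("data", train=True, download=True,
                                         transform=augment)  # or `plain`
model = torchvision.models.resnet18(num_classes=10)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=5e-4)
```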
Svhn. The SVHN dataset is another commonly used dataset for image classification tasks, consisting of 600,000 labeled digit images. For this dataset, we followed the same methodology as for CIFAR10. Results are available in Appendix B.
Cifar100. The CIFAR100 dataset is similar to CIFAR10, but it differs in the number of classes and images per class. Specifically, it consists of 100 classes, with each class containing 600 images. We trained ResNet34, ResNet50, DenseNet121, VGG19, and MobileNetV2 on this dataset and then evaluated our method on these models. The results are available in Appendix B.
### Results
Overfitting. After training the 11 different ResNet18 models on CIFAR10, we randomly selected a small subset of the CIFAR10 test data and fed it into these models. To assess overfitting, we measured the values of \(\alpha\) and \(\beta\) for these samples and then calculated their average. The results are presented in Table 1 and also plotted in Fig. 3. Our analysis shows that as the degree of overfitting increases, the sum of \(\alpha\) and \(\beta\) also increases, which is reflected in the "\(\mathcal{O}\)" column of Table 1. As shown in Fig. 3, the size of the circles (angles) increases as we move from the bottom to the top or from right to left, indicating an increase in overfitting. Conversely, when we move diagonally from the bottom left to the top right, the size of the circles (angles) decreases (\(\mathcal{O}\) decreases), indicating a decrease in overfitting.
The results for the SVHN and CIFAR100 are available in Appendix B.
Figure 3: (a) shows the results of our method applied to the 11 different ResNet18 models that were trained on CIFAR10. The size of the circles in the plot corresponds to the degree of overfitting, as measured by the ”\(\mathcal{O}\)” values in Table 1. (b) and (c) show the SoftMax and Logit outputs of these models, respectively.
Generalization. After completing the training process for 11 distinct ResNet18 models on CIFAR10, we proceeded to randomly select a small subset of the CIFAR10 test data. This subset of data was then used as input for these models to assess their generalization capability. To quantify this capability, we calculated the \(\alpha\) and \(\beta\) values for these samples and computed their average. Subsequently, we normalized the \(\alpha\) value by dividing it by the maximum value, denoted as \(\alpha^{\prime}\) in Table 2. Similarly, for \(\beta\), we first calculated its absolute value and then normalized it by dividing it by the maximum value, denoted as \(\beta^{\prime}\) in Table 2. Our analysis revealed that as the generalization capability increased, the sum of \(\alpha^{\prime}\) and \(\beta^{\prime}\) also increased, as indicated in the \(\mathcal{G}\) column of Table 2. This trend is further visualized in Fig. 4.
To validate the effectiveness of our proposed method for analyzing generalization, we conducted a series of generalization tests. Specifically, we applied various data corruptions, including adjust_sharpness, adjust_brightness, gaussian_blur, perspective, adjust_hue, and rotate, to the test data. We then evaluated the performance of the models on these distributional shifts. The accuracy values were averaged, and are presented as the "\(Corruption\)" in Table 2. Additionally, we visualized our metric and the accuracy under these corruptions in Fig. 4. As you can observe, our metric aligns with the accuracy of data corruption, demonstrating the effectiveness of our method.
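The named corruptions map directly onto standard torchvision transforms; the severity parameters below are illustrative assumptions, not the exact values used to produce the \(Corruption\) column.

```python
import torchvision.transforms as T

# Fixed-severity versions of the corruptions listed above. To evaluate a
# model, prepend one of these to the usual test preprocessing, measure test
# accuracy, and average the accuracies over the corruptions.
corruptions = {
    "adjust_sharpness": T.RandomAdjustSharpness(sharpness_factor=2.0, p=1.0),
    "adjust_brightness": T.ColorJitter(brightness=(1.5, 1.5)),
    "gaussian_blur": T.GaussianBlur(kernel_size=3),
    "perspective": T.RandomPerspective(distortion_scale=0.4, p=1.0),
    "adjust_hue": T.ColorJitter(hue=(0.2, 0.2)),
    "rotate": T.RandomRotation(degrees=(15.0, 15.0)),
}
```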
The results for the SVHN are available in Appendix B.
SoftMax & Logits. A common method that comes to mind for detecting overfitting is to examine the softmax or logit outputs of the model. This involves calculating the softmax or logit values for the ground truth label of each sample and then averaging these values across all samples. However, as shown in Fig. 3 and Table 1, there are cases where this method fails to detect overfitting. For example, consider models 1 and 2, where the softmax or logit outputs do not indicate overfitting despite evidence of overfitting based on other metrics.
Angles of deeper architectures. During our analysis of deeper architectures on CIFAR100, we discovered that increasing the number of layers in a model (creating a deeper architecture) leads to a greater range of angles the model can achieve. For instance, let's consider a ResNet18 model and a DenseNet121 model, both exhibiting identical training and testing accuracy. In this case, the DenseNet121 model will have a higher sum of \(\alpha\) and \(\beta\). We believe that this is because deeper models strive to attain superior generalization ability while mitigating the issue of overfitting.
The results supporting this observation can be found in Appendix B.
Ablation study. We have analyzed the size of the test set in relation to our proposed methods. This analysis reveals that the size of the test data has a minimal impact on the measures of overfitting and generalizability that we have put forward.
Figure 4: (a) illustrates the average accuracy across 5 different data corruptions for 11 distinct ResNet18 models trained on CIFAR10. Meanwhile, (b) showcases the outcomes of our generalization analysis method, which closely aligns with the accuracy observed for the data corruption.
The evidence substantiating this observation is available in Appendix C.
## 5 Conclusion
This paper addresses the issue of determining if a downloaded or received model has been overfitted without knowledge of its training accuracy or data. Overfitted models are more vulnerable to knowledge leakage, posing privacy risks. The proposed method analyzes the null space in the last layer of neural networks, quantifying overfitting and generalization using only a small subset of the testing data. The approach was evaluated on different architectures and datasets, revealing distinct patterns in the null space angle for overfitted models and poor generalization characteristics. This novel method provides insights into model vulnerability without training data, enhancing privacy and trustworthiness in deep learning models.
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline & \multicolumn{3}{c}{Our Method} & \multicolumn{2}{c}{Confidence} & \multicolumn{3}{c}{Accuracy} \\ \cline{2-9} Model & \(\alpha\) & \(\beta\) & \(\mathcal{O}\) & \(SoftMax\) & \(Logits\) & \(Train\) & \(Test\) & \(Difference\) \\ \hline
1 & 59.61 & -27.28 & **32.32** & **0.8906** & 7.88 & 99.99 & 94.78 & **5.21** \\
2 & 78.82 & -11.55 & **67.27** & **0.8078** & 7.47 & 100.0 & 84.04 & **15.96** \\
3 & 60.58 & -26.42 & 34.17 & 0.8649 & 8.09 & 97.73 & 92.42 & 5.31 \\
4 & 70.47 & -19.00 & 51.47 & 0.8373 & 9.53 & 97.69 & 90.06 & 7.63 \\
5 & 64.72 & -23.68 & 41.04 & 0.8377 & 9.16 & 95.75 & 90.02 & 5.73 \\
6 & 70.32 & -19.40 & 50.92 & 0.8361 & 9.71 & 95.66 & 88.67 & 6.99 \\
7 & 61.75 & -25.13 & 36.62 & 0.8240 & 7.29 & 91.68 & 88.87 & 2.81 \\
8 & 70.24 & -18.70 & 51.54 & 0.7834 & 7.61 & 86.31 & 84.24 & 2.07 \\
9 & 63.77 & -22.17 & 41.60 & 0.7782 & 6.28 & 82.77 & 82.52 & 0.25 \\
10 & 70.10 & -18.08 & 52.02 & 0.7786 & 7.50 & 82.52 & 80.31 & 2.21 \\
11 & 64.87 & -21.18 & 43.68 & 0.7978 & 6.55 & 79.77 & 79.52 & 0.25 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Results of training various ResNet18 models on CIFAR10. The '\(\alpha\)' value indicates the angle between the representation vector and the true class weight vector, while '\(\beta\)' shows the angle between the representation vector and the null space. The sum of '\(\alpha\)' and '\(\beta\)', denoted as '\(\mathcal{O}\)', indicates the degree of overfitting.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline & \multicolumn{3}{c}{Our Method} & \multicolumn{3}{c}{Accuracy} \\ \cline{2-7} Model & \(\alpha^{\prime}\) & \(\beta^{\prime}\) & \(\mathcal{G}\) & \(Train\) & \(Test\) & \(Corruption\) \\ \hline
1 & 0.7563 & 1.0 & 1.7563 & 99.99 & 94.78 & 55.128 \\
2 & 1.0 & 0.4234 & 1.4234 & 100.0 & 84.04 & 35.666 \\
3 & 0.7686 & 0.9685 & 1.7371 & 97.73 & 92.42 & 51.896 \\
4 & 0.8941 & 0.6965 & 1.5906 & 97.69 & 90.06 & 46.446 \\
5 & 0.8211 & 0.8680 & 1.6891 & 95.75 & 90.02 & 45.584 \\
6 & 0.8922 & 0.7111 & **1.6033** & 95.66 & 88.67 & **46.716** \\
7 & 0.7834 & 0.9212 & **1.7046** & 91.68 & 88.87 & **47.51** \\
8 & 0.8911 & 0.6855 & 1.5766 & 86.31 & 84.24 & 40.33 \\
9 & 0.8091 & 0.8127 & 1.6218 & 82.77 & 82.52 & 42.136 \\
10 & 0.8894 & 0.6628 & 1.5522 & 82.52 & 80.31 & 41.76 \\
11 & 0.8230 & 0.7764 & 1.5994 & 79.77 & 79.52 & 45.066 \\ \hline \hline \end{tabular}
\end{table}
Table 2: The variable \(\alpha^{\prime}\) represents the normalized angle between the representation vector and the target weights vector, while \(\beta^{\prime}\) indicates the normalized absolute value of the angle between the representation vector and the null space. The sum of \(\alpha^{\prime}\) and \(\beta^{\prime}\), denoted by \(\mathcal{G}\), provides an indication of the model's generalization capability. The \(Corruption\) column reports the average accuracy across 5 different data corruptions for the 11 distinct ResNet18 models trained on CIFAR10. As demonstrated, our metric aligns with the observed corruption accuracies.
id: 2308.04106
title: Parallel Learning by Multitasking Neural Networks
authors: Elena Agliari, Andrea Alessandrelli, Adriano Barra, Federico Ricci-Tersenghi
published: 2023-08-08T07:43:31Z
link: http://arxiv.org/abs/2308.04106v1
abstract: A modern challenge of Artificial Intelligence is learning multiple patterns at once (i.e. parallel learning). While this can not be accomplished by standard Hebbian associative neural networks, in this paper we show how the Multitasking Hebbian Network (a variation on theme of the Hopfield model working on sparse data-sets) is naturally able to perform this complex task. We focus on systems processing in parallel a finite (up to logarithmic growth in the size of the network) amount of patterns, mirroring the low-storage level of standard associative neural networks at work with pattern recognition. For mild dilution in the patterns, the network handles them hierarchically, distributing the amplitudes of their signals as power-laws w.r.t. their information content (hierarchical regime), while, for strong dilution, all the signals pertaining to all the patterns are raised with the same strength (parallel regime). Further, confined to the low-storage setting (i.e., far from the spin glass limit), the presence of a teacher neither alters the multitasking performances nor changes the thresholds for learning: the latter are the same whatever the training protocol is supervised or unsupervised. Results obtained through statistical mechanics, signal-to-noise technique and Monte Carlo simulations are overall in perfect agreement and carry interesting insights on multiple learning at once: for instance, whenever the cost-function of the model is minimized in parallel on several patterns (in its description via Statistical Mechanics), the same happens to the standard sum-squared error Loss function (typically used in Machine Learning).

# Parallel Learning by Multitasking Neural Networks
###### Abstract
A modern challenge of Artificial Intelligence is learning multiple patterns at once (i.e. _parallel learning_). While this can not be accomplished by standard Hebbian associative neural networks, in this paper we show how the Multitasking Hebbian Network (a variation on theme of the Hopfield model working on sparse data-sets) is naturally able to perform this complex task. We focus on systems processing in parallel a finite (up to logarithmic growth in the size of the network) amount of patterns, mirroring the low-storage level of standard associative neural networks at work with pattern recognition. For mild dilution in the patterns, the network handles them hierarchically, distributing the amplitudes of their signals as power-laws w.r.t. their information content (hierarchical regime), while, for strong dilution, all the signals pertaining to all the patterns are raised with the same strength (parallel regime).
Further, confined to the low-storage setting (i.e., far from the spin glass limit), the presence of a teacher neither alters the multitasking performances nor changes the thresholds for learning: the latter are the same whatever the training protocol is supervised or unsupervised. Results obtained through statistical mechanics, signal-to-noise technique and Monte Carlo simulations are overall in perfect agreement and carry interesting insights on _multiple learning at once_: for instance, whenever the cost-function of the model is minimized _in parallel on several patterns_ (in its description via Statistical Mechanics), the same happens to the standard sum-squared error Loss function (typically used in Machine Learning).
###### Contents
* 1 Introduction
* 2 Parallel learning in multitasking Hebbian neural networks
* 2.1 A preliminary glance at the emergent parallel retrieval capabilities
* 2.2 From parallel storing to parallel learning
* 3 Parallel Learning: the picture by statistical mechanics
* 3.1 Study of the Cost function and its related Statistical Pressure
* 3.1.1 Low-entropy data-sets: the Big-Data limit
* 3.1.2 Ergodicity breaking: the critical phase transition
* 3.2 Stability analysis via standard Hessian: the phase diagram
* 3.2.1 Ergodic state: \(\bar{\mathbf{n}}=\bar{n}_{d,\rho,\beta}(0,\ldots,0)\)
* 3.2.2 Pure state: \(\bar{\mathbf{n}}=\bar{n}_{d,\rho,\beta}(1,0,\ldots,0)\)
* 3.2.3 Parallel state: \(\bar{\mathbf{n}}=\bar{n}_{d,\rho,\beta}(1,\ldots,1)\)
* 3.2.4 Hierarchical state: \(\bar{\mathbf{n}}=\bar{n}_{d,\rho,\beta}((1-d),d(1-d),d^{2}(1-d),...)\)
* 3.3 From the Cost function to the Loss function
* 4 Conclusions
* A A more general sampling scenario
* B On the data-set entropy \(\rho\)
* B.1 I: multitasking Hebbian network equipped with not-affecting-dilution noise
* B.2 II: multitasking Hebbian network equipped with not-preserving-dilution noise
* C Stability analysis: an alternative approach
* C.1 Stability analysis via signal-to-noise technique
* C.2 Evaluation of momenta of the effective post-synaptic potential
* D Explicit Calculations and Figures for the cases \(K=2\) and \(K=3\)
* D.1 \(K=2\)
* D.2 \(K=3\)
* E Proofs
* E.1 Proof of Theorem 1
* E.2 Proof of Proposition 1
## 1 Introduction
Typically, Artificial Intelligence has to deal with several inputs occurring at the same time: for instance, think about automatic driving, where it has to distinguish and react to different objects (e.g., pedestrians, traffic lights, riders, crosswalks) that may appear simultaneously. Likewise, when a biological neural network learns, it is rare that it has to deal with one single input at a time1: for instance, while trained at school to learn each single letter, we are also learning about the composition of our alphabets. In this perspective, when stating that neural networks operate _in parallel_, some caution about a potential ambiguity is in order. To fix ideas, let us focus on the Hopfield model [34], the _harmonic oscillator_ of associative neural networks accomplishing pattern recognition [8; 25]: its neurons indeed operate synergistically in parallel, but with the purpose of retrieving one single pattern at a time, not several simultaneously [8; 10; 36]. A parallel processing where multiple patterns are simultaneously retrieved is not accessible to the standard Hopfield networks as long as each pattern is fully informative, namely as long as its vectorial binary representation is devoid of blank entries. On the other hand, when a fraction of entries can be blank [14], multiple-pattern retrieval is potentially achievable by the network. Intuitively, this can be explained by noticing that the overall number of neurons making up the network (and thus available for information processing) equals the length of the binary vectors codifying the patterns to be retrieved; hence, as long as these vectors contain information in all their entries, there is no free room for dealing with multiple patterns. Conversely, the multitasking neural networks, introduced in [2], are able to overcome this limitation and have been shown to succeed in retrieving multiple patterns simultaneously, just by leveraging the presence of lacunae in the patterns stored by the network. The emerging pattern-recognition properties have been extensively investigated at medium storage (i.e., on random graphs above the percolation threshold) [23], at high storage (i.e., on random graphs below the percolation threshold) [24], as well as on scale-free [44] and hierarchical [3] topologies.
Footnote 1: It is enough to note that, should serial learning take place rather than parallel learning, Pavlov’s Classical Conditioning would not be possible [14].
However, while the study of the parallel retrieval capabilities of these multitasking networks is by now complete, the understanding of their parallel learning capabilities has just started, and it is the main focus of the present paper.
In this regard it is important to stress that the Hebbian prescription has recently been revised to turn it from a storing rule (built on a set of already definite patterns, as in the original Amit-Gutfreund-Sompolinsky (AGS) theory) into a genuine learning rule (where unknown patterns have to be inferred by experiencing solely a sample of their corrupted copies), see e.g., [5; 13; 27]2.
Footnote 2: While Statistical Learning theories appeared in the Literature a long time ago (see e.g. [1; 29; 43] for the original works and [6; 20; 26; 41] for updated references), the statistical mechanics of Hebbian learning was not deepened in these studies.
In this work we merge these extensions of the bare AGS theory and use definite patterns (equipped with blank entries) to generate a sparse data-set of corrupted examples, which is the sole information experienced by the network: we aim to highlight the role of the lacunae density and of the data-set size and quality on the network performance, in particular deepening the way the network learns simultaneously the patterns hidden behind the supplied examples. In this investigation we focus on the low-storage scenario (where the number of definite patterns grows sub-linearly with the volume of the network), addressing both the _supervised_ and the _unsupervised_ setting.
The paper is structured as follows: the main text has three Sections. Beyond this Introduction provided in Section 1, in Section 2 we revise the multi-tasking associative network; once briefly summarized its parallel retrieval capabilities (Sec. 2.1), we introduce a simple data-set the network has to cope with in order to move from the simpler storing of patterns to their learning from examples (Sec. 2.2). Next, in Section 3 we provide an exhaustive statistical mechanical picture of the network's emergent information processing capabilities by taking advantage of Guerra's interpolation techniques: in particular, focusing on the Cost function (Sec. 3.1), we face the _big-data_ limit (Sec. 3.1.1) and we deepen the nature of the phase transition the network undergoes as ergodicity breaking spontaneously takes place (Sec. 3.1.2). Sec. 3.2 is entirely dedicated to provide phase diagrams (namely plots in the space of the control parameters where different regions depict different global computational capabilities). Further, before reaching conclusions and outlooks as reported in Sec. 4, in Sec. 3.3 we show how the network's Cost function (typically used in Statistical Mechanics) can be sharply related to standard Loss functions (typically used in Machine Learning) to appreciate how parallel learning effectively lowers several Loss functions at once.
In the Appendices we fix a number of subtleties: in Appendix A we provide a more general setting for the sparse data-sets considered in this research3, while in Appendix B we inspect the relative entropies of these data-sets and, finally, in Appendix C we provide a revised version of the Signal-to-Noise technique (that allows us to evaluate computational shortcuts, beyond providing an alternative route to obtain the phase diagrams). Appendices D and E give details on calculations, plots and proofs of the main theorems.
Footnote 3: In the main text we face the simplest kind of pattern dilution, namely we simply force a fixed fraction of entries to be blank, with their positions preserved in the generation of the data-sets (hence, wherever the pattern has a zero, all the examples it gives rise to keep that zero), while in the appendix we relax this assumption (blank entries can move across the examples while preserving their amount). As the theory is robust w.r.t. these structural details in the thermodynamic limit, we present the simplest setting as the main theme and the more cumbersome one in Appendix A.
## 2 Parallel learning in multitasking Hebbian neural networks
### A preliminary glance at the emergent parallel retrieval capabilities
Hereafter, for the sake of completeness, we briefly review the retrieval properties of the multitasking Hebbian network in the low-storage regime, while we refer to [2; 4] for an extensive treatment.
**Definition 1**.: _Given \(N\) Ising neurons \(\sigma_{i}=\pm 1\) (\(i=1,...,N\)), and \(K\) random patterns \(\mathbf{\xi}^{\mu}\) (\(\mu=1,...,K\)), each of length \(N\), whose entries are i.i.d. from_
\[\mathbb{P}(\xi_{i}^{\mu})=\frac{(1-d)}{2}\delta_{\xi_{i}^{\mu},-1}+\frac{(1-d)}{2}\delta_{\xi_{i}^{\mu},+1}+d\delta_{\xi_{i}^{\mu},0}, \tag{2.1}\]
_where \(\delta_{i,j}\) is the Kronecker delta and \(d\in[0,1]\), the Hamiltonian (or cost function) of the system reads as_
\[\mathcal{H}_{N}(\mathbf{\sigma}|\mathbf{\xi}):=-\frac{1}{2N}\sum_{\begin{subarray}{c}i,j\\ i\neq j\end{subarray}}^{N,N}\left(\sum_{\mu=1}^{K}\xi_{i}^{\mu}\xi_{j}^{\mu}\right)\sigma_{i}\sigma_{j}. \tag{2.2}\]
The parameter \(d\) tunes the "dilution" in pattern entries: if \(d=0\) the standard Rademacher setting of AGS theory is recovered, while for \(d=1\) no information is retained in these patterns: otherwise stated, these vectors display, on average, a fraction \(d\) of blank entries.
**Definition 2**.: _In order to assess the network retrieval performance we introduce the \(K\) Mattis magnetizations_
\[m_{\mu}:=\frac{1}{N}\sum_{i}^{N}\xi_{i}^{\mu}\sigma_{i},\ \mu=1,...,K, \tag{2.3}\]
_which quantify the overlap between the generic neural configuration \(\mathbf{\sigma}\) and the \(\mu^{th}\) pattern._
Note that the cost function (2.2) can be recast as a quadratic form in \(m_{\mu}\), namely
\[\mathcal{H}_{N}(\mathbf{\sigma}|\mathbf{\xi})=-\frac{N}{2}\sum_{\mu}m_{\mu}^{2}+\frac{K}{2}, \tag{2.4}\]
where the term \(K/2\) in the r.h.s. stems from the diagonal terms (\(i=j\)), which are excluded in (2.2) but included in the sums defining the \(m_{\mu}^{2}\)'s; in the low-load scenario (i.e., \(K\) grows sub-linearly with \(N\)) this term can be neglected in the thermodynamic limit (\(N\to\infty\)).
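For concreteness, a minimal numerical sketch of this setup (Python with numpy; the library, the seed and all sizes are our illustrative choices) samples diluted patterns according to Eq. (2.1) and evaluates the cost function through its Mattis form (2.4):

```python
# Illustrative sketch: sample K diluted patterns (Eq. 2.1) and evaluate the
# cost function via the Mattis magnetizations (Eqs. 2.3-2.4).
import numpy as np

rng = np.random.default_rng(0)
N, K, d = 1000, 3, 0.3

# Entries are -1/+1 with probability (1-d)/2 each and 0 with probability d.
xi = rng.choice([-1, 0, 1], size=(K, N), p=[(1 - d) / 2, d, (1 - d) / 2])
sigma = rng.choice([-1, 1], size=N)      # a generic neural configuration

m = xi @ sigma / N                       # Mattis magnetizations, Eq. (2.3)
H = -N / 2 * np.sum(m**2) + K / 2        # cost function recast as Eq. (2.4)
print(np.round(m, 3), H)
```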
As we are going to explain, the dilution ruled by \(d\) is pivotal for the network in order to perform parallel processing. It is instructive to first consider a toy model handling just \(K=2\) patterns: let us assume, for simplicity, that the first pattern \(\mathbf{\xi}^{1}\) contains information (i.e., no blank entries) solely in the first half of its entries and the second pattern \(\mathbf{\xi}^{2}\) contains information solely in the second half of its entries, that is
\[\mathbf{\xi}^{1}=(\underbrace{\xi_{1}^{1},...,\xi_{N/2}^{1}}_{\in\{-1,+1\}^{\frac{N}{2}}},\underbrace{0,...,0}_{\in\{0\}^{\frac{N}{2}}}),\quad\mathbf{\xi}^{2}=(\underbrace{0,...,0}_{\in\{0\}^{\frac{N}{2}}},\underbrace{\xi_{N/2+1}^{2},...,\xi_{N}^{2}}_{\in\{-1,+1\}^{\frac{N}{2}}}) \tag{2.5}\]
Unlike the standard Hopfield reference (\(d=0\)), where the retrieval of one pattern employs all the resources and there is no chance to retrieve any other pattern, not even partially (i.e., as \(m_{1}\to 1\) then \(m_{2}\approx 0\) because patterns are orthogonal for large \(N\) values in the standard random setting), here neither \(m_{1}\) nor \(m_{2}\) can reach the value \(1\) and therefore the complete retrieval of one of the two still leaves resources for the retrieval of the other. In this particular case, the minimization of the cost function \(\mathcal{H}_{N}(\mathbf{\sigma}|\mathbf{\xi})=-\frac{N}{2}\left(m_{1}^{2}+m_{2}^{2}\right)\) is optimal when _both_ the magnetizations are equal to one-half, that is, when they both saturate their upper bound. In general, for arbitrary dilution level \(d\), the minimization of the cost function requires the network to be in one of the following regimes
* _hierarchical scenario_: for values of dilution not too high (i.e., \(d<d_{c}\), _vide infra_), one of the two patterns is fully retrieved (say \(m_{1}\approx 1-d\)) and the other is retrieved to the largest extent given the available resources, these being constituted by, approximately, the \(Nd\) neurons corresponding to the blank entries in \(\mathbf{\xi}^{1}\) (thus, \(m_{2}\approx d(1-d)\)), and so on if further patterns are considered.
* _parallel scenario_: for large values of dilution (i.e., above a critical threshold \(d_{c}\)), the magnetizations related to all the patterns raise and the signals they convey share the same amplitude.
In general, in this type of neural network, the _pure state ansatz_4 \(\mathbf{m}=(1,0,0,...,0)\), that is \(\sigma_{i}=\xi_{i}^{1}\) for \(i=1,...,N\), barely works and parallel retrieval is often favored. In fact, for \(K\geq 2\), at relatively low values of pattern dilution (i.e., \(d<d_{1}\)) and in the zero-noise limit \(\beta\to\infty\), one can prove the validity of the so-called _hierarchical ansatz_ [2], as we briefly discuss: one pattern, say \(\mathbf{\xi}^{1}\), is perfectly retrieved and displays a Mattis magnetization \(m_{1}\approx(1-d)\); a fraction \(d\) of neurons is not involved and is therefore available for further retrieval, with any remaining pattern, say \(\mathbf{\xi}^{2}\), which yields \(m_{2}\sim(1-d)d\); proceeding iteratively, one finds \(m_{\ell}=d^{\ell-1}(1-d)\) for \(\ell=1,...,\hat{K}\) and the overall number \(\hat{K}\) of patterns
simultaneously retrieved corresponds to the employment of all the resources. Specifically, \(\hat{K}\) can be estimated by setting \(\sum_{\ell=0}^{\hat{K}-1}(1-d)d^{\ell}=1\), with the cut-off at finite \(N\) as \((1-d)d^{\hat{K}-1}\geq N^{-1}\), due to discreteness: for any fixed and finite \(d\), this implies \(\hat{K}\lesssim\log N\), which can be thought of as a "parallel low-storage" regime of neural networks. It is worth stressing that, in the above mentioned regime of low dilution, the configuration leading to \(m_{\ell}=d^{\ell-1}(1-d)\) for \(\ell=1,...,\hat{K}\) is the one which minimizes the cost function. The hierarchical retrieval state \(\mathbf{m}=(1-d)\left(1,d,d^{2},d^{3},\cdots\right)\) can also be specified in terms of neural configuration as [2]
\[\sigma_{i}^{*}=\xi_{i}^{1}+\sum_{\nu=2}^{\hat{K}}\xi_{i}^{\nu}\prod_{\rho=1}^{\nu-1}\delta_{\xi_{i}^{\rho},0}\,. \tag{2.6}\]
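A short sketch (numpy assumed; sizes illustrative) of this hierarchical configuration, checking the magnetization ladder \(m_{\ell}\approx d^{\ell-1}(1-d)\):

```python
# Illustrative sketch of the hierarchical state of Eq. (2.6): each neuron copies
# the first pattern that is non-blank at its site, so the overlaps follow the
# ladder m_l ~ d^(l-1)(1-d).
import numpy as np

rng = np.random.default_rng(1)
N, K, d = 100000, 4, 0.3
xi = rng.choice([-1, 0, 1], size=(K, N), p=[(1 - d) / 2, d, (1 - d) / 2])

sigma = np.zeros(N, dtype=int)
for mu in range(K):                  # fill the blanks left by previous patterns
    free = sigma == 0
    sigma[free] = xi[mu, free]
sigma[sigma == 0] = 1                # sites blank in every pattern: set arbitrarily

print("measured :", np.round(xi @ sigma / N, 3))
print("predicted:", np.round((1 - d) * d ** np.arange(K), 3))
```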
This organization is stable until a critical dilution level \(d_{c}\) is reached where \(m_{1}\sim\sum_{k>1}m_{k}\) [2]; beyond that level the network undergoes a rearrangement and a new organization called _parallel ansatz_ supplants the previous one. Indeed, for high values of dilution (i.e., \(d\to 1\)) it is immediate to check that the ratio among the various intensities of all the magnetizations stabilizes to the value one, i.e. \((m_{k}/m_{k-1})\sim d^{k-1}(1-d)/d^{k-2}(1-d)\to 1\); hence, in this regime all the magnetizations are raised with the same strength and the network is operationally set in a fully parallel retrieval mode: the parallel retrieval state simply reads \(\mathbf{m}=(\bar{m})\left(1,1,1,1,\cdots\right)\). This picture is confirmed by the plots shown in Fig. 1, obtained by solving the self-consistency equations for the Mattis magnetizations related to the multitasking Hebbian network equipped with \(K=2\) patterns, that read as [2]
\[m_{1} = d(1-d)\tanh(\beta m_{1})+\frac{(1-d)^{2}}{2}\left\{\tanh[\beta(m_{1}+m_{2})]+\tanh[\beta(m_{1}-m_{2})]\right\}, \tag{2.7}\] \[m_{2} = d(1-d)\tanh(\beta m_{2})+\frac{(1-d)^{2}}{2}\left\{\tanh[\beta(m_{1}+m_{2})]-\tanh[\beta(m_{1}-m_{2})]\right\} \tag{2.8}\]
where \(\beta\in\mathbb{R}^{+}\) denotes the level of noise.
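These self-consistencies are easily solved by plain fixed-point iteration; the sketch below (numpy assumed; starting point and parameters are illustrative) reproduces the serial, hierarchical and parallel solutions displayed in Fig. 1:

```python
# Illustrative fixed-point solver for the K=2 self-consistencies (2.7)-(2.8).
import numpy as np

def solve_K2(d, beta, m0=(0.9, 0.1), iters=2000):
    m1, m2 = m0
    t = np.tanh
    for _ in range(iters):
        m1, m2 = (
            d*(1-d)*t(beta*m1) + (1-d)**2/2*(t(beta*(m1+m2)) + t(beta*(m1-m2))),
            d*(1-d)*t(beta*m2) + (1-d)**2/2*(t(beta*(m1+m2)) - t(beta*(m1-m2))),
        )
    return m1, m2

for d in (0.05, 0.4, 0.8):           # serial, hierarchical and parallel regimes
    print(d, np.round(solve_K2(d, beta=50.0), 3))
```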
Figure 1: Numerical solutions of the two self-consistent equations (2.7) and (2.8) obtained for \(K=2\), see [2], as a function of \(d\) and for different choices of \(\beta\): in the \(d\to 0\) limit the Hopfield serial retrieval is recovered (one magnetization with intensity one and the other locked at zero), for \(d\to 1\) the network ends up in the parallel regime (where all the magnetizations acquire the same value), while for intermediate values of dilution the hierarchical ordering prevails (both the magnetizations are raised, but their amplitude is different).

We remark that these hierarchical or parallel organizations of the retrieval, beyond emerging naturally within the equilibrium description provided by Statistical Mechanics, are actually the real stationary states of the dynamics of these networks at work with diluted patterns, as shown in Figure 2.
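A minimal sketch of such a dynamics (zero-noise sequential updates on the Hebbian couplings; numpy assumed, sizes illustrative):

```python
# Illustrative zero-noise Monte Carlo (sequential alignment to the local field)
# on the storing Hamiltonian (2.2), started close to the first pattern.
import numpy as np

rng = np.random.default_rng(2)
N, K, d = 2000, 3, 0.8
xi = rng.choice([-1, 0, 1], size=(K, N), p=[(1 - d) / 2, d, (1 - d) / 2])
J = xi.T @ xi / N                    # Hebbian couplings
np.fill_diagonal(J, 0.0)

sigma = np.where(xi[0] != 0, xi[0], 1)
for sweep in range(20):
    for i in rng.permutation(N):
        h = J[i] @ sigma             # local post-synaptic field
        if h != 0:
            sigma[i] = int(np.sign(h))
print("stationary magnetizations:", np.round(xi @ sigma / N, 3))
```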
### From parallel storing to parallel learning
In this section we revise the multitasking Hebbian network [2; 4] in such a way that it can undergo a _learning_ process instead of a simple _storing_ of patterns. In fact, in the typical learning setting, the set of definite patterns, hereafter promoted to play as "archetypes", to be reconstructed by the network is not available, rather, the network is exposed to examples, namely noisy versions of these archetypes.
As long as enough examples are provided to the network, it is expected to correctly form its own representation of the archetypes such that, upon further exposure to a new example related to a certain archetype, it will be able to retrieve it and, from then on, suitably generalize it.
This generalized Hebbian kernel has recently been introduced to encode unsupervised [5] and supervised [13] learning processes and, in the present paper, these learning rules are modified in order to deal with diluted patterns.
Figure 2: We report two examples of Monte Carlo dynamics until thermalization within the hierarchical (upper plots, dilution level \(d=0.2\)) and parallel (lower plots, dilution level \(d=0.8\)) scenarios respectively. These plots confirm that the picture provided by statistical mechanics is actually dynamically reached by the network. We initialize the network sharply in a pattern as a Cauchy condition (represented as the dotted blue Dirac delta peaked at the pattern in the second column) and, in the first column, we show the stationary values of the various Mattis magnetizations pertaining to different patterns, while in the second column we report their histograms achieved by sampling 1000 independent Monte Carlo simulations: starting from a sequential retrieval regime, the network ends up in a multiple retrieval mode, hierarchical vs parallel depending on the level of dilution in the patterns.

First, let us define the data-set these networks have to cope with: the archetypes are randomly drawn from the distribution (2.1). Each archetype \(\mathbf{\xi}^{\mu}\) is then used to generate a set of \(M_{\mu}\) perturbed versions, denoted as \(\mathbf{\eta}^{\mu,a}\) with \(a=1,...,M_{\mu}\) and \(\mathbf{\eta}^{\mu,a}\in\{-1,0,+1\}^{N}\). Thus, the overall set of examples to be supplied to the network is given by \(\mathbf{\eta}=\{\mathbf{\eta}^{\mu,a}\}_{\mu=1,...,K}^{a=1,...,M_{\mu}}\). Of course, different ways to sample examples are conceivable: for instance, one can require that the position of blank entries appearing in \(\mathbf{\xi}^{\mu}\) is preserved over all the examples \(\{\mathbf{\eta}^{\mu,a}\}_{a=1,...,M_{\mu}}\), or one can require that only the number of blank entries \(\sum_{i=1}^{N}\delta_{\xi_{i}^{\mu},0}\) is preserved (either strictly or on average). Here we face the first case because it requires a simpler notation, but we refer to Appendix A for a more general treatment.
**Definition 3**.: _The entries of each example are drawn according to_
\[\mathbb{P}(\eta_{i}^{\mu,a}|\xi_{i}^{\mu})=\frac{1+r_{\mu}}{2}\delta_{\eta_{i}^{\mu,a},\xi_{i}^{\mu}}+\frac{1-r_{\mu}}{2}\delta_{\eta_{i}^{\mu,a},-\xi_{i}^{\mu}}, \tag{2.9}\]
_for \(i=1,\ldots,N\) and \(\mu=1,\ldots,K\). Notice that \(r_{\mu}\) tunes the data-set quality: as \(r_{\mu}\to 1\) examples belonging to the \(\mu\)-th set collapse on the archetype \(\mathbf{\xi}^{\mu}\), while as \(r_{\mu}\to 0\) examples turn out to be uncorrelated with the related archetype \(\mathbf{\xi}^{\mu}\)._
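A minimal sketch (numpy assumed; sizes illustrative) of this dilution-preserving sampling:

```python
# Illustrative generation of the data-set of Definition 3: each example keeps the
# blank entries of its archetype and flips every non-blank entry w.p. (1-r)/2.
import numpy as np

rng = np.random.default_rng(3)
N, K, d, M, r = 1000, 2, 0.3, 50, 0.2
xi = rng.choice([-1, 0, 1], size=(K, N), p=[(1 - d) / 2, d, (1 - d) / 2])

chi = np.where(rng.random((K, M, N)) < (1 + r) / 2, 1, -1)
eta = xi[:, None, :] * chi           # eta^{mu,a}_i, Eq. (2.9); blanks stay blank
rho = (1 - r**2) / (M * r**2)        # control parameter defined in the next paragraph
print("rho =", rho)
```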
As we will show in the next sections, the behavior of the system depends on the parameters \(M_{\mu}\) and \(r_{\mu}\) only through the combination \(\frac{1-r_{\mu}^{2}}{M_{\mu}r_{\mu}^{2}}\), therefore, as long as the ratio \(\frac{1-r_{\mu}^{2}}{M_{\mu}r_{\mu}^{2}}\) is \(\mu\)-independent, the theory shall not be affected by the specific choice of the archetype. Thus, for the sake of simplicity, hereafter we will consider \(r\) and \(M\) independent of \(\mu\) and we will pose \(\rho:=\frac{1-r^{2}}{Mr^{2}}\). Remarkably, \(\rho\) plays as an information-content control parameter [13]: to see this, let us focus on the \(\mu\)-th pattern and \(i\)-th digit, whose related block is \(\mathbf{\eta}_{i}^{\mu}=(\eta_{i}^{\mu,1},\eta_{i}^{\mu,2},\ldots,\eta_{i}^{\mu,M})\); the error probability for any single entry is \(\mathcal{P}(\xi_{i}^{\mu}\neq 0)\mathcal{P}(\eta_{i}^{\mu,a}\neq\xi_{i}^{\mu})=(1-d)(1-r_{\mu})/2\) and, by applying the majority rule on the block, we get \(\mathcal{P}(\xi_{i}^{\mu}\neq 0)\mathcal{P}(\mathrm{sign}(\sum\limits_{a}\eta_{i}^{\mu,a})\xi_{i}^{\mu}=-1)\underset{M\gg 1}{\approx}\frac{(1-d)}{2}\left[1-\mathrm{erf}\left(1/\sqrt{2\rho}\right)\right]\); thus, by computing the conditional entropy \(H_{d}(\xi_{i}^{\mu}|\mathbf{\eta}_{i}^{\mu})\), that quantifies the amount of information needed to describe the original message \(\xi_{i}^{\mu}\) given the related block \(\mathbf{\eta}_{i}^{\mu}\), we get
\[H_{d}(\xi_{i}^{\mu}|\mathbf{\eta}_{i}^{\mu}) = -\left[\frac{1+d}{2}+\frac{1-d}{2}\mathrm{erf}\left(\frac{1}{ \sqrt{2\rho}}\right)\right]\ \log\left[\frac{1+d}{2}+\frac{1-d}{2}\mathrm{erf}\left(\frac{1}{ \sqrt{2\rho}}\right)\right]\] \[-\left[\frac{1-d}{2}-\frac{1-d}{2}\mathrm{erf}\left(\frac{1}{ \sqrt{2\rho}}\right)\right]\ \log\left[\frac{1-d}{2}-\frac{1-d}{2}\mathrm{erf}\left(\frac{1}{ \sqrt{2\rho}}\right)\right]\]
which is monotonically increasing with \(\rho\). Therefore, with a slight abuse of language, in the following \(\rho\) shall be referred to as _data-set entropy_.
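For concreteness, a small numerical sketch (numpy and scipy assumed) of this conditional entropy as a function of \(\rho\):

```python
# Illustrative evaluation of H_d(xi|eta-block): monotonically increasing in rho.
import numpy as np
from scipy.special import erf

def H_d(rho, d):
    p = (1 + d) / 2 + (1 - d) / 2 * erf(1 / np.sqrt(2 * rho))
    q = (1 - d) / 2 * (1 - erf(1 / np.sqrt(2 * rho)))     # note p + q = 1
    return -p * np.log(p) - q * np.log(q)

for rho in (0.01, 0.1, 1.0, 10.0):
    print(rho, H_d(rho, d=0.3))
```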
The available information is allocated directly in the synaptic coupling among neurons (as in the standard Hebbian storing), as specified by the following supervised and unsupervised generalization of the multitasking Hebbian network:
**Definition 4**.: _Given \(N\) binary neurons \(\sigma_{i}=\pm 1\), with \(i\in(1,...,N)\), the cost function (or Hamiltonian) of the multitasking Hebbian neural network in the supervised regime is_
\[\mathcal{H}_{N,K,d,M,r}^{(sup)}(\mathbf{\sigma}|\mathbf{\eta})=-\frac{1}{2N}\frac{1}{(1-d)(1+\rho)}\sum_{\mu=1}^{K}\sum_{i,j=1}^{N,N}\left(\frac{1}{Mr}\sum_{a=1}^{M}\eta_{i}^{\mu,a}\right)\left(\frac{1}{Mr}\sum_{b=1}^{M}\eta_{j}^{\mu,b}\right)\sigma_{i}\sigma_{j}. \tag{2.11}\]
**Definition 5**.: _Given \(N\) binary neurons \(\sigma_{i}=\pm 1\), with \(i\in(1,...,N)\), the cost function (or Hamiltonian) of the multitasking Hebbian neural network in the unsupervised regime is_
\[\mathcal{H}_{N,K,d,M,r}^{(unsup)}(\mathbf{\sigma}|\mathbf{\eta})=-\frac{1}{2N}\frac{1}{(1-d)(1+\rho)}\sum_{\mu=1}^{K}\sum_{i,j=1}^{N,N}\left(\frac{1}{Mr^{2}}\sum_{a=1}^{M}\eta_{i}^{\mu,a}\eta_{j}^{\mu,a}\right)\sigma_{i}\sigma_{j}. \tag{2.12}\]
**Remark 1**.: _The factor \((1-d)(1+\rho)\) appearing in (2.11) corresponds to \(\mathbb{E}_{\xi}\mathbb{E}_{(\eta|\xi)}\left[\sum\limits_{a}\eta_{i}^{\mu,a}/(Mr)\right]^{2}\) and it acts as a normalization factor. A similar factor is also inserted in (2.12)._
**Remark 2**.: _By direct comparison between (2.11) and (2.12), the role of the "teacher" in the supervised setting is evident: in the unsupervised scenario, the network has to handle all the available examples regardless of their archetype label, while in the supervised counterpart a teacher has previously grouped examples belonging to the same archetype together (whence the double sum on \(a=(1,...,M)\) and on \(b=(1,...,M)\) appearing in eq. (2.11), that is missing in eq. (2.12))._
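A sketch (numpy assumed) of the two kernels, built from an example tensor \(\eta\) of shape \((K,M,N)\) as generated above (the function layout is our own illustrative choice), so that \(\mathcal{H}=-\sum_{i,j}J_{ij}\sigma_{i}\sigma_{j}\):

```python
# Illustrative construction of the supervised (2.11) and unsupervised (2.12)
# Hebbian couplings from the examples.
import numpy as np

def couplings(eta, d, r, supervised=True):
    K, M, N = eta.shape
    rho = (1 - r**2) / (M * r**2)
    norm = 2 * N * (1 - d) * (1 + rho)           # common prefactor
    J = np.zeros((N, N))
    for mu in range(K):
        if supervised:                           # the teacher groups examples first
            avg = eta[mu].mean(axis=0) / r       # (1/Mr) sum_a eta^{mu,a}
            J += np.outer(avg, avg)
        else:                                    # one outer product per example
            J += sum(np.outer(e, e) for e in eta[mu]) / (M * r**2)
    return J / norm
```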
We investigate the model within a canonical framework: we introduce the Boltzmann-Gibbs measure
\[\mathcal{P}^{(sup,unsup)}_{N,K,\beta,d,M,r}(\mathbf{\sigma}|\mathbf{\eta}):=\frac{ \exp[-\beta\mathcal{H}^{(sup,unsup)}_{N,K,d,M,r}(\mathbf{\sigma}|\mathbf{\eta})]}{ \mathcal{Z}^{(sup,unsup)}_{N,K,\beta,d,M,r}(\mathbf{\eta})}, \tag{2.13}\]
where
\[\mathcal{Z}^{(sup,unsup)}_{N,K,\beta,d,M,r}(\mathbf{\eta}):=\sum_{\mathbf{\sigma}} \exp\left[-\beta\mathcal{H}^{(sup,unsup)}_{N,K,d,M,r}(\mathbf{\sigma}|\mathbf{\eta})\right] \tag{2.14}\]
is the normalization factor, also referred to as partition function, and the parameter \(\beta\in\mathbb{R}^{+}\) rules the broadness of the distribution in such a way that for \(\beta\to 0\) (infinite noise limit) all the \(2^{N}\) neural configurations are equally likely, while for \(\beta\to\infty\) the distribution is delta-peaked at the configurations corresponding to the minima of the Cost function.
The average performed over the Boltzmann-Gibbs measure is denoted as
\[\omega^{(sup,unsup)}_{N,K,\beta,d,M,r}[\cdot]:=\sum_{\mathbf{\sigma}}^{2^{N}}\, \cdot\,\mathcal{P}^{(sup,unsup)}_{N,K,\beta,d,M,r}(\mathbf{\sigma}|\mathbf{\eta}). \tag{2.15}\]
Beyond this average, we shall also take the so-called _quenched_ average, that is the average over the realizations of archetypes and examples, namely over the distributions (2.1) and (2.9), and this is denoted as
\[\mathbb{E}[\cdot]=\mathbb{E}_{\xi}\mathbb{E}_{(\eta|\xi)}[\cdot]. \tag{2.16}\]
**Definition 6**.: _The quenched statistical pressure of the network at finite network size \(N\) reads as_
\[\mathcal{A}^{(sup,unsup)}_{N,K,\beta,d,M,r}=\frac{1}{N}\mathbb{E}\log\mathcal{ Z}^{(sup,unsup)}_{N,K,\beta,d,M,r}(\mathbf{\eta}). \tag{2.17}\]
_In the thermodynamic limit we pose_
\[\mathcal{A}^{(sup,unsup)}_{K,\beta,d,M,r}=\lim_{N\to\infty}\mathcal{A}^{(sup,unsup)}_{N,K,\beta,d,M,r}. \tag{2.18}\]
_We recall that the statistical pressure equals the free energy times \(-\beta\) (hence they convey the same information content)._
**Definition 7**.: _The network capabilities can be quantified by introducing the following order parameters, for \(\mu=1,\ldots,K\),_
\[m_{\mu} :=\frac{1}{N}\sum_{i=1}^{N}\xi_{i}^{\mu}\sigma_{i},\] \[n_{\mu,a} :=\frac{1}{(1+\rho)r}\frac{1}{N}\sum_{i=1}^{N}\eta_{i}^{\mu,a} \sigma_{i}, \tag{2.19}\] \[n_{\mu} :=\frac{1}{M}\sum_{a=1}^{M}n_{\mu,a}=\frac{1}{(1+\rho)r}\frac{1}{ NM}\sum_{i,a=1}^{N,M}\eta_{i}^{\mu,a}\sigma_{i},\]
We stress that, beyond the fairly standard \(K\) Mattis magnetizations \(m_{\mu}\), which assess the alignment of the neural configuration \(\mathbf{\sigma}\) with the archetype \(\mathbf{\xi}^{\mu}\), we need to introduce also \(K\) empirical Mattis magnetizations \(n_{\mu}\), which compare the alignment of the neural configuration with the average of the examples labelled with \(\mu\), as well as \(K\times M\) single-example Mattis magnetizations \(n_{\mu,a}\), which measure the proximity between the neural configuration and a specific example. An intuitive way to see the suitability of the \(n_{\mu}\)'s and of the \(n_{\mu,a}\)'s is by noticing that the cost functions \(\mathcal{H}^{(sup)}\) and \(\mathcal{H}^{(unsup)}\) can be written as a quadratic form in, respectively, \(n_{\mu}\) and \(n_{\mu,a}\); on the other hand, the \(m_{\mu}\)'s do not appear therein explicitly as the archetypes are unknowns in principle.
Finally, notice that no spin-glass order parameter is needed here (since we are working in the low-storage regime [8; 25]).
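These order parameters are immediate to evaluate numerically; a minimal sketch (numpy assumed) reads:

```python
# Illustrative evaluation of the order parameters of Definition 7 for a
# configuration sigma, archetypes xi of shape (K, N) and examples eta of
# shape (K, M, N).
import numpy as np

def order_parameters(sigma, xi, eta, r):
    K, M, N = eta.shape
    rho = (1 - r**2) / (M * r**2)
    m = xi @ sigma / N                           # Mattis magnetizations
    n_mu_a = (eta @ sigma) / (N * r * (1 + rho)) # single-example overlaps
    n_mu = n_mu_a.mean(axis=1)                   # empirical Mattis magnetizations
    return m, n_mu, n_mu_a
```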
## 3 Parallel Learning: the picture by statistical mechanics
### Study of the Cost function and its related Statistical Pressure
To inspect the emergent capabilities of these networks, we need to estimate the order parameters introduced in Equations (2.19) and analyze their behavior versus the control parameters \(K,\beta,d,M,r\). To this task we need an explicit expression of the statistical pressure in terms of these order parameters so to extremize the former over the latter. In this Section we carry on this investigation in the thermodynamic limit and in the low storage scenario by relying upon Guerra's interpolating techniques (see e.g., [30; 17; 31; 32]): the underlying idea is to introduce an interpolating statistical pressure whose extrema are the original model (which is the target of our investigation, but which we may be unable to address directly) and a simple one (which is usually a one-body model that we can solve exactly). We then start by evaluating the solution of the latter and next we propagate the obtained solution back to the original model by the fundamental theorem of calculus, integrating on the interpolating variable. Usually, in this last passage, one assumes replica symmetry, namely that the order-parameter fluctuations are negligible in the thermodynamic limit, as this makes the integral propagating the solution analytical. In the low-load scenario replica symmetry holds exactly, making the following calculation rigorous. In fact, as long as \(K/N\to 0\) while \(N\to\infty\), the order parameters self-average around their means [19; 46], that will be denoted by a bar, that is
\[\lim_{N\to\infty}\mathcal{P}_{N,K,\beta,d,M,r}(m_{\mu}) = \delta\left(m_{\mu}-\bar{m}_{\mu}\right),\quad\forall\mu\in(1,...,K), \tag{3.1}\] \[\lim_{N\to\infty}\mathcal{P}_{N,K,\beta,d,M,r}(n_{\mu}) = \delta\left(n_{\mu}-\bar{n}_{\mu}\right),\quad\forall\mu\in(1,...,K), \tag{3.2}\]
where \(\mathcal{P}_{N,K,\beta,d,M,r}\) denotes the Boltzmann-Gibbs probability distribution for the observables considered. We anticipate that the mean values of these distributions are independent of the training (either supervised or unsupervised) underlying the Hebbian kernel.
Before proceeding, we slightly revise the partition functions (2.14) by inserting an extra term in their exponents, because it allows us to apply the functional-generator technique to evaluate the Mattis magnetizations. This implies the following modification, respectively in the supervised and unsupervised settings, of the partition function
**Definition 8**.: _Given the interpolating parameter \(t\in[0,1]\), the auxiliary field \(J\) and the constants \(\{\psi_{\mu}\}_{\mu=1,...,K}\in\mathbb{R}\) to be set a posteriori, Guerra's interpolating partition function for the supervised
_and unsupervised multitasking Hebbian networks is given, respectively, by_
\[\mathcal{Z}^{(sup)}_{N,K,\beta,d,M,r}(\mathbf{\eta};J,t)=\sum_{\{\mathbf{ \sigma}\}}\int\,d\mu(z_{\mu})\exp\Bigg{[}J\sum_{\mu,i}\xi_{i}^{\mu}\sigma_{i}+ \frac{t\beta N(1+\rho)}{2(1-d)}\sum_{\mu}n_{\mu}^{2}(\mathbf{\sigma})+(1-t)\frac{N }{2}\sum_{\mu}\psi_{\mu}\,n_{\mu}(\mathbf{\sigma})\Bigg{]}. \tag{10}\] \[\mathcal{Z}^{(unsup)}_{N,K,\beta,d,M,r}(\mathbf{\eta};J,t)=\sum_{\{ \mathbf{\sigma}\}}\int\,d\mu(z_{\mu})\exp\Bigg{[}J\sum_{\mu,i}\xi_{i}^{\mu}\sigma_{ i}+\frac{t\beta N(1+\rho)}{2(1-d)M}\sum_{\mu=1}^{K}\sum_{a=1}^{M}n_{\mu,a}^{2}( \mathbf{\sigma})+(1-t)N\sum_{\mu,a}\psi_{\mu}\,n_{\mu,a}(\mathbf{\sigma})\Bigg{]}. \tag{11}\]
More precisely, we added the term \(J\sum_{\mu}\sum_{i}\xi_{i}^{\mu}\sigma_{i}\) that allows us to "generate" the expectation of the Mattis magnetization \(m_{\mu}\) by evaluating the derivative w.r.t. \(J\) of the quenched statistical pressure at \(J=0\). This operation is not necessary for _Hebbian storing_, where the Mattis magnetization is a natural order parameter (the Hopfield Hamiltonian can be written as a quadratic form in \(m_{\mu}\), as standard in AGS theory [8]), while for _Hebbian learning_ (whose cost function can be written as a quadratic form in \(n_{\mu}\), not in \(m_{\mu}\), as the network does not experience directly the archetypes) we need such a term, for otherwise the expectation of the Mattis magnetization would not be accessible. This operation gets redundant in the \(M\to\infty\) limit, where \(m_{\mu}\) and \(n_{\mu}\) become proportional by a standard Central Limit Theorem (CLT) argument (see also Sec. 3.1.1 and [13]). Clearly, \(\mathcal{Z}^{(sup,unsup)}_{N,K,\beta,d,M,r}(\mathbf{\eta})=\lim_{J\to 0}\mathcal{Z}^{(sup,unsup)}_{N,K,\beta,d,M,r}(\mathbf{\eta};J)\) and these generalized interpolating partition functions, provided in eq.s (10) and (11) respectively, recover the original models when \(t=1\), while they return a simple one-body model at \(t=0\).
Conversely, the role of the \(\psi_{\mu}\)'s is that of mimicking, as closely as possible, the true post-synaptic field perceived by the neurons.
These partition functions can be used to define a generalized measure and a generalized Boltzmann-Gibbs average that we indicate by \(\omega_{t}^{(sup,unsup)}[\cdot]\). Of course, when \(t=1\) the standard Boltzmann-Gibbs measure and related averages are recovered.
Analogously, we can also introduce a generalized interpolating quenched statistical pressures as
**Definition 9**.: _The interpolating statistical pressure for the multitasking Hebbian neural network is introduced as_
\[\mathcal{A}^{(sup,unsup)}_{N,K\beta,d,M,r}(J,t)\coloneqq\frac{1}{N}\mathbb{E }\left[\ln\mathcal{Z}^{(sup,unsup)}_{N,K,\beta,d,M,r}(\mathbf{\eta};J,t)\right], \tag{12}\]
_and, in the thermodynamic limit,_
\[\mathcal{A}^{(sup,unsup)}_{K,\beta,d,M,r}(J,t)\coloneqq\lim_{N\to\infty} \mathcal{A}^{(sup,unsup)}_{N,K,\beta,d,M,r}(J,t). \tag{13}\]
_Obviously, by setting \(t=1\) in the interpolating pressures we recover the original ones, namely \(\mathcal{A}^{(sup,unsup)}_{K,\beta,d,M,r}(J)=\mathcal{A}^{(sup,unsup)}_{K, \beta,d,M,r}(J,t=1)\), which we finally evaluate at \(J=0\)._
We are now ready to state the next
**Theorem 1**.: _In the thermodynamic limit (\(N\to\infty\)) and in the low-storage regime (\(K/N\to 0\)), the quenched statistical pressure of the multitasking Hebbian network - trained under supervised or unsupervised learning - reads as_
\[\mathcal{A}^{(sup,unsup)}_{K,\beta,d,M,r}(J) = \mathbb{E}\left\{\ln\left[2\cosh\left(J\sum_{\mu=1}^{K}\xi^{\mu}+\frac{\beta}{1-d}\sum_{\mu=1}^{K}\bar{n}_{\mu}\hat{\eta}^{\mu}\right)\right]\right\}-\frac{\beta}{1-d}(1+\rho)\sum_{\mu=1}^{K}\bar{n}_{\mu}^{2}. \tag{3.7}\]
_where \(\mathbb{E}=\mathbb{E}_{\xi}\mathbb{E}_{(\eta|\xi)}\), \(\hat{\eta}^{\mu}=\frac{1}{Mr}\sum_{a=1}^{M}\eta^{\mu,a}\), and the values \(\bar{n}_{\mu}\) must fulfill the following self-consistent equations_
\[\bar{n}_{\mu}=\frac{1}{(1+\rho)}\mathbb{E}\left\{\tanh\left[\frac{\beta}{(1-d) }\sum_{\nu=1}^{K}\bar{n}_{\nu}\hat{\eta}^{\nu}\right]\hat{\eta}^{\mu}\right\},\quad\forall\mu\in(1,...,K), \tag{3.8}\]
_as these values of the order parameters are extremal for the statistical pressure \(\mathcal{A}_{K,\beta,d,M,r}^{(sup,unsup)}(J=0)\)._
**Corollary 1**.: _By considering the auxiliary field \(J\) coupled to \(m_{\mu}\) and recalling that \(\lim_{N\to\infty}m_{\mu}=\bar{m}_{\mu}\), we can write down a self-consistent equation also for the Mattis magnetization as \(\bar{m}_{\mu}=\partial_{J}\mathcal{A}_{K,\beta,d,M,r}^{(sup,unsup)}(J)_{|J=0}\), thus we have_
\[\bar{m}_{\mu}=\mathbb{E}\left\{\tanh\left[\frac{\beta}{(1-d)}\sum_{\nu=1}^{K }\bar{n}_{\nu}\hat{\eta}^{\nu}\right]\xi^{\mu}\right\},\quad\forall\mu\in(1,...,K). \tag{3.9}\]
For the proof of Theorem 1 and of Corollary 1 we refer to Appendix E.1.
We highlight that the expressions of the quenched statistical pressure for a network trained with or without the supervision of a teacher do actually coincide: intuitively, this happens because we are considering only a few archetypes (i.e. we work at low load); consequently, the minima of the cost function are well separated and there is only a negligible role of the teacher in shaping the landscape to avoid overlaps in their basins of attraction. Clearly, this is expected to be no longer true in the high-load setting and, indeed, it is proven not to hold for non-diluted patterns, where supervised and unsupervised protocols give rise to different outcomes [5, 13]. From a mathematical perspective, the fact that, whatever the learning procedure, the expression of the quenched statistical pressure is always the same is a consequence of standard concentration of measure arguments [17, 45] as, in the \(N\to\infty\) limit, beyond eq. (3.2), it is also \(\mathcal{P}(n_{\mu,a})\to\delta(n_{\mu,a}-\bar{n}_{\mu})\).
The self-consistent equations (3.9) have been solved numerically for several values of the parameters and results for \(K=2\) and \(K=3\) are shown in Fig. 3 (where also the values of the cost function are reported) and Fig. 4 respectively. We also checked the validity of these results by comparing them with the outcomes of Monte Carlo simulations, finding an excellent asymptotic agreement; further, in the large \(M\) limit, the magnetizations eventually converge to the values predicted by the theory developed in the storing framework, see eq. (2.6). Therefore, in both scenarios, the hierarchical or parallel organization of the magnetization's amplitudes is recovered: beyond the numerical evidence just mentioned, in Appendix D an analytical proof is provided.
#### 3.1.1 Low-entropy data-sets: the Big-Data limit
As discussed in Sec. 2.2, the parameter \(\rho\) quantifies the amount of information needed to describe the original message \(\mathbf{\xi}^{\mu}\) given the set of related examples \(\{\mathbf{\eta}^{\mu,a}\}_{a=1,...,M}\). In this section we focus on the case \(\rho\ll 1\) that corresponds to a highly-informative data-set; we recall that in the limit \(\rho\to 0\) we get a data-set where either the items (\(r\to 1\)) or their empirical average (\(M\to\infty\), \(r\) finite) coincide with the archetypes, in such a way that the theory collapses to the standard Hopfield reference.
As explained in Appendix E.2, we start from the self-consistent equations (3.8)-(3.9) and we exploit the Central Limit Theorem to write \(\hat{\eta}^{\mu}\sim\xi^{\mu}\left(1+\lambda_{\mu}\sqrt{\rho}\right)\), where \(\lambda_{\mu}\sim\mathcal{N}(0,1)\). In this way we reach the simpler expressions given by the next
**Proposition 1**.: _In the low-entropy data-set scenario, preserving the low storage and thermodynamic limit assumptions, the two sets of order parameters of the theory, \(\bar{m}_{\mu}\) and \(\bar{n}_{\mu}\) become related by the
following equations_
\[\bar{n}_{\mu} = \frac{\bar{m}_{\mu}}{(1+\rho)}+\beta^{{}^{\prime}}\frac{\rho\,\bar{n}_{\mu}}{(1+\rho)}\mathbb{E}_{\xi,Z}\left\{\left[1-\tanh^{2}\left(g(\beta,\boldsymbol{\xi},Z,\bar{\boldsymbol{n}})\right)\right](\xi^{\mu})^{2}\right\}, \tag{3.10}\] \[\bar{m}_{\mu} = \mathbb{E}_{\xi,Z}\left\{\tanh\left[g(\beta,\boldsymbol{\xi},Z,\bar{\boldsymbol{n}})\right]\xi^{\mu}\right\}, \tag{3.11}\]
_where_
\[g(\beta,\boldsymbol{\xi},Z,\bar{\boldsymbol{n}})=\beta^{{}^{\prime}}\sum_{ \nu=1}^{K}\bar{n}_{\nu}\xi^{\nu}+\beta^{{}^{\prime}}\,Z\sqrt{\rho\,\sum_{\nu= 1}^{K}\bar{n}_{\nu}^{2}\left(\xi^{\nu}\right)^{2}} \tag{3.12}\]
_and \(Z\sim\mathcal{N}(0,1)\) is a standard Gaussian variable. Furthermore, to lighten the notation and assuming \(d\neq 1\) with no loss of generality, we posed_
\[\beta^{{}^{\prime}}=\frac{\beta}{1-d}. \tag{3.13}\]
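Equations (3.10)-(3.11) can be solved numerically by plain fixed-point iteration, averaging exactly over \(\boldsymbol{\xi}\in\{-1,0,+1\}^{K}\) and over \(Z\) by Gauss-Hermite quadrature; the following sketch (numpy assumed; all parameter values illustrative) gives the idea:

```python
# Illustrative solver for the low-entropy self-consistencies (3.10)-(3.11).
import numpy as np
from itertools import product

def solve(K, d, beta, rho, iters=500):
    bp = beta / (1 - d)                                  # beta' of Eq. (3.13)
    z, w = np.polynomial.hermite_e.hermegauss(41)
    w = w / w.sum()                                      # expectation over N(0,1)
    xis = np.array(list(product([-1, 0, 1], repeat=K)))  # all 3**K digit combos
    p = np.prod(np.where(xis == 0, d, (1 - d) / 2), axis=1)
    n = (1 - d) * d ** np.arange(K)                      # hierarchical initial guess
    m = n.copy()
    for _ in range(iters):
        g = bp * (xis @ n)[:, None] + bp * np.sqrt(rho * (xis**2 @ n**2))[:, None] * z
        T = np.tanh(g)                                   # tanh of g, Eq. (3.12)
        m = xis.T @ (p * (T @ w))                        # Eq. (3.11)
        c = (xis**2).T @ (p * ((1 - T**2) @ w))
        n = (m + bp * rho * n * c) / (1 + rho)           # Eq. (3.10)
    return n, m

print(*map(lambda v: np.round(v, 3), solve(K=3, d=0.5, beta=20.0, rho=0.1)))
```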
Figure 3: Snapshots of the cost function (upper plots), where we use the label \(E\) for _energy_, and of the magnetizations (lower plots) for data-sets generated by \(K=2\) archetypes and at different entropies, in the noiseless limit \(\beta\to\infty\). Starting from \(\rho=0.0\) we see that the hierarchical regime (black lines) dominates at relatively mild dilution values (i.e., the energy pertaining to this configuration is lower w.r.t. the parallel regime), while for \(d\to 1\) the hierarchical ordering naturally collapses to the parallel regime (red lines), where all the magnetizations acquire the same values. Further note how, by increasing the entropy in the data-set (e.g. for \(\rho=0.1\) and \(\rho=0.4\)), the domain of validity of the parallel regime enlarges (much as increasing \(\beta\) in the network, see Fig. 1). The vertical blue lines mark the transitions between these two regimes as captured by Statistical Mechanics: it corresponds to switching from the white to the green regions of the phase diagrams of Fig. 6.

The regime \(\rho\ll 1\), beyond being an interesting one (e.g., it can be seen as a _big-data_ \(M\to\infty\) limit of the theory), offers a crucial advantage because of the above emerging proportionality relation between \(\bar{n}\) and \(\bar{m}\) (see eq. 3.10). In fact, the model is supplied only with examples (upon which the \(n_{\mu}\)'s are defined), while it is not aware of archetypes (upon which the \(m_{\mu}\)'s are defined); yet we can use this relation to recast the self-consistent equation for \(\bar{n}\) into a self-consistent equation for \(\bar{m}\), such that its numerical solution in the space of the control parameters allows us to get the phase diagram of such a neural network more straightforwardly.
Further, we can find out explicitly the thresholds for learning, namely the minimal amount of examples (given the level of noise \(r\), the number of archetypes to handle \(K\), etc.) that guarantees that the network can safely infer the archetypes from the supplied data-set. To obtain these thresholds we have to deepen the ground-state structure of the network, that is, we now handle Eqs. (3.10)-(3.11) to compute their zero fast-noise limit (\(\beta\to\infty\)). As detailed in the Appendix E.2 (see Corollary 3), by taking the limit \(\beta\to\infty\) in eqs. (3.10)-(3.11) we get
\[\bar{m}_{\mu}\,=\,\mathbb{E}_{\xi}\left\{\mathrm{erf}\left[\left(\sum_{\nu=1}^{K}\bar{m}_{\nu}\xi^{\nu}\right)\left(2\rho\sum_{\nu=1}^{K}\bar{m}_{\nu}^{2}\left(\xi^{\nu}\right)^{2}\right)^{-1/2}\right]\xi^{\mu}\right\}\,. \tag{3.14}\]
Having reached a relatively simple expression for \(\bar{m}_{\mu}\), we can further manipulate it and try to get information about the existence of a lower-bound value for \(M\), denoted by \(M_{\otimes}\), which ensures that the network has been supplied with sufficient information to learn and retrieve the archetypes.
Figure 4: _Behaviour of the Mattis magnetizations as more and more examples are supplied to the network._ Monte Carlo numerical checks (colored dots, \(N=6000\)) for a diluted network with \(r=0.1\) and \(K=3\) are in plain agreement with the theory: solutions of the self-consistent equation for the Mattis magnetizations reported in the Corollary 1 are shown as solid lines. As dilution increases, the network behavior departs from a Hopfield-like retrieval (\(d=0.1\)) where just the blue magnetization is raised (serial pattern recognition) to the hierarchical regime (\(d=0.25\) and \(d=0.55\)) where multiple patterns are simultaneously retrieved with different amplitudes, while for higher values of dilution the network naturally evolves toward the parallel regime (\(d=0.75\)) where all the magnetizations are raised and with the same strength. Note also the asymptotic agreement with the dotted lines, whose values are those predicted by the multitasking Hebbian storage [2].
Setting \(\beta\to\infty\), we expect that the magnetizations fulfill the hierarchical organization, namely \((\bar{m}_{1},\bar{m}_{2},\ldots)=(1-d)(1,d,\ldots)\) and (3.14) becomes
\[\bar{m}_{\mu}\sim\frac{1-d}{2}\mathbb{E}_{\xi^{\nu\neq\mu}}\left\{\mathrm{erf} \left[\frac{d^{\mu}+\sum\limits_{\nu\neq\mu}^{K}d^{\nu}\xi^{\nu}}{\sqrt{2\rho} \sqrt{d^{2\mu}+\sum\limits_{\nu\neq\mu}^{K}d^{2\nu}\left(\xi^{\nu}\right)^{2}} }\right]+\mathrm{erf}\left[\frac{d^{\mu}-\sum\limits_{\nu\neq\mu}^{K}d^{\nu} \xi^{\nu}}{\sqrt{2\rho}\sqrt{d^{2\mu}+\sum\limits_{\nu\neq\mu}^{K}d^{2\nu} \left(\xi^{\nu}\right)^{2}}}\right]\right\}\,, \tag{3.15}\]
where we highlighted that the expectation is over all the archetypes but the \(\mu\)-th one under inspection.
Next, we introduce a confidence interval, ruled by \(\Theta\), and we require that
\[\bar{m}_{\mu}>(1-d)d^{\mu-1}\mathrm{erf}\left[\Theta\right]. \tag{3.16}\]
In order to quantify the critical number of examples \(M_{\otimes}^{\mu}\) needed for a successful learning of the archetype \(\mu\) we can exploit the relation
\[\mathbb{E}_{\xi^{\nu\neq\mu}}\Big{\{}\mathrm{erf}\Big{[}f(\xi)\Big{]}\Big{\}} \geq\min_{\xi^{\nu\neq\mu}}\!\Big{\{}\mathrm{erf}\Big{[}f(\xi)\Big{]}\Big{\}}\,, \tag{3.17}\]
where in our case
\[\begin{split}\min_{\xi^{\nu\neq\mu}}\!\Big{\{}\mathrm{erf} \Big{[}f(\xi)\Big{]}\Big{\}}&=\min_{\xi^{\nu\neq\mu}}\left\{ \mathrm{erf}\left[\frac{d^{\mu}+\sum\limits_{\nu\neq\mu}^{K}d^{\nu}\xi^{\nu}}{ \sqrt{2\rho}\sqrt{d^{2\mu}+\sum\limits_{\nu\neq\mu}^{K}d^{2\nu}\left(\xi^{\nu }\right)^{2}}}\right]+\mathrm{erf}\left[\frac{d^{\mu}-\sum\limits_{\nu\neq\mu}^{ K}d^{\nu}\xi^{\nu}}{\sqrt{2\rho}\sqrt{d^{2\mu}+\sum\limits_{\nu\neq\mu}^{K}d^{2\nu} \left(\xi^{\nu}\right)^{2}}}\right]\right\}\\ &=\,2\,\mathrm{erf}\left[\left(d^{\mu}-\sum\limits_{\nu\neq\mu}^{ K}d^{\nu}\right)\left(2\rho\sum\limits_{\nu=1}^{K}d^{2\nu}\right)^{-1/2}\right]. \end{split} \tag{3.18}\]
Thus, using the previous relation in (3.16), the following inequality must hold
\[\mathrm{erf}\left[\left(d^{\mu}-\sum\limits_{\nu\neq\mu}^{K}d^{\nu}\right) \left(2\rho\sum\limits_{\nu=1}^{K}d^{2\nu}\right)^{-1/2}\right]=\mathrm{erf} \left[\sqrt{\frac{1+d}{2\rho(1-d)}}\frac{2d^{\mu-1}-1-2d^{\mu}+d^{K}}{\sqrt{1- d^{2K}}}\right]>d^{\mu-1}\mathrm{erf}\left[\Theta\right] \tag{3.19}\]
and we can write the next
**Proposition 2**.: _In the noiseless limit \(\beta\to\infty\), the critical threshold for learning \(M_{\otimes}\) (in the number of required examples) depends on the data-set noise \(r\), the dilution \(d\), the amount of archetypes to handle \(K\) (and of course on the amplitude of the chosen confidence interval \(\Theta\)) and reads as_
\[M_{\otimes}^{\mu}(\Theta,r,d,K)>2\left(\mathrm{erf}^{-1}\left[d^{\mu-1} \mathrm{erf}\left[\Theta\right]\right]\right)^{2}\left(\frac{1-r^{2}}{r^{2}} \right)\frac{(1-d)(1-d^{2K})}{(1+d)(2d^{\mu-1}-1-2d^{\mu}+d^{K})^{2}} \tag{3.20}\]
_and in the plots (see Figure 5) we use \(\Theta=1/\sqrt{2}\) as this choice corresponds to the fairly standard condition \(\mathbb{E}_{\xi}\mathbb{E}_{(\eta|\xi)}[\xi_{i}^{1}h_{i}^{(1)}(\boldsymbol{ \xi}^{1})]>\sqrt{\mathrm{Var}[\xi_{i}^{1}h_{i}^{(1)}(\boldsymbol{\xi}^{1})]}\) when \(\mu=1\)._
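A direct sketch (numpy and scipy assumed) of this threshold:

```python
# Illustrative evaluation of the learning threshold of Eq. (3.20).
import numpy as np
from scipy.special import erf, erfinv

def M_threshold(mu, r, d, K, theta=1 / np.sqrt(2)):
    pref = 2 * erfinv(d ** (mu - 1) * erf(theta)) ** 2 * (1 - r**2) / r**2
    den = (1 + d) * (2 * d ** (mu - 1) - 1 - 2 * d**mu + d**K) ** 2
    return pref * (1 - d) * (1 - d ** (2 * K)) / den

print(M_threshold(mu=1, r=0.2, d=0.3, K=3))   # examples needed for archetype 1
```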
To quantify these thresholds for learning, in Fig. 5 we report the required number of examples to learn the first archetype (out of \(K=2,3,4,5\), as shown in the various panels) as a function of the dilution of the network.
#### 3.1.2 Ergodicity breaking: the critical phase transition
The main interest in the statistical mechanical approach to neural networks lies in inspecting their emerging capabilities, that typically appear once ergodicity gets broken: as a consequence, finding the boundaries of validity of ergodicity is a classical starting point to deepen these aspects.
To this task, hereafter we provide a systematic fluctuation analysis of the order parameters: the underlying idea is to check when, starting from the high noise limit (\(\beta\to 0\), where everything is uncorrelated and simple Probability arguments apply straightforwardly), these fluctuations diverge, as that defines the onset of ergodicity breaking as stated in the next
**Theorem 2**.: _The ergodic region, in the space of the control parameters \((\beta,d,\rho)\) is confined to the half-plane defined by the critical line_
\[\beta_{c}=\frac{1}{1-d}, \tag{3.21}\]
_whatever the entropy of the data-set \(\rho\)._
Proof.: The idea of the proof is the same we used so far, namely Guerra interpolation, but on the rescaled fluctuations rather than directly on the statistical pressure.
Figure 5: We plot the logarithm of the critical number of examples (required to raise the first magnetization) \(M_{\otimes}^{1}\) at different loads \(K=2,3,4,5\) and as a function of the dilution of the networks, for different noise values of the data-set (as shown in the legend). Note the divergent behavior of \(M_{\otimes}^{1}\) when approaching the critical dilution level \(d_{c}(K)=d_{1}\), as predicted by the parallel Hebbian storage limit [2; 4]: this is the crossover between the two multi-tasking regimes, hierarchical vs parallel; hence, solely at the value of dilution \(d_{1}\), there is no sharp behavior to infer and, correctly, the network cannot accomplish learning. This is evident from (3.20), where the critical amount of examples to correctly infer the archetype is reported: its denominator reduces to \(1-2d+d^{K}\) and, for \(\mu=1\), it becomes zero when \(d\to d_{1}\).
The rescaled fluctuations \(\tilde{n}_{\nu}\) of the magnetizations are defined as
\[\tilde{n}_{\nu}=\sqrt{N}(n_{\nu}-\bar{n}_{\nu}). \tag{3.22}\]
We recall that the interpolating framework we are using, for \(t\in(0,1)\), is defined via
\[Z(t)=\sum_{\{\sigma\}}\exp\left[\frac{\beta}{2}tN(1+\rho)\sum_{\mu=1}^{K}n_{\mu}^{2}+N(1-t)\beta(1+\rho)\sum_{\mu=1}^{K}N_{\mu}n_{\mu}\right], \tag{3.23}\]
and it is a trivial exercise to show that, for any smooth function \(F(\sigma)\) the following relation holds:
\[\frac{d\langle F\rangle}{dt}=\frac{\beta}{2}(1+\rho)\left(\langle F\sum_{\nu} \tilde{n}_{\nu}^{2}\rangle-\langle F\rangle\langle\sum_{\nu}\tilde{n}_{\nu}^{ 2}\rangle\right), \tag{3.24}\]
such that by choosing \(F=\tilde{n}_{\mu}^{2}\) we can write
\[\begin{split}\frac{d\langle\tilde{n}_{\mu}^{2}\rangle}{dt}&=\frac{\beta}{2}(1+\rho)\left(\langle\tilde{n}_{\mu}^{2}\sum_{\nu}\tilde{n}_{\nu}^{2}\rangle-\langle\tilde{n}_{\mu}^{2}\rangle\langle\sum_{\nu}\tilde{n}_{\nu}^{2}\rangle\right)\\ &=\frac{\beta}{2}(1+\rho)\left(\langle\tilde{n}_{\mu}^{4}\rangle+\langle\tilde{n}_{\mu}^{2}\sum_{\nu\neq\mu}\tilde{n}_{\nu}^{2}\rangle-\langle\tilde{n}_{\mu}^{2}\rangle^{2}-\langle\tilde{n}_{\mu}^{2}\rangle\langle\sum_{\nu\neq\mu}\tilde{n}_{\nu}^{2}\rangle\right)\\ &=\beta(1+\rho)\langle\tilde{n}_{\mu}^{2}\rangle^{2}\end{split} \tag{3.25}\]
thus we have
\[\langle\tilde{n}_{\mu}^{2}\rangle_{t}=\frac{\langle\tilde{n}_{\mu}^{2}\rangle _{t=0}}{1-t\beta(1+\rho)\langle\tilde{n}_{\mu}^{2}\rangle_{t=0}} \tag{3.26}\]
where the Cauchy condition \(\langle\tilde{n}_{\mu}^{2}\rangle_{t=0}\) reads
\[\begin{split}\langle\tilde{n}_{\mu}^{2}\rangle_{t=0}&=\,N\mathbb{E}_{\xi}\mathbb{E}_{(\eta|\xi)}\frac{\sum_{\{\sigma\}}\left(\frac{1}{N^{2}(1+\rho)^{2}}\sum_{i,j}\hat{\eta}_{i}^{\mu}\hat{\eta}_{j}^{\mu}\sigma_{i}\sigma_{j}+N_{\mu}^{2}-2\frac{1}{N(1+\rho)}\sum_{i}\hat{\eta}_{i}^{\mu}\sigma_{i}N_{\mu}\right)\exp\left[\beta\sum_{\nu}N_{\nu}\sum_{i}\hat{\eta}_{i}^{\nu}\sigma_{i}\right]}{\sum_{\{\sigma\}}\exp\left[\beta\sum_{\nu}N_{\nu}\sum_{i}\hat{\eta}_{i}^{\nu}\sigma_{i}\right]}\\ &=\,\frac{1-d}{(1+\rho)}-N_{\mu}^{2}.\end{split} \tag{3.27}\]
Evaluating \(\langle\tilde{n}_{\mu}^{2}\rangle_{t}\) at \(t=1\), that is, when the interpolation scheme recovers the original model, we finally get
\[\langle\tilde{n}_{\mu}^{2}\rangle_{t=1}=\frac{1-d-(1+\rho)N_{\mu}^{2}}{[1- \beta\left(1-d-(1+\rho)N_{\mu}^{2}\right)]} \tag{3.28}\]
namely the rescaled fluctuations are described by a meromorphic function whose pole is
\[\beta=\frac{1}{\left(1-d-(1+\rho)N_{\mu}^{2}\right)}\xrightarrow{N_{\mu}=0} \beta_{{}_{C}}=\frac{1}{1-d}, \tag{3.29}\]
that is the critical line reported in the statement of the theorem.
### Stability analysis via standard Hessian: the phase diagram
The set of solutions of the self-consistent equations for the order parameters (3.10) provides a plethora of candidate states whose stability must be investigated to understand which solution is preferred as the control parameters vary: this procedure results in the phase diagrams of the network, namely plots in the space of the control parameters where different regions pertain to different macroscopic computational capabilities.
Remembering that \(A_{K,\beta,d,M,r}(\bar{\mathbf{n}})=-\beta f_{K,\beta,d,M,r}(\bar{\mathbf{n}})\) (where \(f_{K,\beta,d,M,r}(\bar{\mathbf{n}})\) is the free energy of the model), in order to evaluate the stability of these solutions, we need to check the sign of the second derivatives of the free energy. More precisely, we need to build up the Hessian, a matrix \(\mathbf{A}\) whose elements are
\[\frac{\partial^{2}f(\bar{\mathbf{n}})}{\partial n^{\mu}\partial n^{\nu}}=A^{\mu\nu}\,. \tag{3.30}\]
Then, we evaluate and diagonalize \(\mathbf{A}\) at a point \(\bar{\mathbf{n}}\) representing a particular solution of the self-consistency equation (3.10): the numerical results are reported in the phase diagrams provided in Fig. 6.
We find straightforwardly
\[A^{\mu\nu}=(1+\rho)\left[[1-\beta(1-d)]+\rho\beta\mathbb{E}\left\{\mathcal{T}_{K,\beta,\rho}^{2}(\bar{\mathbf{n}},z)(\xi^{\mu})^{2}\right\}\right]\delta^{\mu\nu}+Q^{\mu\nu} \tag{3.31}\]
where we set \(\mathcal{T}_{K,\beta,\rho}(\bar{\mathbf{n}},z)=\tanh\left(\beta\sum_{\lambda=1}^{K}\bar{n}_{\lambda}\xi^{\lambda}+z\beta\sqrt{\rho\sum_{\lambda=1}^{K}(\bar{n}_{\lambda}\xi^{\lambda})^{2}}\right)\) and
\[Q^{\mu\nu}=\beta\mathbb{E}\left\{\left[\mathcal{T}_{K,\beta,\rho}^{2}(\bar{\mathbf{n}},z)\right]\xi^{\mu}\xi^{\nu}\right\}(1-\delta^{\mu\nu})+2\rho\beta^{2}\mathbb{E}\left\{\left[\mathcal{T}_{K,\beta,\rho}(\bar{\mathbf{n}},z)\right]\left[1-\mathcal{T}_{K,\beta,\rho}^{2}(\bar{\mathbf{n}},z)\right]\left[\bar{n}_{\nu}\xi^{\nu}+\bar{n}_{\mu}\xi^{\mu}\right]\xi^{\mu}\xi^{\nu}\right\}+\ldots\]

#### 3.2.2 Pure state: \(\bar{\boldsymbol{n}}=\bar{n}_{d,\rho,\beta}(1,0,\ldots,0)\)

For the pure state one has \({\cal T}=\tanh\left[\beta\bar{n}\xi^{\mu}(1+z\sqrt{\rho})\right]\). It is easy to check that \(\mathbf{A}\) becomes diagonal, with
\[A^{\mu\mu} = (1+\rho)\Big{[}1-\beta(1-d)+\beta(1-d)\,\mathbb{E}\left\{{\cal T}^{ 2}\right\}\Big{]}\] \[+4\beta^{2}\rho\bar{n}(1-d)\mathbb{E}\left\{{\cal T}\Big{[}1-{ \cal T}^{2}\Big{]}\right\}+2\beta^{3}\rho^{2}\bar{n}^{2}(1-d)\mathbb{E}\left\{ \Big{[}1-3{\cal T}^{2}\Big{]}\Big{[}1-{\cal T}^{2}\Big{]}\right\}\,,\] \[A^{\nu\nu\neq\mu} = (1+\rho)\Big{[}1-\beta(1-d)+\beta(1-d)^{2}\,\mathbb{E}\left\{{ \cal T}^{2}\right\}\Big{]}\,.\]
Notice that these eigenvalues do not depend on \(K\), since \({\cal T}\) does not depend on \(K\). Requiring positivity for all the eigenvalues, we get the region in the plane \((d,\beta^{-1})\) where the pure state is stable: this corresponds to the light-blue region in the phase diagrams reported in Fig. 6.
Figure 6: Phase diagram in the dilution-noise (\(d-\beta^{-1}\)) plane for different values of \(K\) and \(\rho\). We highlight that different regions (marked with different colors) represent different operational behavior of the network: in yellow the ergodic solution, in light-blue the pure state solution (that is, solely one magnetization different from zero), in white the hierarchical regime (that is, several magnetizations differ from zero and they all assume different values) and in light-green the parallel regime (several magnetizations differ from zero but their amplitude is the same for all).

We stress that these pure state solutions, namely the standard Hopfield-type ones, are never stable in the ground state (\(\beta^{-1}\to 0\)) whenever \(d\neq 0\), as the multi-tasking setting prevails. Solely at positive values of the noise \(\beta^{-1}\) is this single-pattern retrieval state possible, as the role of the noise is to destabilize the weakest magnetization of the hierarchical displacement (_vide infra_).
#### 3.2.3 Parallel state: \(\bar{\boldsymbol{n}}=\bar{n}_{d,\rho,\beta}(1,\ldots,1)\)
In this case the structure of the solution has the form of a symmetric mixture state corresponding to the unique self-consistency equation for all \(\mu=1,\ldots,K\), namely
\[\bar{n}\,=\,\frac{\mathbb{E}_{\xi,Z}\left\{\tanh\left[g(\beta,\boldsymbol{\xi},Z,\bar{n})\right]\xi^{\mu}\right\}}{(1+\rho)}+\beta\frac{\rho\,\bar{n}}{(1+\rho)}\mathbb{E}_{\xi,Z}\left\{\left[1-\tanh^{2}\left(g(\beta,\boldsymbol{\xi},Z,\bar{n})\right)\right](\xi^{\mu})^{2}\right\}, \tag{3.37}\]
where
\[g(\beta,\boldsymbol{\xi},Z,\bar{n})=\beta\bar{n}\left[\sum_{\lambda=1}^{K}\xi^{\lambda}+Z\sqrt{\rho\,\sum_{\lambda=1}^{K}\left(\xi^{\lambda}\right)^{2}}\right]. \tag{3.38}\]
In this case, the diagonal terms of \(\boldsymbol{A}\) are
\[\begin{array}{rcl}a=A^{\mu\mu}\,=&\Big{[}1-\beta(1-d)+\beta\,\mathbb{E}\left \{\left[\mathcal{T}^{2}\right](\xi^{\mu})^{2}\right\}\Big{]}(1+\rho)\\ \\ &&+4\beta^{2}\rho\bar{n}\mathbb{E}\left\{\mathcal{T}\Big{[}1-\mathcal{T}^{2} \Big{]}\xi^{\mu}\right\}+2\beta^{3}\rho^{2}\bar{n}^{2}\mathbb{E}\left\{\Big{[} 1-3\mathcal{T}^{2}\Big{]}\Big{[}1-\mathcal{T}^{2}\Big{]}(\xi^{\mu})^{2} \right\}\,,\end{array} \tag{3.39}\]
while the off-diagonal ones are
\[b=A^{\mu\nu\neq\mu}\,=\,\beta\mathbb{E}\left\{\left[\mathcal{T}^{2}\right] \xi^{\mu}\xi^{\nu}\right\}+2\rho^{2}\beta^{3}\bar{n}^{2}\mathbb{E}\left\{\left[ 1-3\mathcal{T}^{2}\right]\left[1-\mathcal{T}^{2}\right](\xi^{\mu}\xi^{\nu})^{ 2}\right\}\,. \tag{3.40}\]
In general, the matrix \(\boldsymbol{A}\) has the following structure:
\[\boldsymbol{A}=\begin{pmatrix}a&b&\cdots&b&b\\ b&a&\cdots&b&b\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ b&b&\cdots&a&b\\ b&b&\cdots&b&a\end{pmatrix} \tag{3.41}\]
This matrix always has only two distinct eigenvalues, namely \(a-b\) (with multiplicity \(K-1\)) and \(a+(K-1)b\); thus, for the stability of the parallel state, after computing (3.39) and (3.40), we only have to check for which points in the \((d-\beta^{-1})\) plane both \(a-b\) and \(a+(K-1)b\) are positive. The region, in the phase diagrams of Fig. 6, where the parallel regime is stable is depicted in green.
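This spectral statement is easy to verify numerically (numpy assumed):

```python
# Quick check: a matrix with a on the diagonal and b elsewhere has eigenvalues
# a-b (with multiplicity K-1) and a+(K-1)b.
import numpy as np

K, a, b = 5, 2.0, 0.3
A = b * np.ones((K, K)) + (a - b) * np.eye(K)
print(np.round(np.linalg.eigvalsh(A), 6))     # -> 1.7 (x4) and 3.2
```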
#### 3.2.4 Hierarchical state: \(\bar{\boldsymbol{n}}=\bar{n}_{d,\rho,\beta}((1-d),d(1-d),d^{2}(1-d),...)\)
In this case the solution has the hierarchical form \(\bar{\boldsymbol{n}}=\bar{n}_{d,\rho,\beta}((1-d),d(1-d),d^{2}(1-d),...)\), and the region left untreated so far in the phase diagram, namely the white region in the plots of Fig. 6, is the one occupied by such a hierarchical regime.
### From the Cost function to the Loss function
We finally comment on the persistence -in the present approach- of the quantifiers used to evaluate the pattern-recognition capabilities of neural networks, i.e. the Mattis magnetizations, also as quantifiers of a good learning process. The standard Cost functions used in the Statistical Mechanics of neural networks (e.g., the Hamiltonians) can be related one-to-one to the standard Loss functions used
in Machine Learning (i.e. the squared sum error functions), namely, once introduced the two Loss functions \(L_{\mu}^{+}:=(1/2N)||\xi^{\mu}-\sigma||^{2}=1-m_{\mu}\) and \(L_{\mu}^{-}=(1/2N)||\xi^{\mu}+\sigma||^{2}=1+m_{\mu}\)5, it is immediate to show that
Footnote 5: Note that in the last passage we naturally highlighted the presence of the Mattis magnetization in these Loss functions.
\[H(\mathbf{\sigma}|\mathbf{\xi})=\frac{-1}{2N}\sum_{i,j}^{N,N}\sum_{\mu}^{K}\xi_{i}^{\mu}\xi_{j}^{\mu}\sigma_{i}\sigma_{j}\equiv-\frac{N}{2}\sum_{\mu}^{K}\left(1-L_{\mu}^{+}\cdot L_{\mu}^{-}\right),\]
thus minimizing the former implies minimizing the latter, such that, if we extremize w.r.t. the neurons we perform pattern recognition (i.e. retrieval), while if we extremize w.r.t. the weights we perform machine learning: indeed, at least in this setting, learning and retrieval are two faces of the same coin (clearly the task here, from a machine learning perspective, is rather simple, as the network is just asked to correctly classify the examples and possibly generalize).
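This identity is straightforward to check numerically; the sketch below uses dense (non-diluted) \(\pm 1\) patterns, for which \(L_{\mu}^{\pm}=1\mp m_{\mu}\) holds exactly (sizes and the random seed are arbitrary).

```python
import numpy as np

rng = np.random.default_rng(1)
N, K = 2000, 4
xi = rng.choice([-1.0, 1.0], size=(K, N))      # dense +-1 patterns
sigma = rng.choice([-1.0, 1.0], size=N)

m = xi @ sigma / N                              # Mattis magnetizations
H = -(xi @ sigma) @ (xi @ sigma) / (2 * N)      # -(1/2N) sum_{ij,mu} xi xi sigma sigma
L_plus = ((xi - sigma) ** 2).sum(axis=1) / (2 * N)   # = 1 - m for dense patterns
L_minus = ((xi + sigma) ** 2).sum(axis=1) / (2 * N)  # = 1 + m for dense patterns

print(np.allclose(H, -(N / 2) * (1 - L_plus * L_minus).sum()))  # True
```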
In Fig. 7 we inspect what happens to these Loss functions -pertaining to the various archetypes- as the Cost function gets minimized: we see that, at difference with the standard Hopfield model (where only one Loss function at a time lowers its value), in this parallel-learning setting several Loss functions (related to different archetypes) are simultaneously lowered, as expected for a parallel learning machine.
## 4 Conclusions
Since the AGS milestones on Hebbian learning dating back to 1985 [9; 10], namely the first comprehensive statistical mechanical theory of the Hopfield model for pattern recognition and associative memory, attractor neural networks have experienced an unprecedented growth and the bulk of techniques developed for spin glasses in these four decades (e.g. replica trick, cavity method, message passing,
Figure 7: Left: Parallel minimization of several (mean square-error) Loss functions \(L_{\pm}=||\xi^{\mu}\pm\sigma||^{2}\) (each pertaining to a different archetype) as the noise in the data-set \(r\) is varied. Here: \(M=25\), \(N=10000\). The horizontal gray dashed lines are the saturation levels of the Loss functions, namely \(1-\frac{d}{2}-(1-d)d^{\mu-1}\). We get \(r_{\otimes}\) (the vertical black line) by the inversion of (3.20). Right: Parallel minimization of several (mean square-error) Loss functions \(L_{\pm}=||\xi^{\mu}\pm\sigma||^{2}\) (each pertaining to a different archetype) as the data-set size \(M\) is varied: as \(M\) grows, the simultaneous minimization of more than one Loss function takes place, at difference with learning via standard Hebbian mechanisms, where one Loss function -dedicated to a single archetype- is minimized at a time. Orange and blue lines pertain to Loss functions of other patterns that, at these levels of dilution and noise, cannot be minimized at once with the previous ones.
interpolation) acts now as a prosperous cornucopia for explaining the emergent information processing capabilities that these networks show as their control parameters are made to vary.
In these regards, it is important to stress how nowadays it is mandatory to optimize AI protocols (as machine learning for complex structured data-sets is still prohibitively expensive in terms of energy consumption [38]) and, en route toward a Sustainable AI (SAI), statistical mechanics may still pave a main theoretical strand: in particular, we highlight how the knowledge of the _phase diagrams_ related to a given neural architecture (that is, the ultimate output of the statistical mechanical approach) allows one to set "a-priori" the machine in the optimal working regime for a given task, thus unveiling a pivotal role of this methodology even for a conscious usage of AI (e.g., it is useless to force a standard Hopfield model beyond its critical storage capacity)6.
Footnote 6: Further, still searching for optimization of resources, such a theoretical approach can also be used to inspect key computational shortcuts (as, e.g., the early stopping criterion faced in Appendix C.1, or the role of the size of the mini-batch used for training [40], or of the flat minima emerging after training [15]).
Focusing on Hebbian learning, however, while the original AGS theory remains a solid pillar and a paradigmatic reference in the field, several extensions are required to keep it up to date with modern challenges: the first generalization we need is to move from a setting where the machine stores already defined patterns (as in the standard Hopfield model) toward a more realistic learning procedure where these patterns are unknown and have to be inferred from examples: the Hebbian storage rule of AGS theory quite naturally generalizes toward both supervised and unsupervised learning prescriptions [5, 13]. This enlarges the space of the control parameters from \(\alpha,\beta\) (or \(K\), \(N\), \(\beta\)) of the standard Hopfield model toward \(\alpha,\beta,\rho\) (or \(K\), \(N\), \(\beta\), \(M\), \(r\)), as we now also deal with a data-set providing \(M\) examples of mean quality \(r\) for each pattern (archetype) or, equivalently, a data-set produced at given entropy \(\rho\).
Once this is accomplished, the second strong limitation of the original AGS theory that must be relaxed is that patterns share the same length, which equals the size of the network (namely, in the standard Hopfield model there are N neurons handling patterns whose length is exactly N for all of them): a more general scenario is provided by dealing with patterns that contain different amounts of information, that is, diluted patterns. Retrieval capabilities of the Hebbian setting at work with diluted patterns have been extensively investigated in the last decade [2, 3, 23, 24, 28, 35, 42, 44, 47] and it has been understood how, dealing with patterns containing sparse entries, the network is automatically able to handle several of them in parallel (a key property of neural networks that is not captured by standard AGS theory). However, the parallel learning of diluted patterns had not been addressed in the Literature, and in this paper we face this problem, confining this first study to the low storage regime, that is, when the number of patterns scales at most logarithmically with the size of the network. Note that this further enlarges the space of the control parameters by introducing the dilution \(d\): we have several control parameters because the network information processing capabilities are enriched w.r.t. the bare Hopfield reference7.
Footnote 7: However, we clarify how it could be inappropriate to speak about structural differences among the standard Hopfield model and the present multitasking counterpart: ultimately these huge behavioral differences are just consequences of the different nature of the data-sets provided to the same Hebbian network during training.
We have shown here that if we supply to the network a data-set equipped with dilution, namely a sparse data-set whose patterns contain -on average- a fraction \(d\) of blank entries (whose value is 0) and, thus, a fraction \((1-d)\) of informative entries (whose values can be \(\pm 1\)), then the network spontaneously undergoes parallel learning and behaves as a multitasking associative memory able to learn, store and retrieve multiple patterns in parallel. Further, focusing on neurons, the Hamiltonian of the model plays as the Cost function for neural dynamics; however, moving the attention to machine learning, we have shown how the latter is one-to-one related to the standard (mean square error) Loss function, and this resulted crucial to prove that, by experiencing a diluted data-set, the network lowers in parallel several Loss functions (one for each pattern that it is learning from the experienced examples).
For mild values of dilution, the preferred arrangement of the Mattis magnetizations is a _hierarchical ordering_, namely the intensities of these signals scale as power laws w.r.t. their information content, \(m_{\mu}\sim d^{\mu-1}\cdot(1-d)\), while at high values of dilution a _parallel ordering_, where all these amplitudes collapse onto the same value, prevails: the phase diagrams of these networks properly capture these different working regions.
Remarkably, confined to the low storage regime (where glassy phenomena can be neglected), the presence (or lack) of a teacher does not alter the above scenario, and the threshold for secure learning, namely the minimal required amount of examples \(M_{\otimes}\) (given the constraints, that is, the noise in the data-set \(r\), the amount of different archetypes \(K\) to cope with, etc.) that guarantees that the network is able to infer the archetypes and thus generalize, is the same for supervised and unsupervised protocols, and its value has been explicitly calculated: this is another key point toward sustainable AI. Clearly there is still a long way to go before a full statistical mechanical theory of extensive parallel processing is ready, yet this paper acts as a first step in this direction and we plan to report further progress in the near future.
## Appendix A A more general sampling scenario
The way in which we add noise over the archetypes to generate the data-set in the main text (see eq. (9)) is a rather peculiar one as, in each example, it preserves both the number and the positions of the lacunae already present in the related archetype. This implies that the noise cannot affect the amplitude of the original signal, i.e. \(\sum_{i}(\eta^{\mu,a}_{i})^{2}=\sum_{i}(\xi^{\mu}_{i})^{2}\) holds for any \(a\) and \(\mu\), while we expect that, with more general kinds of noise acting also on the diluted entries, this property is no longer sharply preserved.
Here we consider the case where the number of blank entries present in \(\mathbf{\xi}^{\mu}\) is preserved only on average in the related sample \(\{\eta^{\mu,a}\}_{a=1,\dots,M}\), so that lacunae can move along the examples: this more realistic kind of noise gives rise to more cumbersome (yet still analytically treatable) calculations, but it should not heavily affect the learning, storing and retrieving capabilities of these networks (as we now prove).
Specifically, here we define the new kind of examples \(\tilde{\eta}^{\mu,a}_{i}\) (which we distinguish from the previous ones \(\eta^{\mu,a}_{i}\) by labeling them with a tilde) in the following way
**Definition 10**.: _Given \(K\) random patterns \(\mathbf{\xi}^{\mu}\) (\(\mu=1,...,K\)), each of length \(N\), whose entries are i.i.d. from_
\[\mathbb{P}(\xi^{\mu}_{i})=\frac{(1-d)}{2}\delta_{\xi^{\mu}_{i},-1}+\frac{(1-d)}{2}\delta_{\xi^{\mu}_{i},+1}+d\,\delta_{\xi^{\mu}_{i},0}, \tag{A.1}\]
_we use these archetypes to generate \(M\times K\) different examples \(\{\tilde{\eta}^{\mu,a}_{i}\}^{a=1,\dots,M}\) whose entries are drawn following_
\[\begin{split}&\mathbb{P}(\tilde{\eta}^{\mu,a}_{i}|\xi^{\mu}_{i}=\pm 1)=A_{\pm}(r,s)\delta_{\tilde{\eta}^{\mu,a}_{i},\xi^{\mu}_{i}}+B_{\pm}(r,s)\delta_{\tilde{\eta}^{\mu,a}_{i},-\xi^{\mu}_{i}}+C_{\pm}(r,s)\delta_{\tilde{\eta}^{\mu,a}_{i},0}\\ &\mathbb{P}(\tilde{\eta}^{\mu,a}_{i}|\xi^{\mu}_{i}=0)=A_{0}(r,s)\delta_{\tilde{\eta}^{\mu,a}_{i},\xi^{\mu}_{i}}+B_{0}(r,s)\delta_{\tilde{\eta}^{\mu,a}_{i},+1}+C_{0}(r,s)\delta_{\tilde{\eta}^{\mu,a}_{i},-1}\end{split} \tag{A.2}\]
_for \(i=1,\dots,N\) and \(\mu=1,\dots,K\), where we pose_
\[\begin{split}& A_{\pm}(r,s)=\frac{1+r}{2}\left[1-\frac{d}{1-d}(1-s)\right]+\frac{d(1-s)(1-r)}{4(1-d)}\,,\qquad A_{0}(r,s)=\frac{1+s}{2}\,,\\ & B_{\pm}(r,s)=\frac{1-r}{2}\left[1-\frac{d}{1-d}(1-s)\right]+\frac{d(1-s)(1+r)}{4(1-d)}\,,\qquad B_{0}(r,s)=\frac{1-s}{4}\,,\\ & C_{\pm}(r,s)=\frac{d}{2(1-d)}(1-s)\,,\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad C_{0}(r,s)=\frac{1-s}{4}\,,\end{split} \tag{A.3}\]
_with \(r,s\in[0;1]\) (whose meaning we specify soon,_ vide infra_)._
Equation (A.2) codes for the new noise; the values of the coefficients presented in (A.3) have been chosen so that all the examples contain, on average, the same fraction \(d\) of null entries as the original archetypes. To see this it is enough to check that the following relation holds for each \(a=1,\dots,M\), \(i=1,\dots,N\) and \(\mu=1,\dots,K\)
\[\mathbb{P}(\tilde{\eta}^{\mu,a}_{i}=0)=\sum_{x\in\{-1,0,1\}}\mathbb{P}(\tilde{\eta}^{\mu,a}_{i}=0|\xi^{\mu}_{i}=x)\mathbb{P}(\xi^{\mu}_{i}=x)=C_{\pm}(r,s)(1-d)+A_{0}(r,s)\,d=d\,. \tag{A.4}\]
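The property (A.4) can also be checked by direct sampling; the following minimal sketch draws archetype entries and the corresponding noisy examples according to (A.1)-(A.3) and verifies that the empirical fraction of blank entries matches \(d\). Parameter values are illustrative and must be such that all the coefficients in (A.3) are valid probabilities (i.e. \(d(1-s)\leq 1-d\)).

```python
import numpy as np

rng = np.random.default_rng(2)
d, r, s, N = 0.3, 0.4, 0.6, 100_000

# coefficients of (A.3)
A_pm = (1 + r) / 2 * (1 - d * (1 - s) / (1 - d)) + d * (1 - s) * (1 - r) / (4 * (1 - d))
B_pm = (1 - r) / 2 * (1 - d * (1 - s) / (1 - d)) + d * (1 - s) * (1 + r) / (4 * (1 - d))
C_pm = d * (1 - s) / (2 * (1 - d))
A0, B0, C0 = (1 + s) / 2, (1 - s) / 4, (1 - s) / 4

xi = rng.choice([-1.0, 0.0, 1.0], p=[(1 - d) / 2, d, (1 - d) / 2], size=N)
eta = np.empty(N)
zero = xi == 0
# informative entries: keep (A_pm), flip (B_pm) or blank (C_pm) the archetype's sign
eta[~zero] = xi[~zero] * rng.choice([1.0, -1.0, 0.0], p=[A_pm, B_pm, C_pm], size=(~zero).sum())
# blank entries: stay blank (A0) or turn into +1 (B0) or -1 (C0)
eta[zero] = rng.choice([0.0, 1.0, -1.0], p=[A0, B0, C0], size=zero.sum())

print(np.mean(eta == 0))  # ~ d, as guaranteed by (A.4)
```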
Having defined the data-set, the Cost function follows straightforwardly in the Hebbian setting as
**Definition 11**.: _Once introduced \(N\) Ising neurons \(\sigma_{i}=\pm 1\) (\(i=1,...,N\)) and the data-set considered in the definition above, the Cost function of the multitasking Hebbian network equipped with not-preserving-dilution noise reads as_
\[\mathcal{H}^{(sup)}_{N,K,M,r,s,d}(\mathbf{\sigma}|\tilde{\mathbf{\eta}})=-\frac{1}{N}\frac{1}{(1-d)(1+\tilde{\rho})}\sum_{\mu=1}^{K}\sum_{i,j=1}^{N,N}\left(\frac{1}{\tilde{r}M}\sum_{a=1}^{M}\tilde{\eta}^{\mu,a}_{i}\right)\left(\frac{1}{\tilde{r}M}\sum_{b=1}^{M}\tilde{\eta}^{\mu,b}_{j}\right)\sigma_{i}\sigma_{j}, \tag{A.5}\]
_where_
\[\tilde{r}=\frac{r}{(1-d)}\left[1-\frac{d}{2}(5-3s)\right]\] (A.6)
_and \(\tilde{\rho}\) is the generalization of the data-set entropy, defined as:_
\[\tilde{\rho}=\frac{1-\tilde{r}^{2}}{M\tilde{r}^{2}}\,.\] (A.7)
**Definition 12**.: _The suitably re-normalized example's magnetizations \(n_{\mu}\) read as_
\[n_{\mu}:=\frac{1}{(1+\tilde{\rho})}\frac{1}{N}\sum_{i=1}^{N}\left(\frac{1}{ \tilde{r}M}\sum_{a=1}^{M}\tilde{\eta}_{i}^{\mu,a}\right)\sigma_{i}\,.\] (A.8)
En route toward the statistical pressure, still preserving Guerra's interpolation as the underlying technique, we give the next
**Definition 13**.: _Once introduced the noise \(\beta\in\mathbb{R}^{+}\), an interpolating parameter \(t\in(0,1)\), the \(K+1\) auxiliary fields \(J\) and \(\psi_{\mu}\) (\(\mu\in(1,...,K)\)), the interpolating partition function related to the model defined by the Cost function (A.5) reads as_
\[\mathcal{Z}^{(sup)}_{\beta,N,K,M,r,s,d}(\boldsymbol{\xi},\boldsymbol{\tilde{ \eta}};J,t)=\sum_{\{\boldsymbol{\sigma}\}}\exp\Bigg{[}\ J\sum_{\mu,i=1}^{K,N} \xi_{i}^{\mu}\sigma_{i}+t\beta N\frac{(1+\tilde{\rho})}{2(1-d)}\sum_{\mu=1}^{ K}n_{\mu}^{2}(\boldsymbol{\sigma})+(1-t)N\sum_{\mu=1}^{K}\psi_{\mu}\,n_{\mu}( \boldsymbol{\sigma})\Bigg{]}.\] (A.9)
_and the interpolating statistical pressure \(\mathcal{A}_{\beta,K,M,r,s,d}=\lim_{N\to\infty}A_{\beta,N,K,M,r,s,d}\) induced by the partition function (A.9) reads as_
\[A_{\beta,N,K,M,r,s,d}(J,t)=\frac{1}{N}\mathbb{E}\Big{[}\ln\mathcal{Z}^{(sup) }_{\beta,N,K,M,r,s,d}(\boldsymbol{\xi},\boldsymbol{\tilde{\eta}};J,t)\Big{]}\] (A.10)
_where \(\mathbb{E}=\mathbb{E}_{\xi}\mathbb{E}_{(\tilde{\eta}|\xi)}\)._
**Remark 3**.: _Of course, as for the model studied in the main text, still relying on Guerra's interpolation technique, we aim to find an explicit expression (in terms of the control and order parameters of the theory) of the interpolating statistical pressure evaluated at \(t=1\) and \(J=0\)._
We thus perform the computations following the same steps as in the previous investigation: the \(t\)-derivative of the interpolating pressure is given by
\[\frac{d\mathcal{A}_{\beta,K,M,r,s,d}(J,t)}{dt}=\,\frac{\beta}{2(1-d)}(1+\tilde {\rho})\sum_{\mu=1}^{K}\langle n_{\mu}^{2}\rangle_{t}-\sum_{\mu=1}^{K}\psi_{ \mu}\langle n_{\mu}\rangle_{t}.\] (A.11)
fixing
\[\psi_{\mu}=\frac{\beta}{1-d}(1+\tilde{\rho})\bar{n}_{\mu}\] (A.12)
and computing the one-body term
\[\begin{split}\mathcal{A}_{\beta,K,M,r,s,d}(J,t=0)&= \mathbb{E}\ln\,\left[2\cosh\left(\sum_{\mu=1}^{K}\psi_{\mu}\frac{1}{(1+\tilde {\rho})}\frac{1}{\tilde{r}M}\sum_{a=1}^{M}\tilde{\eta}_{i}^{\mu,a}+J\sum_{\mu= 1}^{K}\xi^{\mu}\right)\right]\\ &=\mathbb{E}\ln\,\left\{2\cosh\left[\frac{\beta}{1-d}\sum_{\mu=1 }^{K}\bar{n}_{\mu}\left(\frac{1}{\tilde{r}M}\sum_{a=1}^{M}\tilde{\eta}_{i}^{ \mu,a}\right)+J\sum_{\mu=1}^{K}\xi^{\mu}\right]\right\}.\end{split}\] (A.13)
We get the final expression as \(N\to\infty\) such that we can state the next
**Theorem 3**.: _In the thermodynamic limit \((N\to\infty)\) and in the low load regime \((K/N\to 0)\), the quenched statistical pressure of the multitasking Hebbian network equipped with not-preserving-dilution noise, irrespective of the presence of a teacher, reads as_
\[\mathcal{A}_{\beta,K,M,r,s,d}(J)\,=\,\mathbb{E}\left\{\ln\left[2\cosh\left(\beta^{{}^{\prime}}\sum_{\mu=1}^{K}\bar{n}_{\mu}\tilde{\eta}^{\mu}+J\sum_{\mu=1}^{K}\xi^{\mu}\right)\right]\right\}-\frac{\beta^{{}^{\prime}}}{2}(1+\tilde{\rho})\sum_{\mu=1}^{K}\bar{n}_{\mu}^{2}. \tag{A.14}\]
_where \(\beta^{{}^{\prime}}=\beta/(1-d)\), \(\mathbb{E}=\mathbb{E}_{\xi}\mathbb{E}_{(\tilde{\eta}|\xi)}\) and \(\tilde{\eta}^{\mu}=\frac{1}{\tilde{r}M}\sum_{a=1}^{M}\tilde{\eta}^{\mu,a}\), and the values \(\bar{n}_{\mu}\) must fulfill the following self-consistency equations_
\[\bar{n}_{\mu}=\frac{1}{(1+\tilde{\rho})}\mathbb{E}\left\{\left[\tanh\left(\beta^{{}^{\prime}}\sum_{\nu=1}^{K}\bar{n}_{\nu}\tilde{\eta}^{\nu}\right)\right]\tilde{\eta}^{\mu}\right\}\quad\text{for}\;\;\mu=1,\ldots,K\,, \tag{A.15}\]
_that extremize the statistical pressure \(\mathcal{A}_{\beta,K,M,r,s,d}(J=0)\) w.r.t. them._
Furthermore, the simplest path to obtain a self-consistency equation also for the Mattis magnetization \(m_{\mu}\) is to consider the auxiliary field \(J\) coupled to \(m_{\mu}\), namely \(\bar{m}_{\mu}=\nabla_{J}\mathcal{A}_{\beta,K,M,r,s,d}(J)|_{J=0}\), to get
\[\bar{m}_{\mu}=\mathbb{E}\left\{\tanh\left[\beta^{{}^{\prime}}\sum_{\nu=1}^{K}\bar{n}_{\nu}\tilde{\eta}^{\nu}\right]\xi^{\mu}\right\}\quad\text{for}\;\;\mu=1,\ldots,K\,. \tag{A.16}\]
We do not plot these new self-consistency equations as, in the large \(M\) limit, there are no differences w.r.t. those obtained in the main text.
## Appendix B On the data-set entropy \(\rho\)
In this appendix, focusing on a single generic bit, we deepen the relation between the conditional entropy \(H(\xi_{i}^{\mu}|\mathbf{\eta}_{i}^{\mu})\) of a given pixel \(i\) regarding archetype \(\mu\) and the information provided by the data-set regarding such a pixel, namely the block \(\left(\eta_{i}^{\mu,1},\eta_{i}^{\mu,2},\ldots,\eta_{i}^{\mu,M}\right)\), in order to justify why we called \(\rho\) the data-set entropy in the main text. As the calculations differ slightly between the two analyzed models (the one preserving the dilution positions provided in the main text and the generalized one given in the previous appendix), we repeat them model by model for the sake of transparency.
### I: multitasking Hebbian network equipped with not-affecting-dilution noise
Let us focus on the \(\mu\)-th pattern and the \(i\)-th digit, whose related block is
\[\eta_{i}^{\mu}=\left(\eta_{i}^{\mu,1},\eta_{i}^{\mu,2},\ldots,\eta_{i}^{\mu,M}\right); \tag{B.1}\]
the error probability for any single entry is
\[\mathbb{P}(\xi_{i}^{\mu}\neq 0)\,\mathbb{P}(\eta_{i}^{\mu,a}\neq\xi_{i}^{\mu})=(1-d)(1-r)/2 \tag{B.2}\]
and, by applying the majority rule on the block, it is reduced to
\[\mathbb{P}(\xi_{i}^{\mu}\neq 0)\,\mathbb{P}\left(\text{sign}\Big(\sum_{a}\eta_{i}^{\mu,a}\xi_{i}^{\mu}\Big)=-1\right)\underset{M\gg 1}{\approx}\frac{(1-d)}{2}\left[1-\text{erf}\left(\frac{1}{\sqrt{2\rho}}\right)\right]. \tag{B.3}\]
Thus
\[H_{d,r,M}(\boldsymbol{\xi}^{\mu}|\boldsymbol{\eta}^{\mu})=-\left[x(d,r,M)\log_{2}x( d,r,M)+y(d,r,M)\log_{2}y(d,r,M)\right]\] (B.4)
where
\[x(d,r,M)=\frac{(1-d)}{2}\left[1-\mathrm{erf}\left(\frac{1}{\sqrt{2\rho}}\right) \right]\;,\;\;y(d,r,M)=1-x(d,r,M)\,.\] (B.5)
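As a sanity check of (B.3) and (B.5), the sketch below compares the erf estimate of the majority-rule error probability with a direct Monte Carlo simulation, and then evaluates the conditional entropy (B.4); parameter values are illustrative.

```python
import numpy as np
from math import erf, sqrt, log2

rng = np.random.default_rng(3)

def x_theory(d, r, M):
    rho = (1 - r ** 2) / (M * r ** 2)            # data-set entropy
    return (1 - d) / 2 * (1 - erf(1 / sqrt(2 * rho)))

def x_montecarlo(d, r, M, n_trials=200_000):
    # each informative entry is flipped independently with probability (1-r)/2
    flips = rng.random((n_trials, M)) < (1 - r) / 2
    wrong = (M - 2 * flips.sum(axis=1)) < 0       # majority vote disagrees with xi
    return (1 - d) * wrong.mean()

d, r, M = 0.2, 0.1, 400
x = x_theory(d, r, M)
print(x, x_montecarlo(d, r, M))                   # the two estimates are close
print(-(x * log2(x) + (1 - x) * log2(1 - x)))     # conditional entropy (B.4)
```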
### II: multitasking Hebbian network equipped with not-preserving-dilution noise
Let us focus on the \(\mu\)-th pattern and the \(i\)-th digit, whose related block is
\[\tilde{\eta}_{i}^{\mu}=\left(\tilde{\eta}_{i}^{\mu,1},\tilde{\eta}_{i}^{\mu,2 },\ldots,\tilde{\eta}_{i}^{\mu,M}\right);\] (B.6)
the error probability for any single entry is
\[\mathbb{P}(\xi_{i}^{\mu}\neq 0)\mathbb{P}(\tilde{\eta}_{i}^{\mu,a}\xi_{i}^{ \mu}\neq+1|\xi_{i}^{\mu}\neq 0)+\mathbb{P}(\xi_{i}^{\mu}=0)\mathbb{P}( \tilde{\eta}_{i}^{\mu,a}\neq 0|\xi_{i}^{\mu}=0)=d(1-s)\,.\] (B.7)
By applying the majority rule on the block, it is reduced to
\[\begin{split}&\mathbb{P}(\xi_{i}^{\mu}\neq 0)\left[1-\mathbb{P} \Big{(}\mathrm{sign}(\hat{\eta}_{i}^{\mu}\xi_{i}^{\mu})=+1\Big{|}\xi_{i}^{\mu }\neq 0\Big{)}\right]+\mathbb{P}(\xi_{i}^{\mu}=0)\mathbb{P}\Big{(}\mathrm{ sign}[|\hat{\eta}_{i}^{\mu}|]=+1\Big{|}\xi_{i}^{\mu}=0\Big{)}\\ &\underset{M\gg 1}{\approx}\frac{(1-d)}{2}\left\{1-\mathrm{erf} \left[\left(2\tilde{\rho}-\frac{d(1-s)}{(1-d)M\tilde{r}^{2}}\right)^{-1/2} \right]\right\}+\frac{d}{2}\;\left\{1-\mathrm{erf}\left[\left(\frac{1-s}{M \tilde{r}^{2}}\right)^{-1/2}\right]\right\}\,.\end{split}\] (B.8)
Figure 8: Comparison of the numerical solution of the self consistency equations related to the Mattis magnetization in the two models: upper panel is due to the first model (reported in the main text), lower panel reports on the second model (deepened here). Beyond a different transient at small \(M\) the two models behave essentially in the same way.
Thus
\[H_{d,r,s,M}(\xi_{i}^{\mu}|\tilde{\mathbf{\eta}}_{i}^{\mu})=-\left[x(d,r,s,M)\log_{2}x(d,r,s,M)+y(d,r,s,M)\log_{2}y(d,r,s,M)\right] \tag{B.9}\]
where
\[x(d,r,s,M)=\frac{(1-d)}{2}\left\{1-\text{erf}\left[\left(2\tilde{\rho}-\frac{d(1-s)}{(1-d)M\tilde{r}^{2}}\right)^{-1/2}\right]\right\}+\frac{d}{2}\left\{1-\text{erf}\left[\left(\frac{1-s}{M\tilde{r}^{2}}\right)^{-1/2}\right]\right\}\,,\]
\[y(d,r,s,M)=1-x(d,r,s,M)\,. \tag{B.10}\]
Whatever the model, the conditional entropies \(H_{d,r,M}(\xi_{i}^{\mu}|\mathbf{\eta}_{i}^{\mu})\) and \(H_{d,r,s,M}(\xi_{i}^{\mu}|\tilde{\mathbf{\eta}}_{i}^{\mu})\) are monotonically increasing functions of \(\rho\) and \(\tilde{\rho}\), respectively: hence the reason for calling \(\rho\) and \(\tilde{\rho}\) the entropy of the data-set.
## Appendix C Stability analysis: an alternative approach
### Stability analysis via signal-to-noise technique
The standard signal-to-noise technique [8] is a powerful method to investigate the stability of a given neural configuration in the noiseless limit \(\beta\to\infty\): by requiring that each neuron is aligned to its field (the post-synaptic potential it is experiencing, i.e. \(h_{i}\sigma_{i}\geq 0\ \ \forall i\in(1,...,N)\)), this analysis allows one to correctly classify which solution (stemming from the self-consistency equations for the order parameters) is preferred as the control parameters are made to vary, and it can thus play as an alternative route w.r.t. the standard study of the Hessian of the statistical pressure reported in the main text (see Sec. 3.2).
In particular, a revised version of the signal-to-noise technique has recently been developed [11; 12]: in this new formulation it is possible to obtain the self-consistency equations for the order parameters explicitly, so that outcomes from the signal-to-noise analysis can be directly compared with those from statistical mechanics. By comparing the two routes that lead to the same picture, that is, statistical mechanics and the revised signal-to-noise technique, we can better comprehend the working criteria of these neural networks.
We suppose that the network is in the hierarchical configuration prescribed by eq. (6), which we denote as \(\mathbf{\sigma}=\mathbf{\sigma}^{*}\), and we must evaluate the local field \(h_{i}(\mathbf{\sigma}^{*})\) acting on the generic neuron \(\sigma_{i}\) in this configuration to check that \(h_{i}(\mathbf{\sigma}^{*})\sigma_{i}^{*}>0\) is satisfied for any \(i=1,\ldots,N\): should this be the case, the configuration is stable, otherwise it is unstable.
Focusing on the supervised setting with no loss of generality (as we already discussed, the teacher essentially plays no role in the low storage regime) and selecting (arbitrarily) the hierarchical ordering as a test case, we start by re-writing the Hamiltonian (11) as
\[-\mathcal{H}_{N,K,M,r}(\mathbf{\sigma}|\mathbf{\eta})=\sum_{i=1}^{N}h_{i}(\mathbf{\sigma})\sigma_{i}\,, \tag{C.1}\]
where the local fields \(h_{i}\) appear explicitly and are given by
\[h_{i}(\mathbf{\sigma})=\frac{1}{2N\,r^{2}M^{2}(1-d)(1+\rho)}\sum_{\mu=1}^{K}\sum_{j\neq i}^{N}\sum_{a,b}^{M_{\mu},M_{\mu}}\eta_{i}^{\mu,a}\eta_{j}^{\mu,b}\sigma_{j}\,. \tag{C.2}\]
The updating rule for the neural dynamics reads as
\[\sigma_{i}^{(n+1)}=\sigma_{i}^{(n)}\text{sign}\left(\tanh\left[\beta\sigma_{i}^{(n)}h_{i}^{(n)}(\boldsymbol{\sigma}^{(n)})\right]+\Gamma_{i}\right)\ \ \text{with}\ \ \Gamma_{i}\sim\mathcal{U}[-1;+1]\,, \tag{C.3}\]
that, in the zero fast-noise limit \(\beta\to+\infty\), reduces to
\[\sigma_{i}^{(n+1)}=\sigma_{i}^{(n)}\text{sign}\left(\sigma_{i}^{(n)}h_{i}^{(n)}(\boldsymbol{\sigma}^{(n)})\right). \tag{C.4}\]
To inspect the stability of the hierarchical parallel configuration, we initialize the network in such a configuration, i.e., \(\boldsymbol{\sigma}^{(1)}=\boldsymbol{\sigma}^{*}\); then, following Hinton's prescription [21, 48]8, the one-step iteration \(\boldsymbol{\sigma}^{(2)}\) leads to an expression of the magnetization that reads as
Footnote 8: The _early stopping prescription_ given by Hinton and coworkers became soon very popular, yet it has been criticized in some circumstances (in particular where glassy features are expected to be strong and may keep the network out of equilibrium for very long times, see e.g. [7, 18, 39]): we stress that, in the present section, we are assuming the network has already reached equilibrium; further, confined to the low storage inspection, spin glass bottlenecks in thermalization should not be prohibitive.
\[m_{\mu}^{(2)}=\frac{1}{N}\sum_{i=1}^{N}\xi_{i}^{\mu}\sigma_{i}^{(2)}=\frac{1}{N}\sum_{i=1}^{N}\xi_{i}^{\mu}\left[\sigma_{i}^{*}\text{sign}\left(\sigma_{i}^{*}\,h_{i}^{(1)}(\boldsymbol{\sigma}^{*})\right)\right]; \tag{C.5}\]
Next, using the explicit expression of the hierarchical parallel configuration (6), we get
\[\begin{split}m_{1}^{(2)}&=\frac{1}{N}\sum_{i=1}^{N}\left(\xi_{i}^{1}\right)^{2}\text{sign}\left(\sigma_{i}^{*}\,h_{i}^{(1)}(\boldsymbol{\sigma}^{*})\right);\\ m_{\mu>1}^{(2)}&=\frac{1}{N}\sum_{i=1}^{N}\left(\xi_{i}^{\mu}\right)^{2}\prod_{\rho=1}^{\mu-1}\delta\left(\xi_{i}^{\rho}\right)\text{sign}\left(\sigma_{i}^{*}\,h_{i}^{(1)}(\boldsymbol{\sigma}^{*})\right);\end{split} \tag{C.6}\]
by applying the central limit theorem to estimate the sums appearing in the definition of \(h_{i}^{(1)}\) for \(i=1,\ldots,N\), we are able to split, mimicking the standard signal-to-noise technique, a signal contribution (\(\kappa_{1,\mu}^{(1)}\)) and a noise contribution (\(\kappa_{2,\mu}^{(1)}\)) as presented in the following
\[\sigma_{i}^{*}h_{i}^{(1)}(\boldsymbol{\sigma}^{*})\sim\kappa_{1,\mu}^{(1)}+z_{i}\sqrt{\kappa_{2,\mu}^{(1)}}\ \ \ \ \text{with}\ \ z_{i}\sim\mathcal{N}(0,1) \tag{C.7}\]
where
\[\kappa_{1,\mu}^{(1)}\coloneqq\mathbb{E}_{\xi}\mathbb{E}_{(\eta|\xi)}\left[\sigma_{i}^{*}\,h_{i}^{(1)}(\boldsymbol{\sigma}^{*})\right]\ \ \ \ \kappa_{2,\mu}^{(1)}\coloneqq\mathbb{E}_{\xi}\mathbb{E}_{(\eta|\xi)}\left[\sigma_{i}^{*}\,h_{i}^{(1)}(\boldsymbol{\sigma}^{*})\right]^{2} \tag{C.8}\]
Thus, Eq. (C.5) becomes
\[m_{\mu}^{(2)}=\left[\frac{1}{N}\sum_{i=1}^{N}\left(\xi_{i}^{\mu}\right)^{2}\prod_{\rho=1}^{\mu-1}\delta\left(\xi_{i}^{\rho}\right)\text{sign}\left(\kappa_{1,\mu}^{(1)}+z_{i}\sqrt{\kappa_{2,\mu}^{(1)}}\right)\right]\ \ \ \ \text{with}\ \ \mu=1,2,\ldots,K\,. \tag{C.9}\]
For large values of \(N\), the arithmetic mean coincides with the theoretical expectation, thus
\[\frac{1}{N}\sum_{i=1}^{N}g(\xi_{i},z_{i})\ \xrightarrow[N\to\infty]{}\ \mathbb{E}_{\xi,z}[g(\xi,z)]=\mathbb{E}_{\xi}\left[\int\frac{dz}{\sqrt{2\pi}}e^{-\frac{z^{2}}{2}}g(\xi,z)\right]\,. \tag{C.10}\]
therefore, we can rewrite Eq. (C.9) as
\[m_{\mu}^{(2)}\,=\,(1-d)d^{\mu-1}\mathrm{erf}\left[\frac{\kappa_{1,\mu}^{(1)}}{\sqrt{2\left(\kappa_{2,\mu}^{(1)}-\left(\kappa_{1,\mu}^{(1)}\right)^{2}\right)}}\right]\quad\text{ with }\;\mu=1,2,\ldots,K\,. \tag{C.11}\]
While we carry out the computations of \(\kappa_{1,\mu}^{(1)}\) and \(\kappa_{2,\mu}^{(1)}\) in Appendix C.2, here we report only their values, which are
\[\kappa_{1,\mu}^{(1)}=\frac{1}{2}\frac{1}{(1+\rho)}d^{\mu-1}\,,\quad\kappa_{2,\mu}^{(1)}=\frac{1}{4}\frac{1}{1+\rho}d^{\mu-1}\left[\frac{1+d+d^{2}-d^{2K-2\mu+2}}{1+d}\right]. \tag{C.12}\]
So we get
\[m_{\mu}^{(2)}(d,K,\rho)\,=\,(1-d)d^{\mu-1}\mathrm{erf}\left[\frac{1}{\sqrt{2}}\frac{\sqrt{(1+d)d^{\mu-1}}}{\sqrt{(1+\rho)(1+d+d^{2}-d^{2K-2\mu+2})-d^{\mu}-d^{\mu-1}}}\right] \tag{C.13}\]
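Equation (C.13) can be evaluated directly; the sketch below tabulates the one-step magnetizations as the data-set size \(M\) grows (recall \(\rho=(1-r^{2})/(Mr^{2})\)), showing their saturation towards the hierarchical values \((1-d)d^{\mu-1}\). Parameter values are illustrative.

```python
from math import erf, sqrt

def m_one_step(mu, d, K, rho):
    """One-step magnetization of Eq. (C.13)."""
    num = sqrt((1 + d) * d ** (mu - 1))
    den = sqrt((1 + rho) * (1 + d + d ** 2 - d ** (2 * K - 2 * mu + 2))
               - d ** mu - d ** (mu - 1))
    return (1 - d) * d ** (mu - 1) * erf(num / (sqrt(2) * den))

d, K, r = 0.1, 3, 0.1
for M in (10, 100, 1000, 10_000):
    rho = (1 - r ** 2) / (M * r ** 2)
    print(M, [round(m_one_step(mu, d, K, rho), 3) for mu in range(1, K + 1)])
```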
Figure 9: Signal-to-noise numerical inspection of the Mattis magnetizations for a diluted network with \(r=0.1\) and \(K=3\) in the hierarchical regime (at levels of pattern dilution \(d<d_{c}\), as reported in the titles): we highlight the agreement, as the saturation level is reached, between the signal-to-noise analysis (orange dots) and the value of the magnetization of the first pattern found by the statistical mechanical approach (solid red line). The dashed lines represent the Hebbian storing prescriptions at which the values of the magnetizations converge. The vertical black line depicts the critical amount of examples \(M_{\otimes}\) that must be experienced by the network to properly depict the archetypes: note that this value is systematically above, in \(M\), the point where all the bifurcations have happened, hence all the magnetizations have stabilized on their hierarchical arrangement.
As shown in Fig. 9, once a critical amount of perceived examples has been collected, this expression is in very good agreement with the estimate stemming from the numerical solution of the self-consistency equations and indeed we can finally state the last
**Theorem 4**.: _In the zero fast-noise limit (\(\beta\to+\infty\)), if the neural configuration_
\[\tilde{\mathbf{\sigma}}=\tilde{\mathbf{\sigma}}(\mathbf{\xi}) \tag{C.14}\]
_is a fixed point of the dynamics described by the sequential spin update rule_
\[\sigma_{i}^{(n+1)}=\sigma_{i}^{(n)}\mathrm{sign}\left[\beta\sigma_{i}^{(n)}h_{i}^{(n)}(\mathbf{\sigma}^{(n)})\right] \tag{C.15}\]
_where_
\[h_{i}^{(n)}(\mathbf{\sigma})=\frac{1}{N\,M^{2}(1+\rho)r^{2}}\sum_{\mu=1}^{K}\sum_{j\neq i}^{N}\sum_{a,b}^{M_{\mu},M_{\mu}}\eta_{i}^{\mu,a}\eta_{j}^{\mu,b}\sigma_{j}^{(n)}\,, \tag{C.16}\]
_then the order parameters \(n_{\mu}(\mathbf{\sigma})=[NM(1+\rho)r]^{-1}\sum_{i}^{N}\sum_{a}^{M}\eta_{i}^{\mu,a}\sigma_{i}\) must satisfy the following self-consistency equations_
\[n_{\mu}=\frac{1}{(1+\rho)}\mathbb{E}_{\mathbf{\xi}}\mathbb{E}_{(\mathbf{\eta}|\mathbf{\xi})}\Bigg{\{}\hat{\eta}^{\mu}\tilde{\sigma}(\mathbf{\xi})\,\mathrm{sign}\left[\sum_{\nu=1}^{K}n_{\nu}\hat{\eta}^{\nu}\tilde{\mathbf{\sigma}}(\mathbf{\xi})\right]\Bigg{\}}\,, \tag{C.17}\]
_where we set \(\hat{\eta}^{\mu}=(Mr)^{-1}\sum\limits_{a}^{M}\eta^{\mu,a}\)._
**Remark 4**.: _The empirical evidence that, via the early stopping criterion, we still obtain the correct solution proves a posteriori the validity of Hinton's recipe in the present setting and tacitly candidates statistical mechanics as a reference also to inspect computational shortcuts._
Proof.: The local fields \(h_{i}\) can be rewritten using the definition of \(n_{\mu}\) as
\[h_{i}^{(n)}(\mathbf{\sigma})=\sum_{\mu=1}^{K}n_{\mu}^{(n)}(\mathbf{\sigma})\hat{\eta}_{i}^{\mu}\,, \tag{C.18}\]
in this way the update rule can be recast as
\[\sigma_{i}^{(n+1)}=\sigma_{i}^{(n)}\mathrm{sign}\left[\sum_{\mu=1}^{K}n_{\mu}^{(n)}(\mathbf{\sigma})\hat{\eta}_{i}^{\mu}\sigma_{i}^{(n)}\right]. \tag{C.19}\]
Computing the value of the \(n_{\mu}\) order parameters at the \((n+1)\)-th step of the update process we get
\[\begin{split}n_{\mu}^{(n+1)}(\mathbf{\sigma})&=\frac{1}{N(1+\rho)}\sum_{i}^{N}\hat{\eta}_{i}^{\mu}\sigma_{i}^{(n+1)}\\ &=\frac{1}{N(1+\rho)}\sum_{i}^{N}\hat{\eta}_{i}^{\mu}\sigma_{i}^{(n)}\mathrm{sign}\left[\,\sum_{\nu=1}^{K}n_{\nu}^{(n)}(\mathbf{\sigma})\hat{\eta}_{i}^{\nu}\sigma_{i}^{(n)}\right]\,.\end{split} \tag{C.20}\]
If \(\tilde{\mathbf{\sigma}}(\mathbf{\xi})\) is a fixed point of our dynamics, we must have \(\tilde{\mathbf{\sigma}}^{(n+1)}\equiv\tilde{\mathbf{\sigma}}^{(n)}\) and \(n^{(n+1)}(\mathbf{\sigma})\equiv n^{(n)}(\mathbf{\sigma})\), thus (C.20) becomes
\[n_{\mu}(\mathbf{\sigma})\,=\,\frac{1}{N(1+\rho)}\sum_{i}^{N}\hat{\eta}_{i}^{\mu}\tilde{\sigma}_{i}(\mathbf{\xi})\,\mathrm{sign}\left[\sum_{\nu=1}^{K}n_{\nu}(\mathbf{\sigma})\hat{\eta}_{i}^{\nu}\tilde{\sigma}_{i}(\mathbf{\xi})\right]. \tag{C.21}\]
For large values of \(N\), the arithmetic mean coincides with the theoretical expectation, thus
\[\frac{1}{N}\sum_{i=1}^{N}g(\eta_{i})\xrightarrow[N\to\infty]{}\mathbb{E}_{\eta}\Big[g(\eta)\Big] \tag{C.22}\]
therefore, (C.21) reads as
\[n_{\mu}\,=\,\frac{1}{(1+\rho)}\mathbb{E}_{\mathbf{\xi}}\mathbb{E}_{(\mathbf{\eta}|\mathbf{\xi})}\left\{\hat{\eta}^{\mu}\tilde{\sigma}(\mathbf{\xi})\,\mathrm{sign}\left[\,\sum_{\nu=1}^{K}n_{\nu}\hat{\eta}^{\nu}\tilde{\sigma}(\mathbf{\xi})\right]\right\}\,. \tag{C.23}\]
where we used \(\mathbb{E}_{\mathbf{\eta}}=\mathbb{E}_{\mathbf{\xi}}\mathbb{E}_{(\mathbf{\eta}|\mathbf{\xi})}\).
**Corollary 2**.: _Under the hypothesis of the previous theorem, if the neural configuration coincides with the parallel configuration_
\[\tilde{\mathbf{\sigma}}(\mathbf{\xi})=\mathbf{\sigma}^{*}=\mathbf{\xi}^{1}+\sum_{\nu=2}^{K}\mathbf{\xi}^{\nu}\prod_{\rho=1}^{\nu-1}\delta\left(\mathbf{\xi}^{\rho}\right) \tag{C.24}\]
_then the order parameters \(n_{\mu}(\mathbf{\sigma})\) must satisfy the following self-consistency equation_
\[n_{\mu}\,=\,\frac{1}{(1+\rho)}\mathbb{E}_{\mathbf{\xi}}\mathbb{E}_{(\mathbf{\eta}|\mathbf{\xi})}\Bigg{\{}\hat{\eta}^{\mu}\left(\xi^{1}+\sum_{\lambda=2}^{K}\xi^{\lambda}\prod_{\rho=1}^{\lambda-1}\delta\left(\xi^{\rho}\right)\right)\,\mathrm{sign}\left[\sum_{\nu=1}^{K}n_{\nu}\hat{\eta}^{\nu}\left(\xi^{1}+\sum_{\lambda=2}^{K}\xi^{\lambda}\prod_{\rho=1}^{\lambda-1}\delta\left(\xi^{\rho}\right)\right)\right]\Bigg{\}}\,. \tag{C.25}\]
Proof.: We only have to replace in (C.23) the explicit form of \(\mathbf{\sigma}^{*}\) and we get the proof.
### Evaluation of momenta of the effective post-synaptic potential
In this section we describe the computation of the first and second momenta \(\kappa_{1,\mu}^{(1)}\) and \(\kappa_{2,\mu}^{(1)}\) used in Sec. C.1; we present only the case \(\mu=1\).
Let us start from \(\kappa_{1,\mu}^{(1)}\):
\[\kappa_{1,\mu}^{(1)}\Big{|}_{\xi_{i}^{1}=\pm 1}\coloneqq\mathbb{E}_{\xi}\mathbb{E}_{(\eta|\xi)}\left[\sigma_{i}^{*}h_{i}^{(1)}(\mathbf{\sigma}^{*})\Big{|}_{\xi_{i}^{1}=\pm 1}\right]=\frac{1}{2}\sum_{j\neq i}^{N}\frac{1}{r_{1}M_{1}^{2}(1+\rho)}\mathbb{E}_{\xi}\mathbb{E}_{(\eta|\xi)}\left[\sum_{a,b}^{M_{1},M_{1}}\eta_{i}^{1,a}\eta_{j}^{1,b}\left(\xi_{j}^{1}+\sum_{\nu=2}^{K}\xi_{j}^{\nu}\prod_{\rho=1}^{\nu-1}\delta_{\xi_{j}^{\rho},0}\right)\right];\]
since \(\mathbb{E}_{\xi}[\xi_{i}^{\mu}]=0\) the only non-zero terms are the ones with \(\mu=1\):
\[\begin{split}\kappa_{1,\mu}^{(1)}\Big{|}_{\xi_{i}^{1}=\pm 1}&\coloneqq\,\frac{1}{2}\sum_{j\neq i}^{N}\frac{1}{r_{1}M_{1}^{2}(1+\rho)}\mathbb{E}_{\xi}\left[\sum_{a,b}^{M_{1},M_{1}}r_{1}\xi_{j}^{1}\left(\xi_{j}^{1}+\sum_{\nu=2}^{K}\xi_{j}^{\nu}\prod_{\rho=1}^{\nu-1}\delta_{\xi_{j}^{\rho},0}\right)\right]\\&=\,\frac{1}{2NM_{1}^{2}r_{1}(1+\rho)}\sum_{j\neq i}^{N}\sum_{a,b}^{M_{1},M_{1}}r_{1}=\frac{1}{2(1+\rho)}\end{split}\]
where we used \(\mathbb{E}_{(\eta|\xi)}[\eta_{i}^{\mu,a}]=r\xi_{i}^{\mu}\). Moving on, we start the computation of \(\kappa_{2,\mu}^{(1)}\), due to \(\mathbb{E}_{\xi}[\xi_{i_{1}}^{\mu}\xi_{i_{1}}^{\nu}]=\delta^{\mu\nu}\), the only non-zero terms are:
\[\begin{split}\kappa_{2,\mu}^{(1)}\Big{|}_{\xi_{i}^{1}=\pm 1}&\coloneqq\mathbb{E}_{\xi}\mathbb{E}_{(\eta|\xi)}\left[\left(\sigma_{i}^{*}h_{i}^{(1)}(\mathbf{\sigma}^{*})\right)^{2}\Big{|}_{\xi_{i}^{1}=\pm 1}\right]\\&=\frac{1}{4N^{2}(1-d)^{2}}\sum_{\mu=1}^{K}\mathbb{E}_{\xi}\mathbb{E}_{(\eta|\xi)}\frac{1}{M_{\mu}^{4}r_{\mu}^{4}(1+\rho)^{2}}\sum_{k,j\neq i}^{N,N}\left(\sum_{a_{1},a_{2},b_{1},b_{2}}^{M_{\mu}}\eta_{i}^{\mu,a_{1}}\eta_{i}^{\mu,a_{2}}\eta_{j}^{\mu,b_{1}}\eta_{k}^{\mu,b_{2}}\right)\\&\quad\times\left(\xi_{j}^{1}+\sum_{\nu_{1}=2}^{K}\xi_{j}^{\nu_{1}}\prod_{\rho_{1}=1}^{\nu_{1}-1}\delta\left(\xi_{j}^{\rho_{1}}\right)\right)\left(\xi_{k}^{1}+\sum_{\nu_{2}=2}^{K}\xi_{k}^{\nu_{2}}\prod_{\rho_{2}=1}^{\nu_{2}-1}\delta\left(\xi_{k}^{\rho_{2}}\right)\right)=A_{\mu=1}+B_{\mu>1}\end{split}\]
namely, we analyze separately the case \(\mu=1\) (\(A_{\mu=1}\)) and \(\mu>1\) (\(B_{\mu>1}\)).
\[\begin{split}A_{\mu=1}&=\frac{1}{4N^{2}(1-d)^{2}}\mathbb{E}_{\xi}\mathbb{E}_{(\eta|\xi)}\frac{1}{M_{1}^{4}r_{1}^{4}(1+\rho)^{2}}\sum_{k,j\neq i}^{N,N}\left(\sum_{a_{1},a_{2},b_{1},b_{2}}^{M_{1}}\eta_{i}^{1,a_{1}}\eta_{i}^{1,a_{2}}\eta_{j}^{1,b_{1}}\eta_{k}^{1,b_{2}}\right)\left(\xi_{j}^{1}\xi_{k}^{1}\right)\\&=\frac{1}{4N^{2}(1-d)^{2}}\frac{1}{M_{1}^{4}r_{1}^{4}(1+\rho)^{2}}\mathbb{E}_{(\eta|\xi=\pm 1)}\left[\sum_{a_{1},a_{2}}^{M_{1}}\eta_{i}^{1,a_{1}}\eta_{i}^{1,a_{2}}\right]\sum_{k,j\neq i}^{N,N}\mathbb{E}_{\xi}\mathbb{E}_{(\eta|\xi)}\left[\sum_{b_{1},b_{2}}^{M_{1}}\eta_{j}^{1,b_{1}}\eta_{k}^{1,b_{2}}\right]\left(\xi_{j}^{1}\xi_{k}^{1}\right)\\&=\frac{1}{4}\frac{1}{(1+\rho)}\,;\end{split} \tag{C.26}\]
\[\begin{split}B_{\mu>1}&\coloneqq\frac{1}{4N^{2}(1-d)^{2}}\sum_{\mu>1}^{K}\mathbb{E}_{\xi}\mathbb{E}_{(\eta|\xi)}\frac{1}{M_{\mu}^{4}r_{\mu}^{4}(1+\rho)^{2}}\sum_{k,j\neq i}^{N,N}\left(\sum_{a_{1},a_{2},b_{1},b_{2}}^{M_{\mu}}\eta_{i}^{\mu,a_{1}}\eta_{i}^{\mu,a_{2}}\eta_{j}^{\mu,b_{1}}\eta_{k}^{\mu,b_{2}}\right)\\&\quad\times\left(\xi_{j}^{\mu}\prod_{\rho_{1}=1}^{\mu-1}\delta\left(\xi_{j}^{\rho_{1}}\right)\right)\left(\xi_{k}^{\mu}\prod_{\rho_{2}=1}^{\mu-1}\delta\left(\xi_{k}^{\rho_{2}}\right)\right)\\&=\frac{(1-d)^{-2}}{4N^{2}}\sum_{\mu=2}^{K}\frac{1}{M_{\mu}^{2}(1+\rho)}\sum_{k,j\neq i}^{N,N}\sum_{b_{1},b_{2}}^{M_{\mu}}\mathbb{E}_{\xi}\left[\xi_{j}^{\mu}\xi_{k}^{\mu}\left(\prod_{\rho_{1}=1}^{\mu-1}\delta\left(\xi_{j}^{\rho_{1}}\right)\right)\left(\prod_{\rho_{2}=1}^{\mu-1}\delta\left(\xi_{k}^{\rho_{2}}\right)\right)\right]\\&=\frac{1-d}{4(1+\rho)}\sum_{\mu=2}^{K}d^{2(\mu-1)}=\frac{1-d}{4(1+\rho)}\frac{d^{2}-d^{2K}}{1-d^{2}}=\frac{1}{4(1+\rho)}\frac{d^{2}-d^{2K}}{1+d}\,.\end{split} \tag{C.27}\]
Putting together Eq. (C.26) and Eq. (C.27) we reach the expression of \(\kappa_{2,\mu}^{(1)}\Big{|}_{\xi_{i}^{1}=\pm 1}\).
## Appendix D Explicit Calculations and Figures for the cases \(K=2\) and \(K=3\)
In this appendix we collect the explicit expressions of the self-consistency equations (3.10) and (3.11) (focusing only on the cases \(K=2\) and \(K=3\)), together with some figures obtained from their numerical solution.
### \(K=2\)
Fixing \(K=2\) and explicitly performing the average with respect to \(\xi\), Eqs. (3.10) and (3.11) read as
\[\begin{array}{rcl}\bar{n}_{1}&=&\frac{\bar{m}_{1}}{(1+\rho)}+\frac{\beta^{{}^{\prime}}(1-d)\rho\,\bar{n}_{1}}{(1+\rho)}\bigg{[}1-d\,\mathcal{S}_{2}(\bar{n}_{1},0)-\frac{(1-d)}{2}\mathcal{S}_{2}(\bar{n}_{1},-\bar{n}_{2})-\frac{(1-d)}{2}\mathcal{S}_{2}(\bar{n}_{1},\bar{n}_{2})\bigg{]}\\ \\ \bar{n}_{2}&=&\frac{\bar{m}_{2}}{(1+\rho)}+\frac{\beta^{{}^{\prime}}(1-d)\rho\,\bar{n}_{2}}{(1+\rho)}\bigg{[}1-d\,\mathcal{S}_{2}(0,\bar{n}_{2})-\frac{(1-d)}{2}\mathcal{S}_{2}(\bar{n}_{1},-\bar{n}_{2})-\frac{(1-d)}{2}\mathcal{S}_{2}(\bar{n}_{1},\bar{n}_{2})\bigg{]}\\ \\ \bar{m}_{1}&=&\frac{(1-d)^{2}}{2}\bigg{[}\mathcal{T}_{2}(\bar{n}_{1},\bar{n}_{2})+\mathcal{T}_{2}(\bar{n}_{1},-\bar{n}_{2})\bigg{]}+d(1-d)\mathcal{T}_{2}(\bar{n}_{1},0)\\ \\ \bar{m}_{2}&=&\frac{(1-d)^{2}}{2}\bigg{[}\mathcal{T}_{2}(\bar{n}_{1},\bar{n}_{2})-\mathcal{T}_{2}(\bar{n}_{1},-\bar{n}_{2})\bigg{]}+d(1-d)\mathcal{T}_{2}(0,\bar{n}_{2})\end{array}\] (D.1)
where we used
\[\begin{array}{rcl}\mathcal{T}_{2}(x,y)&=&\mathbb{E}_{\lambda}\tanh\left[ \beta^{{}^{\prime}}\left(x+y+\lambda\sqrt{\rho\Big{(}x^{2}+y^{2}\Big{)}} \right)\right],\\ \\ \mathcal{S}_{2}(x,y)&=&\mathbb{E}_{\lambda}\tanh^{2}\left[\beta^{{}^{\prime}} \left(x+y+\lambda\sqrt{\rho\Big{(}x^{2}+y^{2}\Big{)}}\right)\right].\end{array}\] (D.2)
Solving this set of equations numerically, we construct the plots presented in Fig. 10.
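For reference, a minimal numerical scheme for (D.1) is sketched below: the Gaussian averages in (D.2) are computed by Gauss-Hermite quadrature and the coupled equations are solved by damped fixed-point iteration. Here `bp` plays the role of \(\beta^{\prime}\) appearing in (D.1); all parameter values, the initialization and the damping factor are illustrative choices, and the branch reached by the iteration depends on the initialization.

```python
import numpy as np

# Gauss-Hermite quadrature for E_lambda[.], lambda ~ N(0,1)
nodes, w = np.polynomial.hermite_e.hermegauss(61)
w = w / w.sum()

def T2(x, y, bp, rho):
    return np.sum(w * np.tanh(bp * (x + y + nodes * np.sqrt(rho * (x**2 + y**2)))))

def S2(x, y, bp, rho):
    return np.sum(w * np.tanh(bp * (x + y + nodes * np.sqrt(rho * (x**2 + y**2)))) ** 2)

def solve_K2(d, bp, rho, iters=400, damp=0.5):
    n1 = n2 = 0.9                     # initialization near the retrieval branch
    for _ in range(iters):
        m1 = (1-d)**2/2 * (T2(n1, n2, bp, rho) + T2(n1, -n2, bp, rho)) + d*(1-d)*T2(n1, 0, bp, rho)
        m2 = (1-d)**2/2 * (T2(n1, n2, bp, rho) - T2(n1, -n2, bp, rho)) + d*(1-d)*T2(0, n2, bp, rho)
        s12 = (1-d)/2 * (S2(n1, -n2, bp, rho) + S2(n1, n2, bp, rho))
        f1 = m1/(1+rho) + bp*(1-d)*rho*n1/(1+rho) * (1 - d*S2(n1, 0, bp, rho) - s12)
        f2 = m2/(1+rho) + bp*(1-d)*rho*n2/(1+rho) * (1 - d*S2(0, n2, bp, rho) - s12)
        n1 = damp*n1 + (1-damp)*f1    # damped update of the n-order parameters
        n2 = damp*n2 + (1-damp)*f2
    return m1, m2

print(solve_K2(d=0.3, bp=100.0, rho=0.2))
```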
### \(K=3\)
Moving on to the case \(K=3\) and following the same steps as in the previous subsection, we get
\[\begin{split}\bar{n}_{1}&=\frac{\bar{m}_{1}}{(1+\rho)}+\frac{\beta^{{}^{\prime}}(1-d)\rho\,\bar{n}_{1}}{(1+\rho)}\bigg{\{}1-\frac{d(1-d)}{2}\left[\mathcal{S}_{3}(\bar{n}_{1},\bar{n}_{2},0)+\mathcal{S}_{3}(\bar{n}_{1},0,\bar{n}_{3})+\mathcal{S}_{3}(\bar{n}_{1},-\bar{n}_{2},0)+\mathcal{S}_{3}(\bar{n}_{1},0,-\bar{n}_{3})\right]\\&\quad-d^{2}\mathcal{S}_{3}(\bar{n}_{1},0,0)-\frac{(1-d)^{2}}{4}\left[\mathcal{S}_{3}(\bar{n}_{1},\bar{n}_{2},\bar{n}_{3})+\mathcal{S}_{3}(\bar{n}_{1},\bar{n}_{2},-\bar{n}_{3})+\mathcal{S}_{3}(\bar{n}_{1},-\bar{n}_{2},\bar{n}_{3})+\mathcal{S}_{3}(\bar{n}_{1},-\bar{n}_{2},-\bar{n}_{3})\right]\bigg{\}}\,,\\ \bar{m}_{1}&=\frac{(1-d)^{3}}{4}\bigg{[}\mathcal{T}_{3}(\bar{n}_{1},\bar{n}_{2},\bar{n}_{3})+\mathcal{T}_{3}(\bar{n}_{1},\bar{n}_{2},-\bar{n}_{3})+\mathcal{T}_{3}(\bar{n}_{1},-\bar{n}_{2},\bar{n}_{3})+\mathcal{T}_{3}(\bar{n}_{1},-\bar{n}_{2},-\bar{n}_{3})\bigg{]}\\&\quad+d\frac{(1-d)^{2}}{2}\bigg{[}\mathcal{T}_{3}(\bar{n}_{1},\bar{n}_{2},0)+\mathcal{T}_{3}(\bar{n}_{1},0,\bar{n}_{3})+\mathcal{T}_{3}(\bar{n}_{1},-\bar{n}_{2},0)+\mathcal{T}_{3}(\bar{n}_{1},0,-\bar{n}_{3})\bigg{]}+d^{2}(1-d)\mathcal{T}_{3}(\bar{n}_{1},0,0)\,,\end{split}\] (D.3)
where we used
\[\begin{array}{rcl}\mathcal{T}_{3}(x,y,z)&=&\mathbb{E}_{\lambda}\tanh\left[ \beta^{{}^{\prime}}\left(x+y+z+\lambda\sqrt{\rho\Big{(}x^{2}+y^{2}+z^{2}\Big{)} }\right)\right]\,,\\ \\ \mathcal{S}_{3}(x,y,z)&=&\mathbb{E}_{\lambda}\tanh^{2}\left[\beta^{{}^{\prime}} \left(x+y+z+\lambda\sqrt{\rho\Big{(}x^{2}+y^{2}+z^{2}\Big{)}}\right)\right]\,. \end{array}\] (D.4)
In order to lighten the presentation, we report only the expressions of \(\bar{m}_{1}\) and \(\bar{n}_{1}\); the related expressions of \(\bar{m}_{2}\) (\(\bar{m}_{3}\)) and \(\bar{n}_{2}\) (\(\bar{n}_{3}\)) can be obtained through the simple substitutions \(\bar{m}_{1}\longleftrightarrow\bar{m}_{2}(\bar{m}_{3})\) and \(\bar{n}_{1}\longleftrightarrow\bar{n}_{2}(\bar{n}_{3})\) in (D.3). The numerical solution of the previous set of equations is depicted in Fig. 11.
## Appendix E Proofs
### Proof of Theorem 1
In this subsection we show the proof of Theorem 1. To this aim, we first state the following
**Lemma 1**.: _The \(t\) derivative of interpolating pressure is given by_
\[\frac{d\mathcal{A}^{(sup,unsup)}_{N,K,\beta,d,M,r}}{dt}=\frac{\beta}{2(1-d)}(1+\rho)\sum_{\mu=1}^{K}\mathbb{E}\,\omega_{t}[n_{\mu}^{2}]-\sum_{\mu=1}^{K}\psi_{\mu}\mathbb{E}\,\omega_{t}[n_{\mu}]. \tag{E.1}\]
Since the computation is lengthy but straightforward, we omit it.
**Proposition 3**.: _In the low load regime and in the thermodynamic limit, the distribution of the generic order parameter \(X\) is centred at its expectation value \(\bar{X}\) with vanishing fluctuations. Thus, being
Figure 10: Numerical solution of the system of equations (D.1) for \(K=2\): we plot the behaviour of the magnetization \(\tilde{m}\) versus the degree of dilution \(d\) for fixed \(r=0.2\) and different values of \(\beta\) (from right to left \(\beta=1000,6.66,3.33\)) and \(\rho\) (from top to bottom \(\rho=0.8,0.2,0.0\)). We stress that for \(\rho=0.0\) we recover the standard diluted model presented in Fig.1.
\(\Delta X=X-\bar{X}\), in the thermodynamic limit, the following relation holds_
\[\mathbb{E}\,\omega_{t}[(\Delta X)^{2}]\xrightarrow[N\to+\infty]{}0\,. \tag{E.2}\]
**Remark 5**.: _We stress that afterwards we use the relations_
\[\mathbb{E}\,\omega_{t}[(n_{\mu}-\bar{n}_{\mu})^{2}]=\mathbb{E}\,\omega_{t}[n_{\mu}^{2}]-2\,\bar{n}_{\mu}\mathbb{E}\,\omega_{t}[n_{\mu}]+\bar{n}_{\mu}^{2}\,, \tag{E.3}\]
_which can be computed by brute force via Newton's binomial._
_Now, using these relations, if we fix the constants as_
\[\psi_{\mu}=\frac{\beta}{1-d}(1+\rho)\bar{n}_{\mu} \tag{E.4}\]
_in the thermodynamic limit, due to Proposition 3, the expression of derivative w.r.t. \(t\) becomes_
\[\frac{d\mathcal{A}^{(sup,unsup)}_{K,\beta,d,M,r}}{dt}=-\frac{\beta}{2(1-d)}(1+\rho)\sum_{\mu=1}^{K}\bar{n}_{\mu}^{2}. \tag{E.5}\]
Proof.: Let us start from the finite-size-\(N\) expression. We apply the Fundamental Theorem of Calculus:
\[\mathcal{A}^{(sup,unsup)}_{N,K,\beta,d,M,r}=\mathcal{A}^{(sup,unsup)}_{N,K,\beta,d,M,r}(t=1)=\mathcal{A}^{(sup,unsup)}_{N,K,\beta,d,M,r}(t=0)+\int_{0}^{1}\left.\partial_{s}\mathcal{A}^{(sup,unsup)}_{N,K,\beta,d,M,r}(s)\right|_{s=t}dt. \tag{E.6}\]
Figure 11: Numerical solution of the system of equations (D.3) for \(K=3\): we plot the behavior of the magnetization \(\bar{m}\) versus the degree of dilution \(d\) for fixed \(r=0.2\) and different values of \(\beta\) (from left to right \(\beta=1000,6.66,3.33\)) and \(\rho\) (from top to bottom \(\rho=0.8,0.2,0.0\)).
We have already computed the derivative w.r.t. \(t\) in Eq. (E.1). It only remains to calculate the one-body term:
\[\mathcal{Z}^{(sup,unsup)}_{N,K,\beta,d,M,r}(t=0)=\sum_{\{\sigma\}}\exp\Bigg{[}\sum_{i=1}^{N}\left(\sum_{\mu=1}^{K}\frac{\psi_{\mu}}{2(1+\rho)}\hat{\eta}^{\mu}+J\xi^{\mu}\right)\sigma_{i}\Bigg{]}. \tag{E.7}\]
Using the definition of the quenched statistical pressure we have
\[\begin{split}\mathcal{A}^{(sup,unsup)}_{K,\beta,d,M,r}(J,t=0)&=\mathbb{E}\ln\left[2\cosh\left(\sum_{\mu=1}^{K}\frac{\psi_{\mu}}{2(1+\rho)}\hat{\eta}^{\mu}+J\xi^{\mu}\right)\right]\\&=\mathbb{E}\left\{\ln 2\cosh\left[\frac{\beta}{1-d}\sum_{\mu=1}^{K}\bar{n}_{\mu}\hat{\eta}^{\mu}+J\sum_{\mu=1}^{K}\xi^{\mu}\right]\right\}\end{split} \tag{E.8}\]
where \(\mathbb{E}=\mathbb{E}_{\xi}\mathbb{E}_{(\eta|\xi)}\). Finally, plugging (E.8) and (E.5) into (E.6), we reach the thesis.
### Proof of Proposition 1
In this subsection we show the proof of Proposition 1.
Proof.: For large data-sets, using the Central Limit Theorem we have
\[\hat{\eta}^{\mu}\sim\xi^{\mu}\left(1+\sqrt{\rho}\;Z_{\mu}\right)\,, \tag{E.9}\]
where \(Z_{\mu}\) is a standard Gaussian variable \(Z_{\mu}\sim\mathcal{N}(0,1)\). Replacing Eq. (E.9) in the self-consistency equation for \(\bar{n}_{\mu}\) and applying Stein's lemma9
Footnote 9: This lemma, also known as Wick’s theorem, applies to standard Gaussian variables, say \(J\sim\mathcal{N}(0,1)\), and states that, for a generic function \(f(J)\) for which the two expectations \(\mathbb{E}\left(Jf(J)\right)\) and \(\mathbb{E}\left(\partial_{J}f(J)\right)\) both exist, then
\[\mathbb{E}\left(Jf(J)\right)=\mathbb{E}\left(\frac{\partial f(J)}{\partial J}\right)\,. \tag{E.10}\]
in order to recover the expression for \(\bar{m}_{\mu}\), we get the large data-set equation for \(\bar{n}_{\mu}\) reported in the main text.
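Stein's lemma itself is easy to validate by Monte Carlo; a minimal sketch (with \(\tanh\) as an arbitrary smooth test function):

```python
import numpy as np

rng = np.random.default_rng(4)
J = rng.standard_normal(2_000_000)

f = np.tanh
df = lambda x: 1 - np.tanh(x) ** 2

# Stein's lemma (E.10): E[J f(J)] = E[f'(J)] for J ~ N(0,1)
print(np.mean(J * f(J)), np.mean(df(J)))  # the two estimates agree
```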
We will use the relation
\[\mathbb{E}_{\lambda_{\mu}}\left[F\left(a+\sum_{\mu=1}^{K}b_{\mu}\lambda_{\mu}\right)\right]=\mathbb{E}_{Z}\left[F\left(a+Z\sqrt{\sum_{\mu=1}^{K}b_{\mu}^{2}}\right)\right]\,, \tag{E.11}\]
where \(\lambda_{\mu}\) and \(Z\) are i.i.d. standard Gaussian variables. Doing so, we obtain
\[g(\beta,\boldsymbol{\xi},Z,\bar{n})=\beta^{{}^{\prime}}\sum_{\nu=1}^{K}\bar{n}_{\nu}\xi^{\nu}+\beta^{{}^{\prime}}\sqrt{\rho}\sum_{\nu=1}^{K}Z_{\nu}\bar{n}_{\nu}\xi^{\nu}=\beta^{{}^{\prime}}\left(\sum_{\nu=1}^{K}\bar{n}_{\nu}\xi^{\nu}+Z\sqrt{\rho\sum_{\nu=1}^{K}\bar{n}_{\nu}^{2}\left(\xi^{\nu}\right)^{2}}\right)\,, \tag{E.12}\]
thus we reach the thesis.
**Corollary 3**.: _The self-consistency equations in the large data-set regime and zero-noise limit are_
\[\bar{m}_{\mu}\,=\,\mathbb{E}_{\xi}\left\{\operatorname{erf}\left[\left(\sum_{\nu=1}^{K}\bar{m}_{\nu}\xi^{\nu}\right)\left(2\rho\sum_{\nu=1}^{K}\bar{m}_{\nu}^{2}\left(\xi^{\nu}\right)^{2}\right)^{-1/2}\right]\xi^{\mu}\right\}. \tag{E.13}\]
Proof.: In order to lighten the notation we rename
\[C=\tanh^{2}\left[g(\beta,\boldsymbol{\xi},Z,\bar{\boldsymbol{n}})\right]\,. \tag{E.14}\]
We start by assuming that the following limit is finite,
\[\lim_{\beta^{{}^{\prime}}\to\infty}\beta^{{}^{\prime}}(1-C)=\delta C\in\mathbb{R} \tag{E.15}\]
and we stress that as \(\beta^{{}^{\prime}}\to\infty\) we have \(C\to 1\). As a consequence, the following reparametrization is found to be useful,
\[C=1-\frac{\delta C}{\beta^{{}^{\prime}}}\quad\text{as}\quad\beta^{{}^{\prime}}\to\infty. \tag{E.16}\]
Therefore, as \(\beta^{{}^{\prime}}\to\infty\), it yields
\[\bar{n}_{\mu}=\frac{\bar{m}_{\mu}}{1+\rho-\rho\,\delta C\,(1-d)}\,; \tag{E.17}\]
to reach this result, we have also used the relation
\[\mathbb{E}_{z}\,\text{sign}[A+Bz]=\text{erf}\left[\frac{A}{\sqrt{2}B}\right]\,, \tag{E.18}\]
where \(z\) is a standard Gaussian variable \(\mathcal{N}(0,1)\), together with the truncated expression \(\bar{n}_{\mu}=\bar{m}_{\mu}/(1+\rho)\) for the first of the self-consistency equations (3.10).
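The Gaussian identity (E.18) can be checked numerically as well (the values of \(A\) and \(B>0\) below are illustrative):

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(5)
A, B = 0.7, 1.3
z = rng.standard_normal(1_000_000)

# E_z sign[A + B z] versus erf[A / (sqrt(2) B)], cf. (E.18)
print(np.mean(np.sign(A + B * z)), erf(A / (sqrt(2) * B)))  # the two values agree
```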
This research has been supported by Ministero degli Affari Esteri e della Cooperazione Internazionale (MAECI) via the BULBUL grant (Italy-Israel), CUP Project n. F85F21006230001, and has received financial support from the Simons Foundation (grant No. 454949, G. Parisi) and ICSC - Italian Research Center on High Performance Computing, Big Data and Quantum Computing, funded by European Union - NextGenerationEU.
Further, this work has been partly supported by The Alan Turing Institute through the Theory and Methods Challenge Fortnights event _Physics-informed machine learning_, which took place on 16-27 January 2023 at The Alan Turing Institute headquarters.
E.A. acknowledges financial support from Sapienza University of Rome (RM120172B8066CB0).
E.A., A.A. and A.B. acknowledge GNFM-INdAM (Gruppo Nazionale per la Fisica Matematica, Istituto Nazionale d'Alta Matematica); A.A. further acknowledges UniSalento for financial support via PhD-AI, and A.B. further acknowledges the PRIN-2022 project _Statistical Mechanics of Learning Machines: from algorithmic and information-theoretical limits to new biologically inspired paradigms_.
|
2309.02138 | Generalized Simplicial Attention Neural Networks | The aim of this work is to introduce Generalized Simplicial Attention Neural
Networks (GSANs), i.e., novel neural architectures designed to process data
defined on simplicial complexes using masked self-attentional layers. Hinging
on topological signal processing principles, we devise a series of
self-attention schemes capable of processing data components defined at
different simplicial orders, such as nodes, edges, triangles, and beyond. These
schemes learn how to weight the neighborhoods of the given topological domain
in a task-oriented fashion, leveraging the interplay among simplices of
different orders through the Dirac operator and its Dirac decomposition. We
also theoretically establish that GSANs are permutation equivariant and
simplicial-aware. Finally, we illustrate how our approach compares favorably
with other methods when applied to several (inductive and transductive) tasks
such as trajectory prediction, missing data imputation, graph classification,
and simplex prediction. | Claudio Battiloro, Lucia Testa, Lorenzo Giusti, Stefania Sardellitti, Paolo Di Lorenzo, Sergio Barbarossa | 2023-09-05T11:29:25Z | http://arxiv.org/abs/2309.02138v1 | # Generalized Simplicial Attention Neural Networks
###### Abstract
The aim of this work is to introduce Generalized Simplicial Attention Neural Networks (GSANs), i.e., novel neural architectures designed to process data defined on simplicial complexes using masked self-attentional layers. Hinging on topological signal processing principles, we devise a series of self-attention schemes capable of processing data components defined at different simplicial orders, such as nodes, edges, triangles, and beyond. These schemes learn how to weight the neighborhoods of the given topological domain in a task-oriented fashion, leveraging the interplay among simplices of different orders through the Dirac operator and its Dirac decomposition. We also theoretically establish that GSANs are permutation equivariant and simplicial-aware. Finally, we illustrate how our approach compares favorably with other methods when applied to several (inductive and transductive) tasks such as trajectory prediction, missing data imputation, graph classification, and simplex prediction.
Topological signal processing, attention networks, topological deep learning, neural networks, simplicial complexes.
## I Introduction
Over the past few years, the rapid and expansive evolution of deep learning techniques has significantly enhanced the state-of-the-art in numerous learning tasks. From Feed-Forward [2] to Transformer [3], via Convolutional [4] and Recurrent [5] Neural Networks, increasingly sophisticated architectures have driven substantial advancements from both theoretical and practical standpoints. In today's world, data defined on irregular domains (e.g., graphs) are ubiquitous, with applications spanning social networks, recommender systems, cybersecurity, sensor networks, and natural language processing. Since their introduction [6, 7], Graph Neural Networks (GNNs) have demonstrated remarkable results in learning tasks involving data defined over a graph domain. Here, the versatility of neural networks is combined with prior knowledge about data relationships, expressed in terms of graph topology. The literature on GNNs is extensive, with various approaches explored, primarily grouped into spectral [8, 9] and non-spectral methods [10, 11, 12]. In a nutshell, the idea is to learn from data defined over graphs by computing a principled representation of node features through local aggregation with the information gathered from neighbors, defined by the underlying graph topology. This simple yet powerful concept has led to exceptional performance in many tasks such as node or graph classification [9, 13, 10] and link prediction [14], to name a few. At the same time, the introduction of attention mechanisms has significantly enhanced the performance of deep learning techniques. Initially introduced to handle sequence-based tasks [15, 16], these mechanisms allow for variable-sized inputs and focus on the most relevant parts of them. Attention-based models (including Transformers) have a wide range of applications, from learning sentence representations [17] to machine translation [15], from machine reading [18] to multi-label image classification [19], achieving state-of-the-art results in many of these tasks.
Pioneering works have generalized attention mechanisms to data defined over graphs [13, 20, 21]. However, despite their widespread use, graph-based representations can only account for pairwise interactions. As a result, graphs may not fully capture all the information present in complex interconnected systems, where interactions cannot be reduced to simple pairwise relationships. This is particularly evident in biological networks, where multi-way links among complex substances, such as genes, proteins, or metabolites exist [22]. Recent works on Topological Signal Processing (TSP) [23, 24, 25, 26, 27] have shown the advantages of learning from data defined on higher-order complexes, such as simplicial or cell complexes. These topological structures possess a rich algebraic description and can readily encode multi-way relationships hidden within the data. This has sparked interest in developing (deep) neural network architectures capable of handling data defined on such complexes, leading to the emergence of the field of Topological Deep Learning [28, 29]. In the sequel, we review the main topological neural network architectures, with emphasis to those built to process data on simplicial complexes (SCs).
**Related Works.** Recently, several neural architectures for simplicial data processing have been proposed. In [30], the authors introduced the concept of simplicial convolution, which was then exploited to build a principled simplicial neural network architecture that generalizes GNNs by leveraging on higher-order Laplacians. However, this approach does not enable separate processing for the lower and upper neighborhoods of a simplicial complex. Then, in [31], message passing neural networks (MPNNs) were adapted to simplicial complexes [32], and a Simplicial Weisfeiler-Lehman (SWL) coloring procedure was introduced to differentiate non-isomorphic SCs. The aggregation and updating functions in this model are capable of processing data exploiting lower and upper neighborhoods, and the interaction of simplices of different orders. The architecture in [31] can also be viewed as a generalization of the architectures in [33] and [34], with a specific aggregation function provided by simplicial filters [35]. In [36], recurrent MPNNs architectures were considered for flow interpolation and graph classification tasks. The works in [37, 38] introduced simplicial convolutional neural networks architectures that explicitly enable multi-hop processing based on upper and lower neighborhoods. These architectures also |
2305.19801 | Predicting protein stability changes under multiple amino acid
substitutions using equivariant graph neural networks | The accurate prediction of changes in protein stability under multiple amino
acid substitutions is essential for realising true in-silico protein re-design.
To this purpose, we propose improvements to state-of-the-art Deep learning (DL)
protein stability prediction models, enabling first-of-a-kind predictions for
variable numbers of amino acid substitutions, on structural representations, by
decoupling the atomic and residue scales of protein representations. This was
achieved using E(3)-equivariant graph neural networks (EGNNs) for both atomic
environment (AE) embedding and residue-level scoring tasks. Our AE embedder was
used to featurise a residue-level graph, then trained to score mutant stability
($\Delta\Delta G$). To achieve effective training of this predictive EGNN we
have leveraged the unprecedented scale of a new high-throughput protein
stability experimental data-set, Mega-scale. Finally, we demonstrate the
immediately promising results of this procedure, discuss the current
shortcomings, and highlight potential future strategies. | Sebastien Boyer, Sam Money-Kyrle, Oliver Bent | 2023-05-30T14:48:06Z | http://arxiv.org/abs/2305.19801v1 | Predicting protein stability changes under multiple amino acid substitutions using Equivariant Graph Neural Networks
###### Abstract
The accurate prediction of changes in protein stability under multiple amino acid substitutions is essential for realising true in-silico protein re-design. To this purpose, we propose improvements to state-of-the-art Deep learning (DL) protein stability prediction models, enabling first-of-a-kind predictions for variable numbers of amino acid substitutions, on structural representations, by decoupling the atomic and residue scales of protein representations. This was achieved using E(3)-equivariant graph neural networks (EGNNs) for both atomic environment (AE) embedding and residue-level scoring tasks. Our AE embedder was used to featurise a residue-level graph, then trained to score mutant stability (\(\Delta\Delta G\)). To achieve effective training of this predictive EGNN we have leveraged the unprecedented scale of a new high-throughput protein stability experimental dataset, Mega-scale. Finally, we demonstrate the immediately promising results of this procedure, discuss the current shortcomings, and highlight potential future strategies.
## 1 Introduction
Protein stability is a crucial component of protein evolution (Godoy-Ruiz et al., 2004); it lies at the root of our understanding of many human diseases (Peng & Alexov, 2016) and plays a major role in protein design and engineering (Qing et al., 2022). Protein stability is typically represented as the change in free energy, \(\Delta G\), between the unfolded and folded states (Matthews, 1993) and is a global feature of a protein. A negative \(\Delta G\) of folding indicates an energetically favourable protein conformation; the greater the magnitude of a negative \(\Delta G\), the more stable the conformation. Mutations can alter the favourability of a protein fold, with even single amino acid substitution events potentially disturbing the native conformation of a protein (Stefl et al., 2013). For example, a substitution from threonine to methionine in 12/15-Lipoxygenase is a cited potential cause of hereditary cardiovascular diseases (Schurmann et al., 2011); the mutation disrupts a chain of stabilising hydrogen bridges, causing structural instability and reducing catalytic activity. The mutational effect on protein stability is the difference in free energy of folding between the wild type (WT) and mutant proteins, \(\Delta\Delta G\) (Matthews, 1993). Mutagenic effects on protein stability can be determined experimentally using thermostability assays, with \(\Delta\Delta G\) being inferred from differences between WT and mutant denaturation curves (Bommarius et al., 2006). However, these assays are laborious and expensive; to assess mutational effects at a higher throughput rate, researchers have turned to computational methods. The established precedent for computational modelling of mutant stability is empirical physics-informed energy functions, which rely on physical calculations to infer \(\Delta\Delta G\) (Marabotti et al., 2021). For example, Rosetta (Kellogg et al., 2011; Das & Baker, 2008) employs Monte Carlo runs to sample multiple protein conformations and predicts folding free energy from physical characteristics. These characteristics of Lennard-Jones interactions, inferred solvation potentials, hydrogen bonding and electrostatics are common to other packages such as FoldX (Schymkowitz et al., 2005). Molecular Dynamics software, such as Amber (Case et al., 2005), utilises these characteristics in force fields to explore protein conformational landscapes and calculate potential energies by resolving classical physics calculations.
These physics-based models can provide scoring for both protein stability or mutation-induced change of protein stability, however, they are still not fully scalable to large data-sets given the computational expense necessary for each simulated prediction. For example, conformation sampling via Monte Carlo simulations in Rosetta requires extensive compute time. On the other hand, machine learning-based predictors and, more recently, Deep learning (DL) approaches have shown improved scalability and, in some cases, comparable accuracy with physics-based models (Iqbal et al., 2021). This work will continue to explore the advantages of an entirely data-driven DL approach for predicting protein stability changes under multiple amino acid substitutions.
## 2 Related Work
In moving away from established molecular modelling approaches, the machine learning methods EASE-MM (Folkman et al., 2016) and SAAFEC-SEQ (Li et al., 2021) leverage 1D sequences and protein evolutionary information to predict \(\Delta\Delta G\) with decision trees and Support Vector Machines, respectively, while ACDC-NN-Seq (Pancotti et al., 2021) explored DL by applying Convolutional neural networks (CNNs) to protein sequences. As sequence data is more widely available than experimental structures, the insight of these models into 3D structural characteristics, such as free energy of folding, is probably limited by their 1D representation. PON-tstab (Yang et al., 2018) implemented a combination of sequence- and structure-based features in tabular format with random forests. DeepDDG (Cao et al., 2019) relies on tabular empirical features obtained from structure, such as solvent-accessible surface area, to predict stability with neural networks. However, tabular features engineered from structure are a restrictive depiction of protein geometry; graph-based approaches provide a promising alternative representation, with encouraging results when applied to protein structure prediction (Delaunay et al., 2022).
In particular, three DL models, ThermoNet (Li et al., 2020), RASP (Blaabjerg et al., 2022) and ProS-GNN (Wang et al., 2021), have combined the two physical scales involved in understanding protein geometry: the atomic scale and the residue scale of interactions. Both ThermoNet and RASP learn a representation of the atomic environment (AE) around the pertinent (mutated) residue using 3D CNNs before passing this representation through a Multi-layer perceptron (MLP) to score the mutational effect on protein stability. While obvious similarities exist between these two models, they are very different at their core. ThermoNet determines the AE representations on the fly, utilising both WT and simulated mutant structures as inputs for the MLP in the same loop. RASP initially trains a self-supervised AE embedder on a masked amino acid task, then uses this embedding as input features for a coupled WT and mutant amino acid encoding to feed an MLP trained on stability scoring. Moreover, ThermoNet is trained on a rather small experimental data-set (n \(\sim 3,500\)), while RASP is trained on a large data-set (n \(\sim 10^{6}\) for the AE embedder and n \(\sim 2.5\times 10^{5}\) for scoring) of Rosetta-simulated scores, making it an emulator of the physics-based score. The third DL approach, ProS-GNN (Wang et al., 2021), replaced the CNN atomic environment embedding layer of ThermoNet with GNNs. ProS-GNN also shares with ThermoNet and other DL models like ACDC-NN the property of being antisymmetric to reversed mutations. The aforementioned state-of-the-art stability prediction models in the literature share the following caveats:
1. Their underlying architecture allows only single amino acid substitutions.
2. Big experimental data-sets with the necessary structural data for these models are lacking.
Indeed, RASP is constrained to predicting on a fixed number of amino acid substitutions by the MLP scorer, which requires a fixed input shape; additional mutations increase the dimensions of the AE embedding to an incompatible size. In ThermoNet and ProS-GNN, the impossibility of decoupling the atomic and residue scales prohibits multiple amino acid substitutions; the required size of voxel or graph for multiple, even proximal, substitutions would rapidly become unmanageable.
A solution for both caveats exists. The self-supervised AE embedder of RASP already decouples the atomic and residue scales, and GNNs allow for some flexibility in graph topology, enabling consideration of multiple residues rather than only the embedding of the residue of interest. Integrating the RASP AE embedder with a graph-based approach would enable scoring of multiple substitution events. On the experimental data front, a new data-set, Mega-scale (Tsuboyama et al., 2022), based on high-throughput protein stability measurements, was published in late 2022. With over 600,000 data points of single and double mutants spanning over 300 WT structures, it provides a consistent
(in terms of experimental set-up) and large data-set, with the express purpose of training models to score the effects of single or double mutations on protein stability. In light of these observations, we contribute a JAX-implemented solution for resolving these constraints using two E(3) equivariant graph neural networks (EGNNs) (Garcia Satorras et al., 2021). The first EGNN is trained in a self-supervised way. The second is trained on the Mega-scale data set for scoring mutational effects on protein stability.
## 3 Method
### Atomic Environment (AE) Embedder
We followed the RASP protocol to design and train our AE embedder in a self-supervised masked amino acid manner, with two key differences:
1. We used an EGNN (Figure 2) with its own set of graph features describing the AE (Figure 1) instead of a CNN.
2. We used a macro averaged F1 score as our metric on the validation set to select model parameters from the highest-performing epoch.
The training and the validation sets are from the same data-set described in RASP (Blaabjerg et al., 2022). Our EGNN was built with layers described in Garcia Satorras et al. (2021), with an average message aggregation strategy (Equation 1). Recalling from Garcia Satorras et al. (2021) that \(\mathbf{h}^{l}\) are node embeddings at layer \(l\) and \(\mathbf{x}^{l}\) are coordinate embeddings at layer \(l\) (atom coordinates), we define the equivariant graph convolutional layer (EGCL) as they do, up to the \(\frac{1}{N_{i}^{neighbors}}\) coefficient, which re-scales the different messages according to the number of neighbors of the node of interest (hence the average). As in their implementation, \(\phi_{e},\phi_{x},\phi_{h}\) are MLPs, \(a_{i,j}\) defines edge features between nodes \(i\) and \(j\), and finally, \(\mathfrak{N}(i)\) is the set of neighbors of node \(i\).
\[\begin{split}\mathbf{m}_{i,j}&=\phi_{e}\left(\mathbf{h}^{l}_{i},\mathbf{h}^{l}_{j},\left\|\mathbf{x}^{l}_{i}-\mathbf{x}^{l}_{j}\right\|^{2},a_{i,j}\right)\\ \mathbf{x}^{l+1}_{i}&=\mathbf{x}^{l}_{i}+\frac{1}{N_{i}^{neighbors}}\sum_{j\neq i,\,j\in\mathfrak{N}(i)}\left(\mathbf{x}^{l}_{i}-\mathbf{x}^{l}_{j}\right)\phi_{x}(\mathbf{m}_{i,j})\\ \mathbf{m}_{i}&=\frac{1}{N_{i}^{neighbors}}\sum_{j\neq i,\,j\in\mathfrak{N}(i)}\mathbf{m}_{i,j}\\ \mathbf{h}^{l+1}_{i}&=\phi_{h}\left(\mathbf{h}^{l}_{i},\mathbf{m}_{i}\right)\end{split} \tag{1}\]
Node embeddings are passed sequentially through each of the \(N\) layers of the network. After each layer, the node embeddings are copied, aggregated with an average graph-level readout (global mean pooling), and saved. Finally, all the graph representations derived from the different layers are concatenated (Equation 2) to form the graph-level embedding \(\mathbf{h}_{G}\) for the AE sub-graph \(G\) of a residue, and processed through an MLP toward the desired prediction shape.
\[\mathbf{h}_{G}=\texttt{Concat}\left(\texttt{Average}(\{\mathbf{h}^{l}_{i}\,|\,i\in G\})\,\middle|\,l=0,\ldots,N\right) \tag{2}\]
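To make the layer concrete, the following is a minimal sketch of the EGCL with average message aggregation (Equation 1) and the layer-wise mean-pool readout (Equation 2), written in JAX with a dense adjacency for clarity; the `mlp` helper, activations and layer widths are illustrative assumptions, not the exact implementation.

```python
# Minimal EGCL sketch (Equation 1) with average message aggregation, plus the
# layer-wise mean-pool readout (Equation 2). Dense (n, n) adjacency for clarity.
import jax
import jax.numpy as jnp

def mlp(params, x):
    # Small feed-forward net; `params` is a list of (W, b) pairs (assumption).
    for W, b in params[:-1]:
        x = jax.nn.silu(x @ W + b)
    W, b = params[-1]
    return x @ W + b

def egcl(params, h, x, edge_attr, adj):
    # h: (n, d) node embeddings; x: (n, 3) coordinates;
    # edge_attr: (n, n, e) edge features; adj: (n, n) 0/1 mask.
    n = h.shape[0]
    d2 = jnp.sum((x[:, None, :] - x[None, :, :]) ** 2, -1, keepdims=True)
    pair = jnp.concatenate(
        [jnp.repeat(h[:, None, :], n, 1),      # h_i broadcast over j
         jnp.repeat(h[None, :, :], n, 0),      # h_j broadcast over i
         d2, edge_attr], axis=-1)
    m_ij = mlp(params["phi_e"], pair) * adj[..., None]   # zero out non-edges
    deg = jnp.clip(adj.sum(1, keepdims=True), 1.0)       # N_i^neighbors
    coef = mlp(params["phi_x"], m_ij)                    # (n, n, 1) if phi_x ends in width 1
    x_new = x + ((x[:, None, :] - x[None, :, :]) * coef
                 * adj[..., None]).sum(1) / deg          # equivariant coordinate update
    m_i = m_ij.sum(1) / deg                              # averaged messages
    h_new = mlp(params["phi_h"], jnp.concatenate([h, m_i], -1))
    return h_new, x_new

def readout(h_per_layer):
    # Equation 2: concatenate the graph mean of every layer's node embeddings.
    return jnp.concatenate([h.mean(0) for h in h_per_layer])
```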
For building the AE graph, we followed part of the RASP protocol:
* We considered only atoms within a 9 Å radius of the C\({}_{\alpha}\) of interest.
* We removed the atoms that were part of the residue of interest.
Nodes are atoms, featurised with a single number (the atomic number). Edges are drawn between two nodes if they are within a 4 Å distance. Edges are featurised by a binary label distinguishing whether the edge is intra- or inter-residue, as well as two numbers encoding a notion of the typical distance between the two atoms linked by the edge:
* The sum, over the two atoms involved in the edge, of their covalent radii.
* The sum, over the two atoms involved in the edge, of their Van der Waals radii.
In this particular instance of the model, distances between atoms are not directly encoded as an edge feature; given the use of an EGCL, this distance is present by design as a displacement vector (Equation 1) rather than the usual scalar distance, hence the necessity of E(3) equivariance. Finally, we trained the model on a classification task consisting of retrieving the amino acid around which the AE has been built. Model parameters were selected from the epoch with the best macro F1 score on the validation set. A detailed description of the model in terms of its hyper-parameters is provided in the Appendix (Table 6).
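A sketch of the AE sub-graph construction following these rules is given below; the helper signature and the (truncated) covalent/Van der Waals radius tables are illustrative assumptions, not the values used in training.

```python
# Sketch of AE sub-graph construction following the rules above. The radius
# tables are truncated, illustrative values (Angstrom), not a full periodic table.
import numpy as np

COVALENT = {1: 0.31, 6: 0.76, 7: 0.71, 8: 0.66, 16: 1.05}   # by atomic number
VDW      = {1: 1.20, 6: 1.70, 7: 1.55, 8: 1.52, 16: 1.80}

def build_ae_graph(coords, atomic_nums, res_ids, ca_idx, r_env=9.0, r_edge=4.0):
    # coords: (n_atoms, 3); res_ids: residue index per atom (assumed available).
    dist_to_ca = np.linalg.norm(coords - coords[ca_idx], axis=1)
    keep = (dist_to_ca <= r_env) & (res_ids != res_ids[ca_idx])  # drop own residue
    xyz, z, res = coords[keep], atomic_nums[keep], res_ids[keep]
    nodes = z[:, None].astype(float)              # single node feature: atomic number
    src, dst, feats = [], [], []
    for i in range(len(xyz)):
        for j in range(i + 1, len(xyz)):
            if np.linalg.norm(xyz[i] - xyz[j]) <= r_edge:
                intra = float(res[i] == res[j])   # intra- vs inter-residue label
                cov = COVALENT[int(z[i])] + COVALENT[int(z[j])]  # summed covalent radii
                vdw = VDW[int(z[i])] + VDW[int(z[j])]            # summed VdW radii
                src += [i, j]; dst += [j, i]      # undirected -> both directions
                feats += [[intra, cov, vdw]] * 2
    return nodes, xyz, np.array(src), np.array(dst), np.array(feats)
```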
### Mutant Stability Scoring
We used the same model architecture as presented for the AE embedding (Figure 2), for the regression task of predicting \(\Delta\Delta G\). The set of hyper-parameters differs as described in the Appendix (Table 7). For this task, the graph is built at the residue-level with additional atomic-level features to
Figure 1: Definition of the atomic environment (AE) graph.
Figure 2: Backbone of the E(3) equivariant graph neural network (EGNN) used for both AE embedding and scoring tasks. The EGNN layer is the EGCL taken from (Garcia Satorras et al., 2021).
bridge the gap between the two fundamental scales. Indeed, in this representation nodes are residues, represented in terms of their spatial positioning by their mean atomic position coordinates. Nodes are featurised with the vector output of the previously trained AE embedder. Additional node features comprise an 11-dimensional representation of the physico-chemical properties of the WT amino acid (Kawashima et al., 2007; Xu et al., 2020), concatenated with the same representation for the mutant amino acid at mutated nodes. When a particular node is not mutated, this concatenation is just twice the WT physico-chemical 11-dimensional representation.
At the edge level, an edge is drawn between two nodes if the distance between their mean atomic positions is within 9 Å. The graph is centered around the mutant residue, and residues are added, given the distance threshold, up to n (here 1) edges away from the mutant nodes. In the case of multiple mutants, we allow the different graphs, each centered around its mutant node, to be disconnected from each other. Features for the edges follow a similar strategy to the atomic graph:
* A single number stipulating whether or not the two residues are linked by a backbone bond.
* Two numbers providing a specific scale for the distance between two WT residues: the sums of the residue side-chain sizes, defined as (i) the maximum distance between the C\({}_{\alpha}\) and any atom of the residue; (ii) the maximum distance between two atoms from the same residue.
* The same two numbers computed for the mutants involved in the edge; when no mutant is involved, they are simply duplicated from the WT numbers.
* The C\({}_{\alpha}\)/C\({}_{\alpha}\) distance and the mean atomic position/mean atomic position distance.
Finally, to help the training while homogenizing the range and variance of the target variable (here the experimental \(\Delta\Delta G\)), we used a Fermi transform, as described in RASP (Blaabjerg et al., 2022). Our loss function is a simple Root Mean Squared Error, and the best model as well as the best epoch is chosen based on the Spearman rank correlation coefficient on the validation set.
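As a sketch of the target handling, the Fermi (logistic) squashing of \(\Delta\Delta G\) and the RMSE loss could look as follows; the \(\beta\) and \(\alpha\) constants shown are placeholders, not the constants published in RASP or used in this work.

```python
# Sketch of the target transform and loss. The Fermi (logistic) squashing of
# ddG follows RASP in spirit; beta and alpha below are placeholders, not the
# published constants.
import jax.numpy as jnp

def fermi(ddg, beta=0.4, alpha=3.0):
    # Maps ddG onto (0, 1), homogenizing the range and variance of the target.
    return 1.0 / (1.0 + jnp.exp(beta * (ddg - alpha)))

def rmse_loss(pred, ddg):
    # The scorer is trained against the Fermi-transformed experimental ddG.
    return jnp.sqrt(jnp.mean((pred - fermi(ddg)) ** 2))
```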
Figure 3: Definition of the residue sub-graph built around mutated residues.
## 4 Results
### Atomic Environment Embedder
With our AE implementation, we reached a macro-averaged F1 score of 0.63 on the training set and of 0.61 (accuracy = 0.65) on the validation set, comparable to the 0.63 accuracy of RASP on a similar but different validation set (we shuffled the structures); full results are in Appendix Table 4. The confusion matrix on the validation set (Figure 9), also provided in the Appendix, shows a variable but strong ability of the model to match the ground truth.
### Mutant Stability Scorer
Evaluation metrics for the different splits are available in Figure 10, and a description of the Mega-scale data-set is given in Appendix A. Given the unique qualities of the Mega-scale data-set, we decided to evaluate the model in what we believe is a more stringent way than simply looking at the Mega-scale test split (metrics are provided for that split too). Indeed, the Mega-scale data-set only contains domains, not full proteins, and structures were resolved computationally using AlphaFold (Jumper et al., 2021). The Mega-scale data-set also only contains up to double mutations. Hence, we decided to evaluate our model on a more standard data-set with experimentally resolved entire protein structures: ThermoMutDB (Xavier et al., 2021) (a description of the ThermoMutDB data-set is also provided in Appendix A.3).
Over the pooled ThermoMutDB data-set our scorer achieved RMSE = 2.288, Spearman r = 0.311 and Pearson r = 0.251 (Table 1). Interestingly, the model seems to generalise well to structures with more than two mutations (Figure 4), on which it has not been directly trained. The Spearman correlation, for example, spans a range between 0.159 and 0.381 for numbers of mutations going from 1 to 3.
At the level of individual structures (Figure 5), model performance can also vary quite drastically across WTs with at least 100 mutants (for an overview per PDB, see Figure 11).
Finally, we also compared our work to RASP for a subset of one-point mutations (Figure 12). On pooled single mutations our proposed approach performs significantly worse (Pearson r for RASP = 0.53; Pearson r for this work = 0.42). Our approach nonetheless outperforms RASP for some PDBs, and suffers from the same drawbacks as RASP on some proteins; structures for which RASP poorly predicts mutational effects are also, with a few exceptions, poorly predicted by our method. Overall, our performance is still significant, even more so considering that the RASP regressor is an ensemble model.
Figure 4: Evaluation of our scorer for differing numbers of mutations. Purple markers for Spearman rank correlation p-value\(<\)0.05 else orange. Marker size is proportional to the number of mutations.
## 5 Discussion
These preliminary results show that the combination of decoupling the atomic and residue scales with an EGNN architecture, which allows flexibility in the number of mutations that can be scored, is promising. In realising this exploratory work we faced two main challenges:
1. The scorer had a tendency to over-fit the Mega-scale data-set.
2. The current choice of threshold for the residue graph is constrained. We settled on 9 Å, whereas, depending on the residues, a typical interaction length could reach 16 Å or more (for two tryptophans, given the maximum distance between their own atoms). Such a larger threshold, however, would lead to a hyper-connected graph that would hinder training. Generally speaking, the graph-building hyper-parameters, for example the number of hops around the nodes of interest (here one: neighbours one hop from the mutant nodes), influence hyper-connectivity and our ability to avoid over-fitting.
We believe both of those problems, particularly the latter point, could be partially alleviated by finding a better encoding of meaningful distances of interaction, as well as including a more appropriate way to sum messages within the message passing loop (Ying et al., 2021).
In terms of evaluating scoring performance, when exploring the ThermoMutDB data-set as a potential out-of-distribution test set, we realised that Mega-scale has a significant advantage compared to all other available data-sets: it is experimentally consistent, both in the \(\Delta\Delta G\) measurements and in the use of AlphaFold for structure prediction. This is not the case for ThermoMutDB, which aggregates results obtained with a variety of methods for both stability measurement and structure determination, making it a challenge to understand why and how the model fails to give accurate predictions. Training on such a data-set, which will, for example, not include certain types of interactions (such as inter-domain interactions) and does not contain the inherent real noise of protein structure prediction, is advantageous for its consistency but an inconvenience for its representational inaccuracies when compared to more "realistic" data-sets. In terms of computational performance, as we are using GNNs, we recognize that we lose an important aspect of the RASP model, which is "Rapid" by name. Yet, as the most time-consuming part is the construction of the residue sub-graph (roughly 5 seconds for sub-graphs of fewer than 96 nodes with 8 CPUs, an A100 GPU and vectorization/jit features within JAX), saving it once and slightly modifying it later to include the specific mutations makes the model very efficient at assessing scores for multiple combinations of mutants within a pre-defined set of positions.
Finally, since we decoupled the atomic and the residue scales, it is now possible to swap elements from other successful models: for example ThermoNet. This exposes a new bottleneck, or rather a new further challenge, as it implies the creation of a new data-set including structures for each mutant present in the Mega-scale data-set. That would also have been the case if one wanted to include anti-symmetry properties within the model.
| | **Training Set** | **Validation Set** | **Test Set** | **ThermoMutDB** |
| --- | --- | --- | --- | --- |
| **Spearman r** | 0.754 | 0.518 | 0.442 | 0.311 |
| **Pearson r** | 0.758 | 0.562 | 0.412 | 0.251 |
| **RMSE** | 0.794 | 0.740 | 0.935 | 2.288 |

Table 1: Evaluation metrics for the \(\Delta\Delta G\) scorer.
| **Number of mutations** | **1** | **2** | **3** | **4** | **5** | **6** | **7** | **9** |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| **Spearman r** | 0.349 | 0.159 | 0.381 | 0.012 | 0.271 | 0.714 | -0.500 | 1.000 |
| **Pearson r** | 0.342 | 0.077 | 0.346 | 0.079 | 0.202 | 0.613 | -0.242 | 1.000 |
| **RMSE** | 2.109 | 2.571 | 2.378 | 1.972 | 2.448 | 1.965 | 1.588 | 0.810 |

Table 2: Metrics for our scorer across different numbers of mutations.
## 6 Conclusion
In this work, we explored the possibility of using graph neural network models for scoring multiple substitution effects on protein stability. Our approach, based on the decoupling of atomic and residue scales by successively training two different scale-specific E-GNN models on massive experimental data-sets, shows promising results. Indeed, the model demonstrates an ability to predict effects of a variable number of mutations, even beyond what it has been trained on. Yet some key parameters of this modelling still need to be better understood; for example, a biologically reasonable edge distance threshold and an overall more appropriate way to handle connectivity in the created residue sub-graph.
Figure 5: Evaluation of our scorer on individual structures, PDBs (Burley et al., 2017), chosen with at least 100 occurrences in the ThermoMutDB test data-set. All four structures have a significant prediction correlation (p-value\(<\)0.05) and the marker size is proportional to the number of mutations in the experiment. Further results breakdown in Table 3.
| **PDB ID** | **1BNI** | **1STN** | **1VQB** | **1RX4** |
| --- | --- | --- | --- | --- |
| **Spearman r** | 0.503 | 0.462 | 0.519 | 0.479 |
| **Pearson r** | 0.456 | 0.456 | 0.523 | 0.453 |
| **RMSE** | 1.651 | 1.632 | 2.526 | 1.193 |

Table 3: Scorer performance metrics on proteins with over 100 data points, as shown in Figure 5.
2306.06281 | Energy-Dissipative Evolutionary Deep Operator Neural Networks | Energy-Dissipative Evolutionary Deep Operator Neural Network is an operator
learning neural network. It is designed to seek numerical solutions for a class
of partial differential equations instead of a single partial differential
equation, such as partial differential equations with different parameters or
different initial conditions. The network consists of two sub-networks, the
Branch net and the Trunk net. For an objective operator G, the Branch net
encodes different input functions u at the same number of sensors, and the
Trunk net evaluates the output function at any location. By minimizing the
error between the evaluated output q and the expected output G(u)(y), DeepONet
generates a good approximation of the operator G. In order to preserve
essential physical properties of PDEs, such as the Energy Dissipation Law, we
adopt a scalar auxiliary variable approach to generate the minimization
problem. It introduces a modified energy and enables unconditional energy
dissipation law at the discrete level. By taking the parameter as a function of
time t, this network can predict the accurate solution at any further time by
feeding data only at the initial state. The data needed can be generated by the
initial conditions, which are readily available. In order to validate the
accuracy and efficiency of our neural networks, we provide numerical
simulations of several partial differential equations, including heat
equations, parametric heat equations and Allen-Cahn equations. | Jiahao Zhang, Shiheng Zhang, Jie Shen, Guang Lin | 2023-06-09T22:11:16Z | http://arxiv.org/abs/2306.06281v1 | # Energy-Dissipative Evolutionary Deep Operator Neural Networks
###### Abstract
Energy-Dissipative Evolutionary Deep Operator Neural Network is an operator learning neural network. It is designed to seek numerical solutions for a class of partial differential equations instead of a single partial differential equation, such as partial differential equations with different parameters or different initial conditions. The network consists of two sub-networks, the Branch net and the Trunk net. For an objective operator \(\mathcal{G}\), the Branch net encodes different input functions \(u\) at the same number of sensors \(y_{i},i=1,2,\cdots,m\), and the Trunk net evaluates the output function at any location. By minimizing the error between the evaluated output \(q\) and the expected output \(\mathcal{G}(u)(y)\), DeepONet generates a good approximation of the operator \(\mathcal{G}\). In order to preserve essential physical properties of PDEs, such as the Energy Dissipation Law, we adopt a scalar auxiliary variable approach to generate the minimization problem. It introduces a modified energy and enables an unconditional energy dissipation law at the discrete level. By taking the parameters as functions of the time variable \(t\), this network can predict an accurate solution at any later time by feeding data only at the initial state. The data needed can be generated from the initial conditions, which are readily available. In order to validate the accuracy and efficiency of our neural networks, we provide numerical simulations of several partial differential equations, including heat equations, parametric heat equations, and Allen-Cahn equations.
_Keywords:_ Operator Learning; Evolutionary Neural Networks; Energy Dissipative; Parametric equation; Scalar auxiliary variable; Deep learning
## 1 Introduction
Operator learning is a popular and challenging problem with potential applications across various disciplines. The opportunity to learn an operator over a domain in Euclidean spaces[1] or Banach spaces[2] opens a new class of problems in neural network design with broad applicability. When applied to solving partial differential equations (PDEs), operator learning has the potential to predict accurate solutions for the PDE by acquiring extensive prior
knowledge [3; 4; 5; 6; 7; 8; 9; 10; 11]. In a recent paper[12], Lu, Jin, and Karniadakis proposed an operator learning method based on deep operator networks, named DeepONets. It is based on the universal approximation theorem [13; 14; 15]. The goal of this neural network is to learn an operator instead of a single function, which is usually the solution of a PDE. For any operator \(\mathcal{G}\) on a domain \(\Omega\), we can define \(\mathcal{G}\) as a mapping from \(\Omega^{*}\rightarrow\Omega^{*}\) with \(\mathcal{G}(u)(y)\in R\) for any \(y\in\Omega\). \(\mathcal{G}(u)(y)\) is the expected output of the neural network, usually a real number. The objective of the training is to obtain an approximation of \(\mathcal{G}\), for which we need to represent operators and functions in a discrete form. In practice, it is very common to represent a continuous function or operator by its values evaluated at finitely many, sufficiently dense locations \(\{x_{1},x_{2},\cdots,x_{m}\}\), called "sensors" in DeepONet. The network takes \([u(x_{1}),u(x_{2}),\cdots,u(x_{m})]\) and \(y\) as the input. The loss function is the difference between the output \(q\) and the expected output \(\mathcal{G}(u)(y)\). Generally, there are two kinds of DeepONet, the Stacked DeepONet and the Unstacked DeepONet. The Stacked DeepONet consists of \(p\) branch networks and one trunk network. The Unstacked DeepONet has the same single trunk network but merges all \(p\) branch networks into one. An Unstacked DeepONet thus combines two sub-networks, the Branch net and the Trunk net. The Branch net encodes the input function \(u\) at some sensors, \(\{x_{i}\in\Omega\,|\,i=1,\cdots,m\}\). The output of the Branch net consists of \(p\) neurons, where each neuron can be seen as a scalar, \(b_{j}=b_{j}(u(x_{1}),u(x_{2}),\cdots,u(x_{m}))\), \(j=1,2,\cdots,p\). The Trunk net encodes some evaluation points \(\{y_{k}\in\Omega|k=1,\cdots,n\}\); its output also consists of \(p\) neurons, each a scalar \(g_{j}=g_{j}(y_{1},y_{2},\cdots,y_{n})\), \(j=1,2,\cdots,p\). The evaluation points \(y_{k}\) can be arbitrary for the purpose of forming the loss function. The number of neurons in the last layer of the Trunk net and the Branch net is the same. Hence, the output of the DeepONet can be written as an inner product of \((b_{1},b_{2},\cdots,b_{p})\) and \((g_{1},g_{2},\cdots,g_{p})\). In other words, the relationship between the expected output and the evaluated output is \(\mathcal{G}(u)(y)\approx\sum_{j=1}^{p}b_{j}g_{j}\). The DeepONet is an application of the Universal Approximation Theorem for Operators, proposed by Chen & Chen [16]:
**Theorem 1.1** (Universal Approximation Theorem for Operator).: _Suppose that \(\Omega_{1}\) is a compact set in \(X\), \(X\) is a Banach space, \(V\) is a compact set in \(C(\Omega_{1})\), \(\Omega_{2}\) is a compact set in \(\boldsymbol{R}^{d}\), \(\sigma\) is a continuous non-polynomial function, and \(\mathcal{G}\) is a nonlinear continuous operator which maps \(V\) into \(C(\Omega_{2})\). Then for any \(\epsilon>0\), there are positive integers \(M,N,m\), constants \(c_{i}^{k},\zeta_{k},\xi_{ij}^{k},\theta_{i}^{k}\in\boldsymbol{R}\), points \(\omega_{k}\in\boldsymbol{R}^{n},x_{j}\in\Omega_{1},i=1,\cdots,M\), \(k=1,\cdots,N,j=1,\cdots,m\), such that_
\[|\,\mathcal{G}(u)(y)-\sum_{k=1}^{N}\sum_{i=1}^{M}c_{i}^{k}\sigma\left(\sum_{j =1}^{m}\xi_{ij}^{k}u\left(x_{j}\right)+\theta_{i}^{k}\right)\cdot\sigma\left( \omega_{k}\cdot y+\zeta_{k}\right)|<\epsilon\]
_holds for all \(u\in V\) and \(y\in\Omega_{2}\)._
For any time-dependent PDE, the training data are of the form \((u,y,\mathcal{G}(u)(y))\), where \(u\) in its discrete form can be represented as \([u(x_{1}),u(x_{2}),\cdots,u(x_{m})]\) in the neural network. In the original paper, the classic FNN[17] was used as the baseline model. For dynamic systems, various network architectures are used, including residual networks[18], convolutional NNs (CNNs)[19; 20], recurrent NNs (RNNs)[21], neural jump stochastic differential equations[22] and neural ordinary differential equations[23]. The training performance is very promising: the method predicts accurate solutions of many nonlinear ODEs and PDEs, including a simple dynamic system, a gravity pendulum system, and a diffusion-reaction system. However, the training data need to be generated at each time step, so training the network is very expensive. For many initial value problems, there is no information about \(u(x,t)\) except at \(t=0\). It is therefore natural to raise a question: _Can we learn an operator for a class of time-dependent PDEs with only initial conditions?_
Inspired by the Evolutionary Deep Neural Network (EDNN)[24], it is more convenient to learn an operator at a fixed time instead of an operator with both spatial variables and a time variable. Without loss of generality, we can take the time variable \(t\) to be \(0\) in initial value problems. Once the operator at the initial time is obtained, many traditional numerical methods can be used to update the solution. More specifically, assuming that the initial condition operator has been trained well, we can consider the parameters of the Branch net and the Trunk net as functions of the time variable, as shown in Figure 1. Concretely, for a given initial value problem,
\[\begin{cases}\dfrac{\partial u}{\partial t}=s(u)\\ u(x,0)=f(x),\quad x\in\Omega\end{cases} \tag{1}\]
the objective is to approximate the operator \(\mathcal{G}:u\mapsto\mathcal{G}(u)\). The input is \(([u(x_{1}),u(x_{2}),\cdots,u(x_{m})],y,\mathcal{G}(u)(y))\), where \([x_{1},x_{2},\cdots,x_{m}]\) are the sensors and \(\mathcal{G}(u)(y)=f(y)\). The training process at the initial step is the same as the DeepONet,
so we can use the same architecture to train the initial condition operator. The output of the Branch net can be written as \(\mathbf{b}=\mathbf{b}(u(x_{1},0),u(x_{2},0),\cdots,u(x_{m},0);W_{2})\), where \(W_{2}\) are the parameters in the Branch net. The output of the Trunk net can be written as \(\mathbf{g}=\mathbf{g}(y;W_{1})\), where \(W_{1}\) are the parameters in the Trunk net (this convention matches the equations below, where \(g_{k}=g_{k}(W_{1})\) and \(\mathbf{b}_{k}=\mathbf{b}_{k}(W_{2})\)). Once trained well, we will regard the parameters as functions of \(t\), with \(W_{1}\), \(W_{2}\) as the initial conditions of \(W_{1}(t)\) and \(W_{2}(t)\). By the architecture of the Unstacked DeepONet, we can write the solution at the initial time \(t_{0}=0\) as
\[u(x,t_{0})\approx\sum_{j=1}^{p}b_{j}g_{j}=\mathbf{b}^{T}\mathbf{g}\text{ for any given initial condition }f(x) \tag{2}\]
We do not need any more data to obtain the approximation of \(u(x,t_{1})\). \(u(x,t_{1})\) should be consistent with \(W_{1}(t_{1})\) and \(W_{2}(t_{1})\). With the idea of the numerical solver for PDEs, it is easy to obtain \(W_{1}(t_{1})\) and \(W_{2}(t_{1})\) if \(\frac{\partial W_{1}}{\partial t}\) and \(\frac{\partial W_{2}}{\partial t}\) are known. The time derivative of the solution \(u\) can be written by a chain rule:
\[\frac{\partial u}{\partial t}=\frac{\partial u}{\partial W}\frac{\partial W}{ \partial t} \tag{3}\]
where \(W\) consists of \(W_{1}\) and \(W_{2}\). \(\frac{\partial W}{\partial t}\) can be obtained by solving a least-squares problem. Once we get \(\frac{\partial W}{\partial t}\), we can use any traditional time discretization scheme to obtain \(W^{n+1}\) from \(W^{n}\).
Fig. 1: Energy-Dissipative Evolutionary Deep Operator Neural Network. The yellow block represents the input at the sensors and the blue blocks represent the sub-networks. The green blocks represent the outputs of the sub-networks and also the last layer of the EDE-DeepONet. The difference between the stacked and unstacked EDE-DeepONet is the number of Branch nets. In the minimization problem on the right, the energy term \(r^{2}\) can be shown to be dissipative, i.e. \((r^{n+1})^{2}\leq(r^{n})^{2}\), where \(\mathcal{J}(\gamma_{1},\gamma_{2})=\frac{1}{2}\left\|\sum_{k=1}^{p}\frac{\partial g_{k}(W_{1}^{n})}{\partial W_{1}^{n}}\gamma_{1}\mathbf{b}_{k}(W_{2}^{n})+\sum_{k=1}^{p}g_{k}(W_{1}^{n})\frac{\partial\mathbf{b}_{k}(W_{2}^{n})}{\partial W_{2}^{n}}\gamma_{2}-\frac{r^{n+1}}{\sqrt{E(\mathbf{u}^{n})}}\mathcal{N}_{\mathbf{x}}(\mathbf{u}^{n})\right\|_{2}^{2}\).
The choice of the traditional time discretization scheme depends on the specific problem. Euler or Runge-Kutta methods are commonly used in evolutionary networks. We are going to introduce a method with unconditional energy dissipation, the Energy-Dissipative Evolutionary Deep Operator Neural Network (EDE-DeepONet). Many kinds of PDEs are derived from basic physical laws, such as Newton's Law, conservation laws and the Energy Dissipation Law. In many areas of science and engineering, particularly in the field of materials science, gradient flows are commonly employed in mathematical models[25; 26; 27; 28; 29; 30; 31; 32]. When approximating the solution of such a PDE, it is desirable to satisfy these laws. We consider a gradient flow problem,
\[\frac{\partial u}{\partial t}=-\frac{\delta E}{\delta u}, \tag{4}\]
where \(E\) is a certain free energy functional. Since the general explicit Euler method does not possess an unconditional energy dissipation law, we apply a scalar auxiliary variable (SAV) method[33] to generate the required least-squares problem. It introduces a new modified energy, and an unconditional modified energy dissipation law is satisfied at each iterative step. The SAV method has been applied to solve many PDEs while preserving thermodynamic consistency. It is robust, easy to implement and accurate in predicting the solution. Introducing this method into neural networks helps us explore how to combine neural network models and physical laws.
The objectives of this article are:
* Designing an operator-learning neural network that requires no training data beyond the given initial conditions.
* Predicting solutions of parametric PDEs after a long time period.
* Preserving the energy-dissipation property of a dynamic system.
Our main contributions are:
* Constructing an evolutionary operator learning neural network to solve PDEs.
* Solving a class of PDEs with different parameters in a single neural network.
* Introducing the modified energy in the neural network and applying the SAV algorithm to keep the unconditional modified energy dissipation law.
* Introducing an adaptive time stepping strategy and a restart strategy in order to speed up the training process.
The organization of this paper is as follows: In Section 2, we introduce the Evolutionary Deep Operator Neural Network for a given PDE problem. In Section 3, we consider the physical law behind the gradient flow problem, apply the SAV method to obtain the energy dissipation law, and propose a new neural network architecture, EDE-DeepONet. In Section 4, we present two adaptive time stepping strategies, the second of which is called the restart strategy. In Section 5, we give an overview of the architecture of the EDE-DeepONet. In Section 6, we implement our neural network to predict solutions of heat equations, parametric heat equations, and Allen-Cahn equations and report the numerical results.
## 2 Evolutionary Deep Operator Neural Network
Consider a general gradient flow problem,
\[\begin{split}&\frac{\partial\mathbf{u}}{\partial t}+\mathcal{N}_{x}( \mathbf{u})=0\\ &\mathbf{u}(\mathbf{x},0)=\mathbf{f}(\mathbf{x})\end{split} \tag{5}\]
where \(\mathbf{u}\in\mathbf{R}^{l}\), \(\mathcal{N}_{x}(\mathbf{u})\) can be written as a variational derivative of a free energy functional \(E[u(\mathbf{x})]\) bounded from below, \(\mathcal{N}_{x}(\mathbf{u})=\frac{\delta E}{\delta u}.\) The first step is to approximate the initial condition operator with DeepONet.
### Operator learning
For an operator \(\mathcal{G}\), \(\mathcal{G}:\mathbf{u}(\mathbf{x})\mapsto\mathbf{f}(\mathbf{x})\), the data fed into the DeepONet are of the form \((\mathbf{u},y,\mathcal{G}(\mathbf{u})(y))\), obtained from the given initial conditions. The branch network takes \([\mathbf{u}(\mathbf{x}_{1}),\mathbf{u}(\mathbf{x}_{2}),\cdots,\mathbf{u}(\mathbf{x}_{m})]^{T}\) as the input, which is the numerical representation of \(\mathbf{u}\), and \([\mathbf{b}_{1},\mathbf{b}_{2},\cdots,\mathbf{b}_{p}]^{T}\in\mathbf{R}^{p\times l}\), where \(\mathbf{b}_{k}\in\mathbf{R}^{l}\) for \(k=1,2,\cdots,p\), as outputs. The trunk network takes \(\mathbf{y}\) as the input and \([g_{1},g_{2},\cdots,g_{p}]\in\mathbf{R}^{p}\) as outputs. The Unstacked DeepONet uses an FNN as the baseline model and concatenates the function values at the sensor locations and the evaluation point together, i.e. \([\mathbf{u}(\mathbf{x}_{1}),\mathbf{u}(\mathbf{x}_{2}),\cdots,\mathbf{u}(\mathbf{x}_{m}),\mathbf{y}]^{T}\). As in the equation of the Universal Approximation Theorem for Operators, we can take the product of \(\mathbf{b}_{k}\) and \(g_{k}\), and then we obtain:
\[\mathcal{G}(\mathbf{u})(\mathbf{x})\approx\sum_{k=1}^{p}g_{k}\mathbf{b}_{k} \tag{6}\]
The activation function is also applied to the last layer of the trunk net. There is no bias in this network. However, according to Theorem 1.1, the generalization error can be reduced by adding a bias. We also give the form with bias \(\mathbf{b}_{0}\):
\[\mathcal{G}(\mathbf{u})(\mathbf{x})\approx\sum_{k=1}^{p}g_{k}\mathbf{b}_{k}+\mathbf{b}_{0} \tag{7}\]
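A minimal sketch of the unstacked DeepONet forward pass of Equations (6)-(7) is given below, assuming simple fully-connected sub-networks; the parameter layout and layer shapes are illustrative, not the exact implementation.

```python
# Minimal unstacked DeepONet forward pass (Equations (6)-(7)): the branch
# encodes the sensor values of u, the trunk encodes the query point y, and the
# output is their inner product plus an optional bias b0. Shapes illustrative.
import jax.numpy as jnp

def fnn(params, x):
    # Plain feed-forward net; `params` is a list of (W, b) pairs (assumption).
    for W, b in params[:-1]:
        x = jnp.tanh(x @ W + b)
    W, b = params[-1]
    return x @ W + b

def deeponet(params, u_sensors, y):
    # u_sensors: (m,) values [u(x_1), ..., u(x_m)]; y: query location.
    b = fnn(params["branch"], u_sensors)   # (p,) branch outputs b_k
    g = fnn(params["trunk"], y)            # (p,) trunk outputs g_k
    return jnp.dot(b, g) + params["b0"]    # G(u)(y) ~ sum_k b_k g_k + b0
```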
As mentioned before, we assumed the initial condition operator has been trained very well. We are going to find the update rule of the parameters to evolve the neural network.
### The evolution of parameters in the neural network
Denoting the parameters of the trunk network by \(W_{1}\) and those of the branch network by \(W_{2}\) (so that \(g_{k}=g_{k}(W_{1}(t))\) and \(\mathbf{b}_{k}=\mathbf{b}_{k}(W_{2}(t))\)), \(W_{1}\) and \(W_{2}\) can be regarded as functions of \(t\) since they change at every time step. According to the chain rule, we have
\[\frac{\partial\mathbf{u}}{\partial t}=\frac{\partial\mathbf{u}}{\partial W_{1}}\frac {\partial W_{1}}{\partial t}+\frac{\partial\mathbf{u}}{\partial W_{2}}\frac{ \partial W_{2}}{\partial t} \tag{8}\]
Since \(\mathbf{u}=\sum_{k=1}^{p}g_{k}\mathbf{b}_{k}=\sum_{k=1}^{p}g_{k}(W_{1}(t))\mathbf{b}_{k}( W_{2}(t))\), then
\[\frac{\partial\mathbf{u}}{\partial t}=\sum_{k=1}^{p}\frac{\partial g_{k}(W_{1}(t) )}{\partial W_{1}}\frac{\partial W_{1}}{\partial t}\mathbf{b}_{k}(W_{2}(t))+\sum_ {k=1}^{p}g_{k}(W_{1}(t))\frac{\partial\mathbf{b}_{k}(W_{2}(t))}{\partial W_{2}} \frac{\partial W_{2}}{\partial t} \tag{9}\]
Our objective is to obtain \(\frac{\partial W_{1}}{\partial t}\) and \(\frac{\partial W_{2}}{\partial t}\), the update rule for parameters. It is equivalent to solve a minimization problem,
\[\left[\frac{\partial W_{1}}{\partial t};\frac{\partial W_{2}}{\partial t} \right]=\text{argmin}\mathcal{J}(\gamma_{1},\gamma_{2}) \tag{10}\]
where
\[\mathcal{J}(\gamma_{1},\gamma_{2})=\frac{1}{2}\left\|\sum_{k=1}^{p}\frac{ \partial g_{k}(W_{1}(t))}{\partial W_{1}}\gamma_{1}\mathbf{b}_{k}(W_{2}(t))+\sum_ {k=1}^{p}g_{k}(W_{1}(t))\frac{\partial\mathbf{b}_{k}(W_{2}(t))}{\partial W_{2}} \gamma_{2}-\mathcal{N}_{\mathbf{x}}(\mathbf{u})\right\|_{2}^{2} \tag{11}\]
In this article, the inner product \((a,b)\) is defined in the integral sense, \((a,b)=\int_{\Omega}a(\mathbf{x})b(\mathbf{x})\,\mathrm{d}\mathbf{x}\) and the \(L_{2}\) norm is defined as \(\left\|a\right\|_{2}^{2}=\int_{\Omega}|a(\mathbf{x})|^{2}\,\mathrm{d}\mathbf{x}\).
The minimization problem can be transformed into a linear system via the first-order optimality condition:
\[\frac{\partial\mathcal{J}}{\partial\gamma_{1}}=\int_{\Omega}\!\! \left(\sum_{k=1}^{p}\frac{\partial g_{k}(W_{1}(t))}{\partial W_{1}}\mathbf{b}_{k}( W_{2}(t))\right)^{T}\left(\gamma_{1}\sum_{k=1}^{p}\frac{\partial g_{k}(W_{1}(t))}{ \partial W_{1}}\mathbf{b}_{k}(W_{2}(t))+\sum_{k=1}^{p}g_{k}(W_{1}(t))\frac{ \partial\mathbf{b}_{k}(W_{2}(t))}{\partial W_{2}}\gamma_{2}-\mathcal{N}_{\mathbf{x}}( \mathbf{u})\right)\!\mathrm{d}\mathbf{x}=0 \tag{12}\] \[\frac{\partial\mathcal{J}}{\partial\gamma_{2}}=\int_{\Omega}\!\! \left(\sum_{k=1}^{p}g_{k}(W_{1}(t))\frac{\partial\mathbf{b}_{k}(W_{2}(t))}{ \partial W_{2}}\right)^{T}\!\left(\gamma_{1}\sum_{k=1}^{p}\frac{\partial g_{k}(W _{1}(t))}{\partial W_{1}}\mathbf{b}_{k}(W_{2}(t))+\sum_{k=1}^{p}g_{k}(W_{1}(t)) \frac{\partial\mathbf{b}_{k}(W_{2}(t))}{\partial W_{2}}\gamma_{2}-\mathcal{N}_{\bm {x}}(\mathbf{u})\right)\!\mathrm{d}\mathbf{x}=0 \tag{13}\]
In this system, the gradient with respect to \(W_{1}(t)\) and \(W_{2}(t)\) can be computed by automatic differentiation at each time step. By denoting
\[(\mathbf{J_{1}})_{ij_{1}}=\sum_{k=1}^{p}\frac{\partial g_{k}(W_{1}(t))}{\partial W_{1}^{j_{1}}}\mathbf{b}_{k}^{i}(W_{2}(t)) \tag{14}\] \[(\mathbf{J_{2}})_{ij_{2}}=\sum_{k=1}^{p}g_{k}(W_{1}(t))\frac{\partial\mathbf{b}_{k}^{i}(W_{2}(t))}{\partial W_{2}^{j_{2}}} \tag{15}\] \[(\mathbf{N})_{i}=\mathcal{N}_{\mathbf{x}}(\mathbf{u})(\mathbf{x}_{i}) \tag{16}\]
where \(i=1,2,\cdots,l\), \(j_{1}=1,2,\cdots,N_{\text{para}}^{t}\), \(j_{2}=1,2,\cdots,N_{\text{para}}^{b}\). \(N_{\text{para}}^{t}\) is the number of parameters in the Trunk net and \(N_{\text{para}}^{b}\) is the number of parameters in the Branch net. \(\mathbf{N}\) is generated by the DeepONet, so it can be evaluated at any spatial point. The above integrals can be approximated by numerical methods:
\[\frac{1}{|\Omega|}\int_{\Omega}\left(\sum_{k=1}^{p}\frac{\partial g _{k}(W_{1}(t))}{\partial W_{1}}\mathbf{b}_{k}(W_{2}(t))\right)^{T}\left(\sum_{k=1} ^{p}\frac{\partial g_{k}(W_{1}(t))}{\partial W_{1}}\mathbf{b}_{k}(W_{2}(t)) \right)\mathrm{d}\mathbf{x} =\lim_{l\to\infty}\frac{1}{l}\mathbf{J_{1}^{T}}\mathbf{J_{1}} \tag{17}\] \[\frac{1}{|\Omega|}\int_{\Omega}\left(\sum_{k=1}^{p}g_{k}(W_{1}(t ))\frac{\partial\mathbf{b}_{k}(W_{2}(t))}{\partial W_{2}}\right)^{T}\left(\sum_{k= 1}^{p}g_{k}(W_{1}(t))\frac{\partial\mathbf{b}_{k}(W_{2}(t))}{\partial W_{2}} \right)\mathrm{d}\mathbf{x} =\lim_{l\to\infty}\frac{1}{l}\mathbf{J_{2}^{T}}\mathbf{J_{2}}\] (18) \[\frac{1}{|\Omega|}\int_{\Omega}\left(\sum_{k=1}^{p}\frac{\partial g _{k}(W_{1}(t))}{\partial W_{1}}\mathbf{b}_{k}(W_{2}(t))\right)^{T}\left(\mathcal{ N}_{\mathbf{x}}(\mathbf{u})\right)\mathrm{d}\mathbf{x} =\lim_{l\to\infty}\frac{1}{l}\mathbf{J_{1}^{T}}\mathbf{N} \tag{19}\]
By denoting \(\gamma_{i}^{opt}\) as optimal values of \(\gamma_{i}\), \(i=1,2\), the objective function can be reduced to
\[\mathbf{J_{1}^{T}}\left(\gamma_{1}^{opt}\mathbf{J_{1}}+\gamma_{2 }^{opt}\mathbf{J_{2}}-\mathbf{N}\right) =0 \tag{20}\] \[\mathbf{J_{2}^{T}}\left(\gamma_{1}^{opt}\mathbf{J_{1}}+\gamma_{2 }^{opt}\mathbf{J_{2}}-\mathbf{N}\right) =0 \tag{21}\]
The feasible solutions of the above equations are the approximated time derivatives of \(W_{1}\) and \(W_{2}\).
\[\frac{dW_{1}}{dt} =\gamma_{1}^{opt} \tag{22}\] \[\frac{dW_{2}}{dt} =\gamma_{2}^{opt} \tag{23}\]
where the initial conditions \(W_{1}^{0}\) and \(W_{2}^{0}\) can be determined by DeepONets for the initial condition operators. These two ODEs are the update rules of the neural network. The simplest way to solve them is the explicit Euler method.
\[\frac{W_{1}^{n+1}-W_{1}^{n}}{\Delta t} =\gamma_{1}^{opt} \tag{24}\] \[\frac{W_{2}^{n+1}-W_{2}^{n}}{\Delta t} =\gamma_{2}^{opt} \tag{25}\]
The neural network can then calculate the solution of the given PDEs at any time step \(t_{n}\) and spatial point \(\mathbf{x}_{i}\) from the weights \(W_{1}^{n}\), \(W_{2}^{n}\), the spatial points \(\mathbf{x}\) and the initial condition \(\mathbf{u}(\mathbf{x})\).
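One evolution step can thus be sketched as a linear least-squares solve followed by an explicit Euler update, as below; `u_fn` stands for the network forward pass (e.g. the DeepONet sketch above), and supplying \(\mathcal{N}_{\mathbf{x}}(\mathbf{u}^{n})\) at the evaluation points is left to the caller. This is a sketch, not the authors' implementation.

```python
# One evolution step as a least-squares solve (Equations (20)-(21)) followed by
# an explicit Euler update (Equations (24)-(25)). `u_fn(params, x)` is the
# network forward pass; `N_vals` holds N_x(u^n) at the evaluation points and is
# assumed supplied by the caller.
import jax
import jax.numpy as jnp
from jax.flatten_util import ravel_pytree

def evolve_step(params, u_fn, xs, N_vals, dt):
    flat, unravel = ravel_pytree(params)          # flatten all weights
    def u_at_points(w):
        return jax.vmap(lambda x: u_fn(unravel(w), x))(xs)   # (l,)
    J = jax.jacobian(u_at_points)(flat)           # (l, n_params) Jacobian
    # Sign follows Equation (5): du/dt = -N_x(u).
    gamma, *_ = jnp.linalg.lstsq(J, -N_vals)      # optimal gamma
    return unravel(flat + dt * gamma)             # explicit Euler step
```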
## 3 Energy Dissipative Evolutionary Deep Operator Neural Network
Let's reconsider the given problem.
\[\frac{\partial\mathbf{u}}{\partial t}+\mathcal{N}_{x}(\mathbf{u})=0 \tag{26}\] \[\mathbf{u}(\mathbf{x},0)=\mathbf{f}(\mathbf{x})\]
where \(\mathbf{u}\in\mathbf{R}^{l}\), \(\mathcal{N}_{\mathbf{x}}(\mathbf{u})\) can be written as a variational derivative of a free energy functional \(E[\mathbf{u}(\mathbf{x})]\) bounded from below, \(\mathcal{N}_{\mathbf{x}}(\mathbf{u})=\frac{\delta E}{\delta\mathbf{u}}\). Taking the inner product with \(\mathcal{N}_{\mathbf{x}}(\mathbf{u})\) of the first equation, we obtain the energy dissipation property
\[\frac{dE[\mathbf{u}(\mathbf{x})]}{dt}=\left(\frac{\delta E}{\delta\mathbf{u}},\frac{ \partial\mathbf{u}}{\partial t}\right)=\left(\mathcal{N}_{\mathbf{x}}(\mathbf{u}),\frac{ \partial\mathbf{u}}{\partial t}\right)=-\left(\mathcal{N}_{\mathbf{x}}(\mathbf{u}), \mathcal{N}_{\mathbf{x}}(\mathbf{u})\right)\leq 0 \tag{27}\]
However, it is usually hard for a numerical algorithm to be efficient as well as energy dissipative. Recently, the SAV approach [33] was introduced to construct numerical schemes which are energy dissipative (with a modified energy), accurate, robust and easy to implement. More precisely, assuming \(E[\mathbf{u}(\mathbf{x})]>0\), it introduces a scalar auxiliary variable \(r(t)=\sqrt{E[\mathbf{u}(\mathbf{x},t)]}\) and expands the gradient flow problem as
\[\begin{split}&\frac{\partial\mathbf{u}}{\partial t}=-\frac{r}{\sqrt{E( \mathbf{u})}}\mathcal{N}_{\mathbf{x}}\left(\mathbf{u}\right)\\ & r_{t}=\frac{1}{2\sqrt{E(\mathbf{u})}}\left(\mathcal{N}_{\mathbf{x}} \left(\mathbf{u}\right),\frac{\partial\mathbf{u}}{\partial t}\right)\end{split} \tag{28}\]
With \(r(0)=\sqrt{E[\mathbf{u}(\mathbf{x},0)]}\), the above system admits the solution \(r(t)\equiv\sqrt{E[\mathbf{u}(\mathbf{x},t)]}\), with \(\mathbf{u}\) being the solution of the original problem.
### First order scheme
By setting \(\mathbf{u}^{n}=\sum_{k=1}^{p}g_{k}\mathbf{b}_{k}\), a first order scheme can be constructed as
\[\begin{split}&\frac{\mathbf{u}^{n+1}-\mathbf{u}^{n}}{\Delta t}=-\frac{r ^{n+1}}{\sqrt{E(\mathbf{u}^{n})}}\mathcal{N}_{\mathbf{x}}(\mathbf{u}^{n})\\ &\frac{r^{n+1}-r^{n}}{\Delta t}=\frac{1}{2\sqrt{E(\mathbf{u}^{n})}} \int_{\Omega}\mathcal{N}_{\mathbf{x}}(\mathbf{u}^{n})\frac{\mathbf{u}^{n+1}-\mathbf{u}^{n}}{ \Delta t}dx.\end{split} \tag{29}\]
This is a coupled system of equations for \((r^{n+1},\mathbf{u}^{n+1})\). But it can be easily decoupled as follows. Plugging the first equation into the second one, we obtain:
\[\frac{r^{n+1}-r^{n}}{\Delta t}=-\frac{r^{n+1}}{2E(\mathbf{u}^{n})}\left\|\mathcal{ N}_{\mathbf{x}}(\mathbf{u}^{n})\right\|^{2}, \tag{30}\]
which implies
\[r^{n+1}=\left(1+\frac{\Delta t}{2E(\mathbf{u}^{n})}\left\|\mathcal{N}_{\mathbf{x}}(\bm {u}^{n})\right\|^{2}\right)^{-1}r^{n} \tag{31}\]
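On a uniform grid, this first-order SAV step of Equations (29)-(31) can be sketched as below; `E_of_u` and `N_of_u` stand for user-supplied discretizations of the free energy and of \(\mathcal{N}_{\mathbf{x}}\), which are assumptions of the sketch.

```python
# First-order SAV step on a uniform grid (Equations (29)-(31)): r^{n+1} has a
# closed form and the field update uses the scaled nonlinear term.
import jax.numpy as jnp

def sav_step(u, r, dt, E_of_u, N_of_u, dx):
    E = E_of_u(u)                                  # E(u^n), assumed > 0
    N = N_of_u(u)                                  # N_x(u^n) on the grid
    norm2 = jnp.sum(N * N) * dx                    # ||N_x(u^n)||^2
    r_new = r / (1.0 + dt * norm2 / (2.0 * E))     # Equation (31)
    u_new = u - dt * (r_new / jnp.sqrt(E)) * N     # first line of Equation (29)
    return u_new, r_new
```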
**Theorem 3.1** (Discrete Energy Dissipation Law).: _With the modified energy define above, the scheme is unconditionally energy stable, i.e._
\[(r^{n+1})^{2}-(r^{n})^{2}\leq 0. \tag{32}\]
**Proof 3.1**.: _Using the identity \((r^{n+1})^{2}-(r^{n})^{2}=2r^{n+1}(r^{n+1}-r^{n})-(r^{n+1}-r^{n})^{2}\) and substituting the two equations of the scheme (29),_

\[\begin{split}(r^{n+1})^{2}-(r^{n})^{2}&=2r^{n+1}(r^{n+1}-r^{n})-(r^{n+1}-r^{n})^{2}\\ &=\frac{\Delta t\,r^{n+1}}{\sqrt{E(\mathbf{u}^{n})}}\int_{\Omega}\mathcal{N}_{\mathbf{x}}(\mathbf{u}^{n})\,\frac{\mathbf{u}^{n+1}-\mathbf{u}^{n}}{\Delta t}\,dx-(r^{n+1}-r^{n})^{2}\\ &=-\Delta t\left(\frac{r^{n+1}}{\sqrt{E(\mathbf{u}^{n})}}\right)^{2}\int_{\Omega}\mathcal{N}_{\mathbf{x}}(\mathbf{u}^{n})\,\mathcal{N}_{\mathbf{x}}(\mathbf{u}^{n})\,dx-(r^{n+1}-r^{n})^{2}\\ &\leq 0\end{split} \tag{33}\]
In order to maintain the modified energy dissipation law in the evolution of the neural network, we only need to replace \(\mathcal{N}_{\mathbf{x}}(\mathbf{u})\) by \(\frac{r^{n+1}}{\sqrt{E(\mathbf{u})}}\mathcal{N}_{\mathbf{x}}(\mathbf{u})\) in Section 2. The update rule of the neural network is
\[\left[\frac{\partial W_{1}}{\partial t};\,\frac{\partial W_{2}}{\partial t} \right]=\text{argmin}\mathcal{J}(\gamma_{1},\gamma_{2}) \tag{34}\]
where
\[\mathcal{J}(\gamma_{1},\gamma_{2})=\frac{1}{2}\left\|\sum_{k=1}^{p}\frac{\partial g _{k}(W_{1}^{n})}{\partial W_{1}^{n}}\gamma_{1}\mathbf{b}_{k}(W_{2}^{n})+\sum_{k=1}^ {p}g_{k}(W_{1}^{n})\frac{\partial\mathbf{b}_{k}(W_{2}^{n})}{\partial W_{2}^{n}} \gamma_{2}-\frac{r^{n+1}}{\sqrt{E(\mathbf{u}^{n})}}\mathcal{N}_{\mathbf{x}}(\mathbf{u}^{n} )\right\|_{2}^{2} \tag{35}\]
The corresponding linear system of the first order optimal condition is
\[\mathbf{J}_{1}^{\rm T}\left(\gamma_{1}^{opt}\mathbf{J}_{1}+\gamma_{2}^{ opt}\mathbf{J}_{2}-\frac{r^{n+1}}{\sqrt{E(\mathbf{u}^{n})}}\mathbf{N}\right)=0 \tag{36}\] \[\mathbf{J}_{2}^{\rm T}\left(\gamma_{1}^{opt}\mathbf{J}_{1}+\gamma_{2}^{ opt}\mathbf{J}_{2}-\frac{r^{n+1}}{\sqrt{E(\mathbf{u}^{n})}}\mathbf{N}\right)=0 \tag{37}\]
where
\[(\mathbf{J}_{1})_{ij_{1}}=\sum_{k=1}^{p}\frac{\partial g_{k}(W_{1}^{n})}{\partial W_{1}^{n,j_{1}}}\mathbf{b}_{k}^{i}(W_{2}^{n}) \tag{38}\] \[(\mathbf{J}_{2})_{ij_{2}}=\sum_{k=1}^{p}g_{k}(W_{1}^{n})\frac{\partial\mathbf{b}_{k}^{i}(W_{2}^{n})}{\partial W_{2}^{n,j_{2}}} \tag{39}\] \[(\mathbf{N})_{i}=\mathcal{N}_{\mathbf{x}}(\mathbf{u}^{n})(\mathbf{x}_{i}) \tag{40}\]
and \(i=1,2,\cdots,l\), \(j_{1}=1,2,\cdots,N_{\text{para}}^{t}\), \(j_{2}=1,2,\cdots,N_{\text{para}}^{b}\), where \(N_{\text{para}}^{t}\) and \(N_{\text{para}}^{b}\) are the numbers of parameters in the Trunk net and the Branch net, respectively. After obtaining \(\gamma_{1}^{opt}\) and \(\gamma_{2}^{opt}\), \(W^{n+1}\) can be obtained by the Forward Euler method as in equations (41) and (42).
\[W_{1}^{n+1} =W_{1}^{n}+\gamma_{1}^{opt}\Delta t \tag{41}\] \[W_{2}^{n+1} =W_{2}^{n}+\gamma_{2}^{opt}\Delta t \tag{42}\]
## 4 Adaptive time stepping strategy and Restart strategy
One of the advantages of an unconditionally stable scheme is that adaptive time steps can be used. Since the coefficient of \(\mathcal{N}_{\mathbf{x}}\), \(\frac{r^{n+1}}{\sqrt{E^{n}}}\), should stay close to 1, denoting \(\xi^{n+1}=\frac{r^{n+1}}{\sqrt{E^{n}}}\), a larger \(\Delta t\) is allowed when \(\xi\) is close to 1 and a smaller \(\Delta t\) is needed when \(\xi\) is far from 1. Thus, a simple adaptive time-stepping strategy can be described as follows:
```
1. Set the tolerance for \(\xi\) as \(\epsilon_{0}\) and \(\epsilon_{1}\), the initial time step \(\Delta t\), the maximum time step \(\Delta t_{max}\) and the minimum time step \(\Delta t_{min}\)
2. Compute \(u^{n+1}\).
3. Compute \(\xi^{n+1}=\frac{r^{n+1}}{\sqrt{E^{n}}}\).
4. If \(|1-\xi^{n+1}|>\epsilon_{0}\), then \(\Delta t=\max(\Delta t_{min},\Delta t/2)\); else if \(|1-\xi^{n+1}|<\epsilon_{1}\), then \(\Delta t=\min(\Delta t_{max},2\Delta t)\). Go to Step 2.
5. Update time step \(\Delta t\).
```
**Algorithm 1** Adaptive time stepping strategy
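A sketch of Algorithm 1 as a single update function is shown below; the default tolerances follow the values suggested with Algorithm 2, while the step bounds are illustrative.

```python
# Sketch of Algorithm 1: adjust dt according to how far xi = r^{n+1}/sqrt(E^n)
# is from 1. Default tolerances follow the values suggested in the text; the
# step bounds are illustrative.
def adapt_dt(xi, dt, eps0=1e-1, eps1=1e-3, dt_min=1e-7, dt_max=1e-1):
    if abs(1.0 - xi) > eps0:
        return max(dt_min, dt / 2.0)   # xi drifted from 1: halve the step
    if abs(1.0 - xi) < eps1:
        return min(dt_max, 2.0 * dt)   # xi close to 1: double the step
    return dt
```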
Another popular strategy to keep \(r\) approximating the original energy \(E\) is to reset the SAV \(r^{n+1}\) to be \(\sqrt{E^{n+1}}\) in some scenarios. The specific algorithm is as follows:
```
1. Set the tolerance for \(\epsilon_{0}\), \(\epsilon_{1}\) should be some small tolerance, usually \(10^{-1}\) and \(10^{-3}\). The choices for \(\Delta t_{max}\) and \(\Delta t_{min}\) are quite dependent on \(\Delta t\), usually \(\Delta t_{max}=10^{3}\times\Delta t\) and \(\Delta t_{min}=10^{-3}\times\Delta t\). In Algorithm 2, we usually take \(\epsilon_{2}\) as \(2\times 10^{-2}\).
2. Feed \([u(x_{1}),u(x_{2}),\cdots,u(x_{m})]\) into the branch network and \(y\in Y\) into the trunk network. Denote the output of the DeepONet as \(q\).
3. Update the parameters in the DeepONet by minimizing a cost function, where the cost function can be taken as the mean squared error as \(\frac{1}{|Y|}\sum_{y\in Y}\|\mathcal{G}(u)(y)-q\|^{2}\).
4. Once the DeepONet has been trained well, solve the system of equations of (36) and (37) to obtain \(\left[\frac{\partial W_{1}}{\partial t};\frac{\partial W_{2}}{\partial t}\right]\).
5. The value of \(\left[\frac{\partial W_{1}}{\partial t};\frac{\partial W_{2}}{\partial t}\right]\) can be obtained in the current step. Since the parameters \(W_{1}^{n}\) in the branch network and \(W_{2}^{n}\) in the trunk network are known, \(W_{1}^{n+1}\) and \(W_{2}^{n+1}\) for the next step can also be obtained by the forward Euler method or a Runge-Kutta method.
6. Repeat step 5 until the final time \(T\), where \(T=t_{0}+s\Delta t\), \(t_{0}\) is the initial time of the given PDE, \(\Delta t\) is the time step in step 5 and \(s\) is the number of repeated times of step 5.
7. Output the solution at time \(T\) in the DeepONet with parameters obtained in step 6.
```
**Algorithm 2** Restart strategy
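The reset itself can be sketched as follows. Note that the trigger criterion is our assumption: the text only states that the SAV is reset "in some scenarios" with tolerance \(\epsilon_{2}\), and since Figure 2 compares \(r^{2}\) with \(E\), we reset \(r\) to \(\sqrt{E}\) so that the modified energy \(r^{2}\) matches the original energy.

```python
import numpy as np

def restart_sav(r, E, eps2=2e-2):
    # Assumed trigger: restart once xi = r / sqrt(E) drifts from 1
    # by more than eps2, then re-initialize the SAV.
    if abs(1.0 - r / np.sqrt(E)) > eps2:
        r = np.sqrt(E)
    return r
```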
## 6 Numerical Experiments
In this section, we implement EDE-DeepONet to solve heat equations, parametric heat equations, and Allen-Cahn equations to show its performance and accuracy.
### Example 1: Simple heat equations
To show the accuracy of the EDE-DeepONet, we start with the simple heat equation with different initial conditions since we already have the exact solution. A 1D heat equation system can be described by
\[u_{t}=u_{xx} \tag{43}\] \[u(x,0)=f\] (44) \[u(0,t)=u(2,t)=0 \tag{45}\]
By the method of separation of variables, we can derive the solution to the heat equation. If we set \(f(x)=a\sin(\pi x)\), the solution is \(u(x,t)=a\sin(\pi x)e^{-\pi^{2}t}\), where \(a\in[1,2]\). The corresponding energy is \(E(u)=\int_{0}^{2}\frac{1}{2}|u_{x}|^{2}dx\approx\Delta x(\sum_{i=1}^{n}\frac{ 1}{2}|u_{x}(x_{i})|^{2})\). With different parameters \(a\), the above equation describes a family of PDEs. The input data samples can be generated as \((a,x,\mathcal{G}(a)(x))\), where \(\mathcal{G}(a)(x)=a\sin(\pi x)\) for specific \(a\) and \(x\). When generating the initial data samples, we choose 50 points from \([0,2)\) uniformly for \(x\) and 50 random values of \(a\) from \([1,2]\). The time step when updating the parameters in the neural network is \(2.5\times 10^{-4}\). The number of iteration steps is 400. We compare the solutions for 4 different values of \(a\) (1.0, 1.5, 1.8, 2.5) every 100 steps. Although \(a=2.5\) is out of the range of training
data, it still performs well in this model. With the exact solution available, we also compute the error for different \(a\), reported in Table 1. The error is defined by \(\frac{1}{N_{x}}\sum_{k=1}^{N_{x}}(u(x_{k})-\hat{u}(x_{k}))^{2}\), where \(N_{x}=51\), \(u\) is the solution obtained by EDE-DeepONet and \(\hat{u}\) is the exact solution. To illustrate the relationship between the modified energy and the original energy, we compare \(r^{2}\) and \(E\) at each step in Figure 2. Both energies are dissipative in EDE-DeepONet except when the restart strategy is applied. The restart strategy is used to keep \(r^{2}\) close to \(E\); the modified energy is reinitialized whenever the restart strategy is applied. The restart strategy was triggered at the 370th step because the modified energy and the original energy had drifted apart; after that, they follow the same trajectory again. It is clear that the modified energy approaches the original energy both before and after the restart. In Figure 3, we give the comparison between the exact solution and the solution obtained by EDE-DeepONet. From this simple heat equation, we show that EDE-DeepONet correctly predicts the solution of the PDE. Most importantly, EDE-DeepONet can predict the solution not only within the training range but also outside it. For instance, we take \(a=2.5\) while \(a\in[1,2]\) in the training process. EDE-DeepONet shows good accuracy compared to the exact solution, as seen in Figure 3 (a)-(d) and Table 1.
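The data-generation step above can be sketched in a few lines; a minimal sketch in which the grid sizes follow the text, while the finite-difference energy via `np.gradient` is our assumption:

```python
import numpy as np

# Training triples (a, x, G(a)(x)): 50 uniform grid points on [0, 2)
# and 50 random amplitudes a drawn from [1, 2].
xs = np.linspace(0.0, 2.0, 50, endpoint=False)
amps = np.random.uniform(1.0, 2.0, size=50)
samples = [(a, x, a * np.sin(np.pi * x)) for a in amps for x in xs]

def energy(u, dx):
    # Discrete free energy E(u) ~ dx * sum(0.5 * |u_x|^2).
    ux = np.gradient(u, dx)
    return dx * np.sum(0.5 * ux ** 2)
```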
### Example 2: Parametric heat equations

Next, we consider the heat equation with the diffusion coefficient \(c\) as a parameter:

\[u_{t}=cu_{xx} \tag{46}\] \[u(x,0)=\sin(\pi x)\] (47) \[u(0,t)=u(2,t)=0 \tag{48}\]
This PDE is more complex than the one in Example 1 since the parameter appears inside the equation. A traditional numerical scheme needs to be run multiple times to deal with different parameters because they correspond to different equations. However, we only need to train EDE-DeepONet once. The training range of \(c\) is chosen as \([1,2)\). We choose 50 points of \(x\) and \(c\) in the same way as in Example 1. First, we compare the modified energy with the original energy in Figure 4. The energy is not the same as in the first example since it depends on the parameter \(c\); we compute the average of the energy over different \(c\) to represent the energy of the system. This case is more complex than the first one, so it needs more restarts during training. Even though the modified energy oscillates when the restart strategy is used, it keeps decreasing after each restart. Second, we give the error between the solution obtained by EDE-DeepONet and the reference solution in Table 2, where the reference solution is obtained explicitly by the method of separation of variables and the error is defined in the same way as in Example 1. Third, we give the comparison between our solution and the reference solution in Figure 5. As in Example 1, we also give the predicted solution for \(c\notin[1,2]\). All of them show good accuracy. Hence, EDE-DeepONet can indeed solve parametric PDEs.
### Example 3: Allen-Cahn equations
The energy in Examples 1 and 2 is quadratic and the right-hand side of the PDE is linear with respect to \(u\). We now show results for a PDE with a more complicated energy. The Allen-Cahn equation is a reaction-diffusion equation derived to describe the process of phase separation. It was developed to solve problems in materials science and has been used to represent moving interfaces in phase-field models
Figure 3: The heat equation: The solution with 4 different initial conditions \(f(x)=a\sin(\pi x)\). The curve represents the solution obtained by EDE-DeepONet, and the cross markers represent the reference solution. The training parameter \(a\) is in the range \([1,2)\), so we give three examples in this range. We also present a case out of the range, which also shows good accuracy in Figure 3-(d).
in fluid dynamics. The Allen-Cahn equation can be treated as a gradient flow in \(L^{2}\) with a specific energy. We discuss the 1D and 2D cases as follows:
#### 6.3.1 1D case
(a) Various initial conditions:
We start with the simple case, the 1D Allen-Cahn equation, which can be described by the following equations:
\[u_{t}=u_{xx}-g(u) \tag{49}\] \[u(x,0)=a\sin\pi x\] (50) \[u(-1,t)=u(1,t)=0 \tag{51}\]
The corresponding Ginzburg-Landau free energy is \(E[u]=\int_{-1}^{1}\frac{1}{2}|u_{x}|^{2}dx+\int_{-1}^{1}G(u)dx\), where \(G(u)=\frac{1}{4\epsilon^{2}}(u^{2}-1)^{2}\) and \(g(u)=G^{\prime}(u)=\frac{1}{\epsilon^{2}}u(u^{2}-1)\), \(\epsilon=0.1\). The parameter \(\epsilon\) affects the width of the jump at the steady state, as shown in Figure 7 (c), (j), (o) and (t). In EDE-DeepONet, we set \(\Delta t=10^{-4}\), the number of spatial points \(N_{x}\) is 51 and the range of \(a\) is \([0.1,0.5]\). We also compare the modified energy and the original energy in Figure 6. The modified energy approximates the original energy well even in this much more complicated form. Then, we compare 4 solutions with different \(a\in[0.1,0.5]\) obtained by EDE-DeepONet against the reference solution obtained by the SAV method in traditional numerical computation, as shown in Figure 7. The error is shown in Table 3, where the error is
\begin{table}
\begin{tabular}{||c|c c c c||} \hline Error & \(T=0.025\) & \(T=0.05\) & \(T=0.075\) & \(T=0.1\) \\ \hline \hline \(c=1.2\) & \(1.30\times 10^{-5}\) & \(1.43\times 10^{-5}\) & \(1.35\times 10^{-5}\) & \(1.20\times 10^{-5}\) \\ \hline \(c=1.5\) & \(1.35\times 10^{-5}\) & \(1.27\times 10^{-5}\) & \(9.80\times 10^{-6}\) & \(7.80\times 10^{-6}\) \\ \hline \(c=1.8\) & \(1.17\times 10^{-5}\) & \(1.03\times 10^{-5}\) & \(7.88\times 10^{-5}\) & \(1.83\times 10^{-5}\) \\ \hline \(c=2.5\) & \(2.20\times 10^{-4}\) & \(1.34\times 10^{-4}\) & \(6.02\times 10^{-5}\) & \(7.08\times 10^{-6}\) \\ \hline \end{tabular}
\end{table}
Table 2: The parametric heat equation: The initial condition of the PDE is \(f(x)=\sin\left(\pi x\right)\). The error is defined by \(\frac{1}{N_{x}}\sum_{k=1}^{N_{x}}(u(x_{k})-\hat{u}(x_{k}))^{2}\), where \(N_{x}=51\), \(u\) is the solution obtained by EDE-DeepONet and \(\hat{u}\) is the exact solution.
Figure 4: The parametric heat equation: The modified energy and original energy when training the network. Each iteration step represents one forward step of the PDE’s numerical solution with \(\Delta t=2.5\times 10^{-4}\). This kind of PDE is more complicated, so it needs more restarts in the training process. The original energy keeps decreasing and the modified energy also shows a good approximation of the original energy.
defined in the same way as in Example 1. The case \(a=0.6\notin[0.1,0.5)\) shows that EDE-DeepONet can predict the solution well outside the training range. We compare the solutions with 4 different initial condition parameters \(a\) every 100 steps until the final time \(T=0.04\) in Figure 7. Each row presents the solution under the same initial condition but at different evolution times \(T\). This example shows that EDE-DeepONet can deal with PDEs with a jump, which is hard for other neural networks.
(b) Various thicknesses of the interface:
Heuristically, \(\epsilon\) represents the thickness of the interface in the phase separation process. A sharp interface is obtained as \(\epsilon\to 0\) while evolving in time. Both theoretical and numerical analyses of this limit contribute to the understanding of the equation, cf. e.g. [34; 35]. We take \(\epsilon\) as a training parameter. The problem can be described as:
\[u_{t}=u_{xx}-\frac{1}{\epsilon^{2}}(u^{3}-u) \tag{52}\] \[u(-1,t)=u(1,t)=0 \tag{53}\]
Figure 5: The parametric heat equation: The solution with 4 different parameters \(c\). The curve represents the solution obtained by EDE-DeepONet and the cross markers represent the reference solution. The training parameter \(c\) is in the range \([1,2)\), so we give 3 examples in this range. We also present a case out of the range in Figure 5-(d).
Since the training sample contains the parameter \(\epsilon\), we cannot use the same initial condition as in the last example. We run a spectral method for a few steps with initial condition \(u(x,0)=0.4\sin{(\pi x)}\), and the training samples are generated based on the numerical solution \(u_{\epsilon}(x,0.02)\). We randomly select \(50\) different \(\epsilon\) from \([0.1,0.2]\). We set the learning rate as \(\Delta t=10^{-4}\) and apply the adaptive time stepping strategy. We obtain the predicted solution after \(400\) iterations with different \(\epsilon\). The remaining settings are the same as in the last example. The solution with different \(\epsilon\) is shown in Figure 8. As \(\epsilon\) becomes smaller, the interface becomes sharper. Besides, although the range of the training parameter is \([0.1,0.2]\), we are also able to obtain the solution outside this range. EDE-DeepONet can track the limit of \(\epsilon\) in only one training process, whereas traditional numerical methods can hardly do so.
#### 6.3.2 2D case
The 2D Allen-Cahn equation is even more complex. The problem can be described as follows:
\[u_{t}=\Delta u-g(u) \tag{54}\] \[u(x,y,0)=a\sin(\pi x)\sin(\pi y)\] (55) \[u(-1,y,t)=u(1,y,t)=u(x,-1,t)=u(x,1,t)=0 \tag{56}\]
The corresponding Ginzburg-Landau free energy is \(E[u]=\int_{-1}^{1}\int_{-1}^{1}\frac{1}{2}(|u_{x}|^{2}+|u_{y}|^{2})dxdy+\int_{-1}^{1}\int_{-1}^{1}G(u)dxdy\), where \(G(u)=\frac{1}{4\epsilon^{2}}(u^{2}-1)^{2}\) and \(g(u)=G^{\prime}(u)=\frac{1}{\epsilon^{2}}u(u^{2}-1)\). Usually, we take \(\epsilon=0.1\). In the training process, we take \(\Delta t=2\times 10^{-4}\). The number of spatial points is \(51\times 51\) and the number of training parameters \(a\) is \(20\). The way to choose \(a\in(0.1,0.4)\) and \(x\) is the same as in Example 1. We first compare the reference solution, obtained by the traditional SAV method, and the solution obtained by EDE-DeepONet with initial condition \(f(x,y)=0.2\sin(\pi x)\sin(\pi y)\). EDE-DeepONet predicts the solution correctly based on Table 4 and Figure 9. Then, in order to show its accuracy, we draw Figure 10 with more parameters. All the examples show the expected phase-separation trends. The case \(a=0.4\) is out of the training range, but it still approaches the reference solution.
Figure 6: 1D Allen-Cahn equation: The modified energy and original energy when training the network are shown above. Each iteration step represents one forward step of the PDE’s numerical solution with \(\Delta t=10^{-4}\). The modified energy shows the same trends as the original energy.
## 7 Concluding Remarks
In this paper, we provide a new neural network architecture to solve parametric PDEs with different initial conditions while maintaining the energy dissipation of the dynamical system. We first introduce the energy dissipation law of dynamical systems into the DeepONet. We also introduce an adaptive time stepping strategy and a restart strategy; our experiments show that both strategies help keep the modified energy close to the original energy. To avoid the high cost of training the DeepONet, we evolve the neural network parameters with Euler methods. In this article, we adopt the SAV method to solve gradient flow problems. With this successful attempt, more work could be done. For example, one can consider general Wasserstein gradient flow problems. We only adopt the basic architecture of the DeepONet; more advanced architectures are compatible with our work and may further improve the accuracy of EDE-DeepONet.
## Acknowledgments
SJ and SZ gratefully acknowledge the support of NSF DMS-1720442 and AFOSR FA9550-20-1-0309. GL and ZZ gratefully acknowledge the support of the National Science Foundation (DMS-1555072, DMS-2053746, and DMS
\begin{table}
\begin{tabular}{||c|c c c||} \hline Error & \(T=0.01\) & \(T=0.02\) & \(T=0.03\) \\ \hline \hline \(a=0.15\) & \(1.23\times 10^{-4}\) & \(6.53\times 10^{-4}\) & \(2.75\times 10^{-3}\) \\ \hline \(a=0.2\) & \(2.24\times 10^{-4}\) & \(1.10\times 10^{-3}\) & \(4.04\times 10^{-3}\) \\ \hline \(a=0.3\) & \(4.28\times 10^{-4}\) & \(1.84\times 10^{-3}\) & \(5.76\times 10^{-3}\) \\ \hline \(a=0.35\) & \(5.31\times 10^{-4}\) & \(2.17\times 10^{-3}\) & \(6.25\times 10^{-3}\) \\ \hline \(a=0.4\) & \(6.94\times 10^{-4}\) & \(2.71\times 10^{-3}\) & \(7.22\times 10^{-3}\) \\ \hline \end{tabular}
\end{table}
Table 4: 2D Allen-Cahn equation: The initial condition of the 2D Allen-Cahn equation is \(f(x,y)=a\sin\left(\pi x\right)\sin\left(\pi y\right)\). The error is defined by \(\frac{1}{N_{x}N_{y}}\sum_{k=1}^{N_{x}}\sum_{j=1}^{N_{y}}(u(x_{k},y_{j})-\hat{u}(x_{k},y_{j}))^{2}\), where \(N_{x}=N_{y}=51\), \(u\) is the solution obtained by EDE-DeepONet and \(\hat{u}\) is the reference solution.
Figure 7: 1D Allen-Cahn equation: The solution of the 1D Allen-Cahn equation with 4 different initial conditions \(f(x)=a\sin\pi x\). The curve represents the solution obtained by our model, and the cross markers represent the reference solution. We draw the figure every 100 steps. The range of \(a\) is \([0.1,0.5]\). We also compare the solution with \(a\notin[0.1,0.5]\). All the figures show the trend of phase separation.
2134209), Brookhaven National Laboratory Subcontract 382247, and U.S. Department of Energy (DOE) Office of Science Advanced Scientific Computing Research program DE-SC0021142 and DE-SC0023161.
Figure 8: 1D Allen-Cahn equation: Solutions with different thicknesses of the interface at the same final time. The curve represents the solution obtained by EDE-DeepONet; the cross markers represent the reference solution.
Figure 10: 2D Allen-Cahn equation: The solution of the 2D Allen-Cahn equation with 4 different initial conditions \(f(x,y)=a\sin\pi x\sin\pi y\). The training parameter \(a\in[0.1,0.4]\). We draw three figures where \(a\) is in the training range and one figure where \(a\) is out of the training range. All the figures show the phase separation trends according to the reference solution. As \(a\) moves further away from the training range, the error tends to be larger.
Figure 9: 2D Allen-Cahn equation: (a)-(d) show the reference solution of the 2D Allen-Cahn equation with initial condition \(f(x,y)=0.3\sin\left(\pi x\right)\sin\left(\pi y\right)\). (e)-(h) show the solution obtained by EDE-DeepONet.
2302.09323 | Heterogeneous Graph Convolutional Neural Network via Hodge-Laplacian for
Brain Functional Data | This study proposes a novel heterogeneous graph convolutional neural network
(HGCNN) to handle complex brain fMRI data at regional and across-region levels.
We introduce a generic formulation of spectral filters on heterogeneous graphs
by introducing the $k-th$ Hodge-Laplacian (HL) operator. In particular, we
propose Laguerre polynomial approximations of HL spectral filters and prove
that their spatial localization on graphs is related to the polynomial order.
Furthermore, based on the bijection property of boundary operators on simplex
graphs, we introduce a generic topological graph pooling (TGPool) method that
can be used at any dimensional simplices. This study designs HL-node, HL-edge,
and HL-HGCNN neural networks to learn signal representation at a graph node,
edge levels, and both, respectively. Our experiments employ fMRI from the
Adolescent Brain Cognitive Development (ABCD; n=7693) to predict general
intelligence. Our results demonstrate the advantage of the HL-edge network over
the HL-node network when functional brain connectivity is considered as
features. The HL-HGCNN outperforms the state-of-the-art graph neural networks
(GNNs) approaches, such as GAT, BrainGNN, dGCN, BrainNetCNN, and Hypergraph NN.
The functional connectivity features learned from the HL-HGCNN are meaningful
in interpreting neural circuits related to general intelligence. | Jinghan Huang, Moo K. Chung, Anqi Qiu | 2023-02-18T12:58:50Z | http://arxiv.org/abs/2302.09323v1 | # Heterogeneous Graph Convolutional Neural Network via Hodge-Laplacian for Brain Functional Data
###### Abstract
This study proposes a novel heterogeneous graph convolutional neural network (HGCNN) to handle complex brain fMRI data at regional and across-region levels. We introduce a generic formulation of spectral filters on heterogeneous graphs by introducing the \(k-th\) Hodge-Laplacian (HL) operator. In particular, we propose Laguerre polynomial approximations of HL spectral filters and prove that their spatial localization on graphs is related to the polynomial order. Furthermore, based on the bijection property of boundary operators on simplex graphs, we introduce a generic topological graph pooling (TGPool) method that can be used at any dimensional simplices. This study designs HL-node, HL-edge, and HL-HGCNN neural networks to learn signal representation at a graph node, edge levels, and both, respectively. Our experiments employ fMRI from the Adolescent Brain Cognitive Development (ABCD; n=7693) to predict general intelligence. Our results demonstrate the advantage of the HL-edge network over the HL-node network when functional brain connectivity is considered as features. The HL-HGCNN outperforms the state-of-the-art graph neural networks (GNNs) approaches, such as GAT, BrainGNN, dGCN, BrainNetCNN, and Hypergraph NN. The functional connectivity features learned from the HL-HGCNN are meaningful in interpreting neural circuits related to general intelligence.
## 1 Introduction
Functional magnetic resonance imaging (fMRI) is one of the non-invasive imaging techniques to measure blood-oxygen-level-dependent (BOLD) signals [8]. The fluctuation of fMRI time series signals can characterize brain activity, and the synchronization of fMRI time series describes the functional connectivity among brain regions, which is useful for understanding brain functional organization.
There has been a growing interest in using graph neural network (GNN) to learn the features of fMRI time series and functional connectivity that are relevant to cognition or mental disorders [17, 23].
GNN often considers a brain functional network as a binary undirected graph, where nodes are brain regions, and edges denote which two brain regions are functionally connected. Functional time series, functional connectivity, or graph metrics (i.e., degree, strength, clustering coefficients, participation, etc.) are defined as a multi-dimensional signal at each node. A substantial body of research implements a convolutional operator over the nodes of a graph in the spatial domain, where the convolutional operator computes the fMRI feature of each node via aggregating the features from its neighborhood nodes [23, 17]. Various forms of GNN with spatial graph convolution are implemented via 1) introducing an attention mechanism to graph convolution by specifying different weights to different nodes in a neighborhood (GAT, [9]); 2) introducing a clustering-based embedding method over all the nodes and pooling the graph based on the importance of nodes (BrainGNN, [17]); 3) designing an edge-weight-aware message passing mechanism [3]; 4) training dynamic brain functional networks based on updated node features (dGCN, [23]). BrainGNN and dGCN achieve superior performance on Autism Spectrum Disorder (ASD) [17] and attention deficit hyperactivity disorder (ADHD) classification [23]. Graph convolution has also been solved in the spectral domain via the graph Laplacian [2]. For the sake of computational efficiency when graphs are large, the Chebyshev polynomials and other polynomials were introduced to approximate spectral filters for GNN [4, 10]. For large graphs, the spectral graph convolution with a polynomial approximation is computationally efficient and spatially localized [10].
Despite the success of the GNN techniques on cognitive prediction and disease classification [17, 23], the graph convolution aggregates brain functional features only over nodes and updates features for each node of the graph. Nevertheless, signal transfer from one brain region to another is through their connection, which can, to some extent, be characterized by their functional connectivity. The strength of the connectivity determines which edges signals pass through. Therefore, there is a need for heterogeneous graphs with different types of information attached to nodes, such as functional time series and node efficiency, and edges, such as functional connectivity and path length.
Lately, a few studies have focused on smoothing signals through the topological connection of edges [13, 12]. Kawahara et al. [15] proposed BrainNetCNN to aggregate brain functional connectivities among edges. However, the brain functional connectivity matrices at each layer are then no longer symmetric, which contradicts the construction of the brain functional network. Jo et al. [13] employed a dual graph that switches the nodes and edges of the original graph so that the GNN approaches described above can be applied (Hypergraph NN). However, the dual graph normally increases the dimensionality of the graph; to overcome this, Jo et al. [13] only considered important edges. Similarly, Jiang et al. [12] introduced convolution with edge-node switching that embeds both nodes and edges into a latent feature space. When graphs are not sparse, the computation of this approach can be intensive. The above-mentioned edge-node switching based models achieved great success in social networks and molecular science [13, 12], suggesting that GNN approaches on graph edges have advantages when information is defined on graph edges. Thus, it is crucial to consider heterogeneous graphs where multiple types of features are defined on nodes, edges, etc. This is particularly suitable for brain functional data.
This study develops a novel heterogeneous graph convolutional neural network (HGCNN) that simultaneously learns both node and edge functional features from fMRI data for predicting cognition or mental disorders. The HGCNN is designed to learn 1) node features from neighborhood node features based on the topological connections of the nodes; 2) edge features from neighborhood edge features based on the topological connections of the edges. To achieve these goals, the HGCNN considers a brain functional network as a simplex graph that allows characterizing node-node, node-edge, edge-edge, and higher-order topology. We develop a generic convolution framework by introducing the Hodge-Laplacian (HL) operator on the simplex graph and designing HL-spectral graph filters to aggregate features among nodes or edges based on their topological connections. In particular, this study takes advantage of the spectral graph filters in [4, 10] and approximates HL-spectral graph filters using polynomials to spatially localize these filters. We shall refer to our HGCNN as HL-HGCNN in the rest of the paper. Unlike the GNNs described above [12, 23], this study also introduces a simple graph pooling approach based on graph topology such that the HL can be automatically updated for the convolution in successive layers and the spatial dimension of the graph is reduced. Hence, the HL-HGCNN learns spectral filters along nodes, edges, or higher-dimensional simplices to extract brain functional features.
We illustrate the use of the HL-HGCNN on fMRI time series and functional connectivity to predict general intelligence based on a large-scale adolescent cohort study (Adolescent Brain Cognitive Development (ABCD), n=7693). We also compare the HL-HGCNN with the state-of-the-art GNN techniques described above and demonstrate the outstanding performance of the HL-HGCNN. Hence, this study proposes the following novel techniques:
1. a generic graph convolution framework to smooth signals across nodes, edges, or higher-dimensional simplex;
2. spectral filters on nodes, edges, or higher-dimensional simplex via the HL operator;
3. HL-spectral filters with a spatial localization property via polynomial approximations;
4. a spatial pooling operator based on graph topology.
## 2 Methods
This study designs a heterogeneous graph convolutional neural network via the Hodge-Laplacian operator (HL-HGCNN) that can learn the representation of brain functional features at a node-level and an edge-level based on the graph topology. In the following, we will first introduce a generic graph convolution framework to design spectral filters on nodes and edges to learn node-level and edge-level brain functional representation based on its topology achieved via the HL operator. We will introduce the polynomial approximation of the HL spectral
filters to overcome challenges on spatial localization. Finally, we will define an efficient pooling operation based on the graph topology for the graph reduction and update of the HL operator.
### Learning Node-Level and Edge-Level Representation via the Hodge-Laplacian Operator
In this study, the brain functional network is characterized by a heterogeneous graph, \(G=\{V,E\}\) with brain regions as nodes, \(V=\{v_{i}\}_{i=1}^{n}\), and their connections as edges, \(E=\{e_{ij}\}_{i,j=1,2,\cdots,n}\), as well as functional time series defined on the nodes and functional connectivity defined on the edges. This study aims to design convolutional operations for learning the representation of functional time series at nodes and the representation of functional connectivity at edges based on node-node and edge-edge connections (or the topology of graph \(G\)).
Mathematically, nodes and edges are called \(0\)- and \(1\)-dimensional simplices. The topology of \(G\) can be characterized by the _boundary operator_\(\boldsymbol{\partial}_{k}\). \(\boldsymbol{\partial}_{1}\) encodes how two \(0\)-dimensional simplices, or nodes, connect to form a \(1\)-dimensional simplex (an edge) [6]. In graph theory [16], \(\boldsymbol{\partial}_{1}\) can be represented as a traditional incidence matrix of size \(n\times n(n-1)/2\), where nodes are indexed over rows and edges over columns. Similarly, the second-order boundary operator \(\boldsymbol{\partial}_{2}\) encodes how \(1\)-dimensional simplices, or edges, are connected to form connections among \(3\) nodes (a \(2\)-dimensional simplex, or triangle).
The goal of spectral filters is to learn the node-level representation of fMRI features from neighborhood nodes' fMRI features and the edge-level representation of fMRI features from neighborhood edges' fMRI features. The neighborhood information of nodes and edges can be well characterized by the _boundary operators_\(\boldsymbol{\partial}_{k}\) of graph \(G\). It is natural to incorporate the _boundary operators_ of graph \(G\) in the \(k\)-th Hodge-Laplacian (HL) operator defined as
\[\boldsymbol{\mathcal{L}}_{k}=\boldsymbol{\partial}_{k+1}\boldsymbol{\partial }_{k+1}^{\top}+\boldsymbol{\partial}_{k}^{\top}\boldsymbol{\partial}_{k}. \tag{1}\]
When \(k=0\), the \(0\)-th HL operator is
\[\boldsymbol{\mathcal{L}}_{0}=\boldsymbol{\partial}_{1}\boldsymbol{\partial}_{1} ^{\top} \tag{2}\]
over nodes. This special case is equivalent to the standard Graph Laplacian operator, \(\boldsymbol{\mathcal{L}}_{0}=\Delta\). When \(k=1\), the \(1\)-st HL operator is defined over edges as
\[\boldsymbol{\mathcal{L}}_{1}=\boldsymbol{\partial}_{2}\boldsymbol{\partial}_{2 }^{\top}+\boldsymbol{\partial}_{1}^{\top}\boldsymbol{\partial}_{1}. \tag{3}\]
We can obtain orthonormal bases \(\boldsymbol{\psi}_{k}^{0},\boldsymbol{\psi}_{k}^{1},\boldsymbol{\psi}_{k}^{2},\cdots\) by solving eigensystem \(\boldsymbol{\mathcal{L}}_{k}\boldsymbol{\psi}_{k}^{j}=\lambda_{k}^{j} \boldsymbol{\psi}_{k}^{j}\). We now consider an HL spectral filter \(h\) with spectrum \(h(\lambda_{k})\) as
\[h(\cdot,\cdot)=\sum_{j=0}^{\infty}h(\lambda_{k}^{j})\psi_{k}^{j}(\cdot)\psi_{ k}^{j}(\cdot). \tag{4}\]
A generic form of spectral filtering of a signal \(f\) on the heterogeneous graph \(G\) can be defined as
\[f^{\prime}(\cdot)=h*f(\cdot)=\sum_{j=0}^{\infty}h(\lambda_{k}^{j})c_{k}^{j}\psi_{ k}^{j}(\cdot)\, \tag{5}\]
where \(f(\cdot)=\sum_{j=0}^{\infty}c_{k}^{j}\psi_{k}^{j}(\cdot)\). When \(k=0\), \(f\) is defined on the nodes of graph \(G\). Eq. (5) indicates the convolution of a signal \(f\) defined on \(V\) with a filter \(h\).
Likewise, when \(k=1\), \(f\) is defined on the edges of graph \(G\). Eq. (5) then indicates the convolution of a signal \(f\) defined on \(E\) with a filter \(h\). Eq. (5) is generic that can be applied to smoothing signals defined on higher-dimensional simplices. Nevertheless, this study considers the heterogeneous graph only with signals defined on nodes and edges (0- and 1-dimensional simplices). In the following, we shall denote these two as "HL-node filtering" and "HL-edge filtering", respectively.
### Laguerre Polynomial Approximation of the HL Spectral Filters
The shape of the spectral filters \(h\) in Eq. (5) determines how many nodes or edges are aggregated in the filtering process. Our goal in the HL-HGCNN is to design \(h\) such that the representations at nodes and edges are learned through their neighborhoods. This is challenging in the spectral domain since it requires \(h(\lambda)\) with a broad spectrum. In this study, we propose to approximate the filter spectrum \(h(\lambda_{k})\) in Eq. (5) as an expansion of Laguerre polynomials, \(T_{p}\), \(p=0,1,2,\ldots,P-1\), such that
\[h(\lambda_{k})=\sum_{p=0}^{P-1}\theta_{p}T_{p}(\lambda_{k})\, \tag{6}\]
where \(\theta_{p}\) is the \(p^{th}\) expansion coefficient associated with the \(p^{th}\) Laguerre polynomial. \(T_{p}\) can be computed from the recurrence relation of \(T_{p+1}(\lambda_{k})=\frac{(2p+1-\lambda_{k})T_{p}(\lambda_{k})-pT_{p-1}( \lambda_{k})}{p+1}\) with \(T_{0}(\lambda_{k})=1\) and \(T_{1}(\lambda_{k})=1-\lambda_{k}\).
We can rewrite the convolution in Eq. (5) as
\[f^{\prime}(\cdot)=h*f(\cdot)=\sum_{p=0}^{P-1}\theta_{p}T_{p}(\mathbf{ \mathcal{L}}_{k})f(\cdot). \tag{7}\]
Analogous to the spatial localization property of the polynomial approximation of the graph Laplacian (the 0-th HL) spectral filters [10, 4, 21], the Laguerre polynomial approximation of the 1-st HL spectral filters can also achieve this localization property. Assume two edges, \(e_{ij}\) and \(e_{mn}\), on graph \(G\). The shortest distance between \(e_{ij}\) and \(e_{mn}\) is denoted by \(d_{G}(ij,mn)\) and computed as the minimum number of edges on the path connecting \(e_{ij}\) and \(e_{mn}\). Hence, \((\mathbf{\mathcal{L}}_{1}^{P})_{e_{ij},e_{mn}}=0\quad if\quad d_{G}(ij,mn)>P\), where \(\mathbf{\mathcal{L}}_{1}^{P}\) denotes the \(P\)-th power of the 1-st HL. Hence, the spectral filter represented by the \(P\)-th order Laguerre polynomials of the 1-st HL is localized within the \(P\)-hop edge neighborhood.
Therefore, the spectral filters in Eq. (6) have the property of spatial localization. This proof can be extended to the \(k\)-th HL spectral filters. In Section 3 (Results), we will demonstrate this property using simulation data.
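A minimal PyTorch sketch of the filtering in Eq. (7), using the three-term Laguerre recurrence so that no eigendecomposition is needed (the names are ours; \(L\) is a dense \(k\)-th HL matrix and at least two coefficients are assumed):

```python
import torch

def laguerre_hl_conv(L, f, theta):
    # y = sum_p theta_p T_p(L) f with
    # T_{p+1} = ((2p+1-L) T_p - p T_{p-1}) / (p+1), T_0 = I, T_1 = I - L.
    T_prev, T_curr = f, f - L @ f
    out = theta[0] * T_prev + theta[1] * T_curr
    for p in range(1, len(theta) - 1):
        T_next = ((2 * p + 1) * T_curr - L @ T_curr - p * T_prev) / (p + 1)
        out = out + theta[p + 1] * T_next
        T_prev, T_curr = T_curr, T_next
    return out  # mixes only simplices within (P-1) hops for P coefficients
```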
### Topological Graph Pooling (TGPool)
The pooling operation has demonstrated its effectiveness on grid-like image data [22]. However, spatial graph pooling is not straightforward, especially for heterogeneous graphs. This study introduces a generic topological graph pooling (TGPool) approach that includes coarsening of the graph, pooling of signals, and an update of the Hodge-Laplacian operator. For this, we take advantage of the one-to-one correspondence between the _boundary operators_ and graph \(G\) and define the three operations for pooling based on the _boundary operators_. As the _boundary operators_ encode the topology of the graph, our graph pooling is topologically based.
For graph coarsening, we generalize the Graclus multilevel clustering algorithm [5] to coarsen the \(k\)-dimensional simplices on graph \(G\). We first cluster similar \(k\)-dimensional simplices based on their associated features via local normalized cut. At each coarsening level, two neighboring \(k\)-dimensional simplices with maximum local normalized cut are matched until all \(k\)-dimensional simplices are explored [19]. A balanced binary tree is generated where each \(k\)-dimensional simplex has either one (i.e., singleton) or two child \(k\)-dimensional simplices. Fake \(k\)-dimensional simplices are added to pair with those singletons, and the weights of the \((k+1)\)-dimensional simplices involving fake \(k\)-dimensional simplices are set to \(0\). The pooling on this binary tree can be efficiently implemented as a simple \(1\)-dimensional pooling of size \(2\). Then, two matched \(k\)-dimensional simplices are merged as a new \(k\)-dimensional simplex by removing the \(k\)-dimensional simplex with the lower degree and the \((k+1)\)-dimensional simplices that are connected to it. To coarsen the graph, we define a new _boundary operator_ by deleting the corresponding rows and columns in the boundary operator and compute the HL operators via Eq. (2). Finally, the signal of the new \(k\)-dimensional simplex is defined as the average (or max) of the signals at the two merged \(k\)-dimensional simplices. Fig. 1 illustrates the graph pooling of \(0\)-dimensional and \(1\)-dimensional simplices and the boundary operators of the updated graph after pooling.
Figure 1: Topological Graph Pooling (TGPool). Panels (a) and (b) illustrate the topological graph pooling of (a) \(0\)-dimensional (nodes) and (b) \(1\)-dimensional (edges) simplices. The color at each node or edge indicates features and their similarity across nodes or edges.
### Hodge-Laplacian Heterogeneous Graph Convolutional Neural Network (HL-HGCNN)
We design the HL-HGCNN with the temporal, node, and edge convolutional layers to learn temporal and spatial information of brain functional time series and functional connectivity. Each layer includes the convolution, leaky rectified linear unit (leaky ReLU), and pooling operations. Fig.2 illustrates the overall architecture of the HL-HGCNN model, the temporal, node, and edge convolutional layers.
**Filters.** Denote \(h_{t}\), \(h_{v}\), \(h_{e}\) to be temporal filters, HL-node filters, HL-edge filters, respectively. \(h_{t}\) is a simple 1-dimensional filter along the time domain with different kernel sizes to extract the information of brain functional time series at multiple temporal scales. \(h_{v}\) and \(h_{e}\) are defined in Eq. (6), where \(\theta_{p}\) are the parameters to be estimated in the HL-HGCNN. As mentioned earlier, \(P\) determines the kernel size of \(h_{v}\) and \(h_{e}\) and extracts the higher-order information of the brain functional time series and functional connectivity at multiple spatial scales.
**Leaky ReLU.** This study employs leaky rectified linear unit (ReLU) as an activation function, \(\sigma\), since negative functional time series and functional connectivity are considered biologically meaningful.
**Pooling.** In the temporal convolutional layer, traditional 1-dimensional max pooling operations are applied in the temporal dimension of the functional time series. In the edge and node convolutional layers, TGPool is applied to reduce the dimension of the graph and the dimension of the node and edge signals.
**Output Layer.** We use one more graph convolutional layer to translate the feature of each node or edge into a scalar. Then, we concatenate the vectorized node and edge representations as the input of the output layer. In this study, the output layers contain fully-connected layers.
Figure 2: HL-HGCNN architecture. Panel (A) illustrates the overall architecture of the HL-HGCNN model. Panels (B-D) respectively show the architectures of the HL-edge, temporal, and HL-node convolutional layers.
### Implementation
\(\mathbf{\mathcal{L}}_{0}\) **and \(\mathbf{\mathcal{L}}_{1}\).** Given a brain functional connectivity matrix, we first build a binary matrix in which an element is assigned one if the absolute value of the corresponding connectivity is greater than a threshold, and zero otherwise. We compute the boundary operator \(\mathbf{\partial}_{1}\), whose numbers of rows and columns equal the number of brain regions and the number of functional connections, respectively. The \(i\)-th row of \(\mathbf{\partial}_{1}\) encodes the functional connections of the \(i\)-th vertex and the \(j\)-th column of \(\mathbf{\partial}_{1}\) encodes how two vertices are connected to form an edge [6, 7]. Hence, \(\mathbf{\mathcal{L}}_{0}=\mathbf{\partial}_{1}\mathbf{\partial}_{1}^{\top}\).
According to Eq. (1), the computation of \(\mathbf{\mathcal{L}}_{1}\) involves the second-order boundary operator \(\mathbf{\partial}_{2}\), which characterizes the interaction of edges and triangles. In our construction, the brain functional network is not endowed with triangle simplices, so \(\mathbf{\partial}_{2}=0\). Hence, \(\mathbf{\mathcal{L}}_{1}=\mathbf{\partial}_{1}^{\top}\mathbf{\partial}_{1}\).
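A minimal NumPy sketch of this construction; the \(\pm 1\) orientation of \(\mathbf{\partial}_{1}\) is a common convention and our assumption, since the text only specifies an incidence matrix:

```python
import numpy as np

def hodge_laplacians(fc, thr):
    # Binarize the functional connectivity matrix fc, then assemble the
    # boundary operator d1 (regions x connections) and the HL operators.
    n = fc.shape[0]
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)
             if abs(fc[i, j]) > thr]
    d1 = np.zeros((n, len(edges)))
    for e, (i, j) in enumerate(edges):
        d1[i, e], d1[j, e] = 1.0, -1.0   # assumed orientation
    L0 = d1 @ d1.T                        # Eq. (2)
    L1 = d1.T @ d1                        # Eq. (3) with d2 = 0
    return L0, L1
```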
**Optimization.** We implement the framework in Python 3.9.13, PyTorch 1.12.1 and the PyTorch Geometric 2.1.0 library. The HL-HGCNN is composed of two temporal, node, and edge convolution layers with \(\{8,8\}\), \(\{16,1\}\), and \(\{32,32\}\) filters, respectively. The orders of the Laguerre polynomials for the 0-th and 1-st HL approximations are set to 3 and 4, respectively. The output layer contains three fully connected layers with 256, 128 and 1 hidden nodes, respectively. Dropout with a rate of 0.5 is applied to every layer and Leaky ReLU with a leak rate of 0.33 is used in all layers. These model-relevant parameters are determined using greedy search. The HL-HGCNN model is trained on an NVIDIA Tesla V100SXM2 GPU with 32GB RAM by the ADAM optimizer with a mini-batch size of 32. The initial learning rate is set as 0.005 and decays by 0.95 after every epoch. The weight decay parameter is 0.005.
### ABCD Dataset
This study uses resting-state fMRI (rs-fMRI) images from the ABCD study, an open-source and ongoing study of youth between 9-11 years old ([https://abcdstudy.org/](https://abcdstudy.org/)). This study uses the same dataset of 7693 subjects and the same fMRI preprocessing pipeline as stated in Huang et al. [11]. A node represents one of 268 brain regions of interest (ROIs) [18] with its averaged time series as node features. Each edge represents the functional connection between any two ROIs, with the functional connectivity computed via Pearson's correlation of their averaged time series as edge features. General intelligence is defined as the average of 5 NIH Toolbox cognition scores, including Dimensional Change Card Sort, Flanker, Picture Sequence Memory, List Sorting Working Memory, and Pattern Comparison Processing Speed [1]. General intelligence ranges from 64 to 123 with mean and standard deviation of \(95.3\pm 7.3\) among 7693 subjects.
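A minimal sketch of how the node and edge features can be assembled from the region-averaged time series (the array shapes and function name are ours):

```python
import numpy as np

def build_graph_features(ts):
    # ts: (268, T) region-averaged BOLD time series.
    # Node features: the time series themselves.
    # Edge features: pairwise Pearson correlations.
    fc = np.corrcoef(ts)   # (268, 268) functional connectivity
    return ts, fc
```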
## 3 Results
This section first demonstrates the spatial localization property of HL-edge filters in relation to the order of the Laguerre polynomials via simulated data. We then demonstrate the use of HL-edge filtering and its use in GNN for predicting general intelligence using the ABCD dataset.
### Spatial Localization of the HL-Edge Filtering via Laguerre Polynomial Approximations
We illustrate the spatial localization property of the HL-edge filtering by designing a pulse signal at one edge (Fig. 3 (a)) and smoothing it via the HL-edge filter. When applying the HL-edge filter approximated via the \(1^{st}\)-, \(2^{nd}\)-, \(3^{rd}\)-, \(4^{th}\)-order Laguerre polynomials, the filtered signals shown in Fig. 3 (b-e) suggest that the spatial localization of the HL-edge filters is determined by the order of the Laguerre polynomials. This phenomenon can also be achieved using multi-layer HL-edge filters where each layer contains HL-edge filters approximated using the \(1^{st}\)-order Laguerre polynomial (see Fig. 3 (f)).
### HL-node vs. HL-edge filters
We aim to examine the advantage of the HL-edge filters over the HL-node filters when the fMRI data by nature characterize edge information, such as functional connectivity. When functional connectivities are defined at a node, they form a vector of the functional connectivities related to this node. In contrast, by nature, the functional connectivity represents the functional connection strength of two brain regions (i.e., an edge); hence, it is a scalar defined at an edge. We design the HL-node network with two HL-node convolutional layers (see Fig. 2D) and an output layer with three fully connected layers, and likewise the HL-edge network with two HL-edge convolutional layers (see Fig. 2B) and an output layer with three fully connected layers. We employ five-fold cross-validation six times and evaluate the prediction accuracy as the root mean square error (RMSE) between predicted and actual general intelligence. Table 1 shows that the HL-edge network has a smaller RMSE and performs better than the HL-node network (\(p=1.51\times 10^{-5}\)). This suggests the advantage of the HL-edge filters when features by nature characterize the weights of edges.
Figure 3: Spatial localization of the HL-edge filtering. Panel (a) shows the simulated signal only occurring at one edge. Panels (b-e) show the signals filtered using the HL-edge filters with the \(1^{st}\)-, \(2^{nd}\)-, \(3^{rd}\)-, \(4^{th}\)-order Laguerre polynomial approximation, respectively. Panel (f) illustrates the signals generated from the HL-edge convolution networks with 4 layers. Each layer consists of the HL-filter approximated using the \(1^{st}\)-order Laguerre polynomial.
### Comparisons with existing GNN methods
We now compare our models with the existing state-of-the-art methods stated above in terms of the prediction accuracy of general intelligence using the ABCD dataset. The first experiment is designed to compare the performance of the HL-node network with that of GAT [9], BrainGNN [17], and dGCN [23]. We adopt the architectures of BrainGNN and dGCN from Li et al. [17] and [23], as both methods were used for fMRI data. The GAT is designed with two graph convolution layers, each consisting of 32 filters and 2-head attention, determined via greedy search as in our model. The functional connectivity vector of each region is used as input features. Table 1 suggests that the HL-node network performs better than the GAT (\(p=0.0468\)) and BrainGNN (\(p=0.0195\)), and performs equivalently to dGCN (\(p=0.0618\)).
The second experiment compares the HL-edge network with BrainNetCNN [15] and Hypergraph NN [13]. The Hypergraph NN comprises two graph convolution layers with 32 filters and one hypercluster layer after the first graph convolution layer. The BrainNetCNN architecture follows the design in [15]. Table 1 shows that the HL-edge network has a smaller RMSE and performs better than BrainNetCNN (\(p=4.49\times 10^{-5}\)) and Hypergraph NN (\(p=0.0269\)).
Finally, our HL-HGCNN integrates heterogeneous types of fMRI data at nodes and edges. Table 1 shows that the HL-HGCNN performs the best compared to all the above methods (all \(p<0.03\)).
\begin{table}
\begin{tabular}{c|c c c} \hline \hline & **GNN model** & **RMSE** & \(p\)**-value** \\ \hline \multirow{4}{*}{GNN with node filtering} & **HL-Node network (ours)** & \(7.134\pm 0.011\) & \(4.01\times 10^{-6}\) \\ & **GAT**[9] & \(7.165\pm 0.020\) & \(1.91\times 10^{-5}\) \\ & **BrainGNN[17]** & \(7.144\pm 0.013\) & \(1.51\times 10^{-6}\) \\ & **dGCN[23]** & \(7.151\pm 0.012\) & \(9.83\times 10^{-6}\) \\ \hline \multirow{4}{*}{GNN with edge filtering} & **HL-Edge network (ours)** & \(7.009\pm 0.012\) & \(2.48\times 10^{-2}\) \\ & **BrainNetCNN[15]** & \(7.118\pm 0.016\) & \(5.34\times 10^{-6}\) \\ \cline{1-1} & **Hypergraph NN[13]** & \(7.051\pm 0.022\) & \(3.74\times 10^{-5}\) \\ \hline GNN with node and edge filtering & **HL-HGCNN (ours)** & **6.972\(\pm\)0.015** & - \\ \hline \hline \end{tabular}
\end{table}
Table 1: General intelligence prediction accuracy based on root mean square error (RMSE). \(p\)-value is obtained from two-sample \(t\)-tests examining the performance of each method in reference to the proposed HL-HGCNN.
Figure 4: The saliency map of the brain functional connectivity. Red boxes highlight brain networks with higher weights, indicating greater contributions to the prediction of general intelligence.
### Interpretation
We use the graph representation of the final edge convolution layer of the HL-HGCNN to compute the saliency map at the connectivity level. The group-level saliency map is computed by averaging the saliency maps across all the subjects in the dataset. The red boxes in Fig. 4 highlight the functional connectivities of the occipital regions with the prefrontal, parietal, salience, and temporal regions that contribute most to general intelligence. Moreover, our saliency map also highlights the functional connectivities of the right prefrontal regions with bilateral parietal regions, which is largely consistent with existing findings on neural activities in the frontal and parietal regions [14, 20].
## 4 Conclusion
This study proposes a novel HL-HGCNN on fMRI time series and functional connectivity for predicting cognitive ability. Our experiments demonstrate the spatial localization property of the HL spectral filters approximated via Laguerre polynomials. Moreover, our HL-node, HL-edge, and HL-HGCNN networks perform better than the existing state-of-the-art methods for predicting general intelligence, indicating the potential of our method for future prediction and diagnosis based on fMRI. Nevertheless, more experiments on different datasets are needed to further validate the robustness of the proposed model. Our method provides a generic framework that allows learning heterogeneous graph representations on simplices of any dimension, which can be extended to complex graph data. The HL-HGCNN model offers an opportunity to model high-order functional interactions among multiple brain regions, which is our future research direction.
**Acknowledgements.** This research/project is supported by the Singapore Ministry of Education (Academic research fund Tier 1) and A*STAR (H22P0M0007). Additional funding is provided by the National Science Foundation MDS-2010778, National Institute of Health R01 EB022856, EB02875. This research was also supported by the A*STAR Computational Resource Centre through the use of its high-performance computing facilities.
|
2306.16938 | Restore Translation Using Equivariant Neural Networks | Invariance to spatial transformations such as translations and rotations is a
desirable property and a basic design principle for classification neural
networks. However, the commonly used convolutional neural networks (CNNs) are
actually very sensitive to even small translations. There exist vast works to
achieve exact or approximate transformation invariance by designing
transformation-invariant models or assessing the transformations. These works
usually make changes to the standard CNNs and harm the performance on standard
datasets. In this paper, rather than modifying the classifier, we propose a
pre-classifier restorer to recover translated (or even rotated) inputs to the
original ones which will be fed into any classifier for the same dataset. The
restorer is based on a theoretical result which gives a sufficient and
necessary condition for an affine operator to be translational equivariant on a
tensor space. | Yihan Wang, Lijia Yu, Xiao-Shan Gao | 2023-06-29T13:34:35Z | http://arxiv.org/abs/2306.16938v1 | # Restore Translation Using Equivariant Neural Networks
###### Abstract
Invariance to spatial transformations such as translations and rotations is a desirable property and a basic design principle for classification neural networks. However, the commonly used convolutional neural networks (CNNs) are actually very sensitive to even small translations. There exist vast works to achieve exact or approximate transformation invariance by designing transformation-invariant models or assessing the transformations. These works usually make changes to the standard CNNs and harm the performance on standard datasets. In this paper, rather than modifying the classifier, we propose a pre-classifier restorer to recover translated (or even rotated) inputs to the original ones which will be fed into any classifier for the same dataset. The restorer is based on a theoretical result which gives a sufficient and necessary condition for an affine operator to be translational equivariant on a tensor space.
## 1 Introduction
Deep convolutional neural networks (CNNs) have outperformed humans in many computer vision tasks [9, 12]. One of the key ideas in designing CNNs is that the convolution layer is equivariant with respect to translations, which was emphasized both in the earlier work [5] and in modern CNNs [12]. However, commonly used components such as pooling [7] and dropout [19, 20], which help the network extract features and generalize, actually make CNNs not equivariant to even small translations, as pointed out in [1, 3]. As a comprehensive evaluation, Figure 1 shows that two classification CNNs suffer accuracy reductions of more than \(11\%\) and \(59\%\) respectively on CIFAR-10 and MNIST, when the inputs are horizontally and vertically translated by at most 3 pixels.
Invariance to spatial transformations, including translations, rotations and scaling, is a desirable property for classification neural networks and the past few decades have witnessed thriving explorations on this topic. In general, there exist three ways to achieve exact or approximate invariance. The first is to design transformation-invariant neural network structures [2, 6, 8, 10, 15, 16, 18, 21]. The second is to assess and approximate transformations via a learnable module [4, 11] and then use the approximation to reduce the transformed inputs to "standard" ones. The third is data augmentation [1, 3, 17] by adding various transformations of the samples in the original dataset.
Those ad-hoc architectures to achieve invariance often bring extra parameters and harm the network performance on standard datasets. Moreover, the various designs with different purposes are not compatible with each other. Data augmentation is not a scalable method since the invariance gained from a certain augmentation protocol does not generalize to other transformations [1]. All three approaches, including learnable modules such as the Spatial Transformer, require training the classifier from scratch and fail to endow existing trained networks with any invariance. It was indicated in [1] that "the problem of insuring invariance to small image transformations in neural networks while preserving high accuracy remains unsolved."
In this paper, rather than designing any in-classifier component to make the classifier invariant to some transformation, we propose a pre-classifier restorer to restore translated or rotated inputs to the original ones. The invariance is achieved by feeding the restored inputs into any following classifier. Our restorer depends only on the dataset instead of the classifier. Namely, the training processes of the restorer and classifier are separate, and a restorer is universal to any classifier trained on the same dataset.
We split the whole restoration into two stages, transformation estimation and inverse transformation, see Figure 2. In the first stage, we expect that standard inputs lead to standard outputs and that the outputs of translated inputs reflect the translations. Naturally, what we need is a strictly translation-equivariant neural network. In Section 3, we investigate, from a theoretical perspective, the sufficient and necessary condition to construct a strictly equivariant affine operator on a tensor space. The condition results in _the circular filters_, see Definition 3.5, as the fundamental module of a strictly translation-equivariant neural network. We give the canonical architecture of translation-equivariant networks, see Equation (2). In Section 4, details of the restorer are presented. We define a translation estimator, the core component of a restorer, as a strictly translation-equivariant neural network that guarantees the first component of every output on a dataset to be the largest component, see Definition 4.1. For a translated input, due to the strict equivariance, the largest component of the output reflects the translation. Thus we can translate it inversely in the second stage and obtain the original image. Though the restorer is independent of the following classifier, it indeed depends on the dataset. Given a dataset satisfying some reasonable conditions, i.e. _an aperiodic finite dataset_, see Definition 4.2, we prove the existence of a translation estimator, i.e. a restorer, with the canonical architecture for this dataset. Moreover, rotations can be viewed as translations by converting Cartesian coordinates to polar coordinates, and the rotation restorer arises in a similar way.
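A minimal sketch of the two-stage restoration under the assumption that `estimator` is a strictly translation-equivariant network whose output has the same spatial shape as the input and peaks at component \((0,\ldots,0)\) on an untranslated input (the interface is hypothetical):

```python
import numpy as np

def restore(x, estimator):
    out = estimator(x)                    # stage 1: equivariant output
    # By equivariance, the peak moves with the input, so its index is M.
    M = np.unravel_index(np.argmax(out), out.shape)
    # Stage 2: apply the inverse translation T^{-M}.
    return np.roll(x, shift=[-m for m in M], axis=tuple(range(x.ndim)))
```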
In Section 5, the experiments on MNIST, 3D-MNIST and CIFAR-10 show that our restorers not only visually restore the translated inputs but also largely eliminate the accuracy reduction phenomenon.
## 2 Related works
As generalizations of convolutional neural networks, group-equivariant convolutional neural networks [2, 6] exploited symmetries to endow networks with invariance to some group actions, such as the combination of translations and rotations by certain angles. The warped convolutions [10] converted some other spatial transformations into translations and thus obtain equivariance to these spatial transformations. Scale-invariance [21, 8, 15]
Figure 1: The accuracy reduction after vertical and horizontal translations. The translation scope is [-3, 3] pixels. Left: LeNet-5 on MNIST; Right: VGG-16 on CIFAR-10.
was injected into networks by some ad-hoc components. Random transformations [16] of feature maps were introduced in order to prevent the dependence of network outputs on specific poses of inputs. Similarly, probabilistic max pooling [18] of the hidden units over the set of transformations improved the invariance of networks in unsupervised learning. Moreover, local covariant feature detection methods [14, 22] were proposed to address the problem of extracting viewpoint-invariant features from images.
Another approach to achieve invariance is "shiftable" down-sampling [13], in which any original pixel can be linearly interpolated from the pixels on the sampling grid. Such "shiftable" down-sampling exists if and only if the sampling frequency is at least twice the highest frequency of the unsampled signal.
The Spatial Transformer [4, 11], as a learnable module, produces a predictive transformation for each input image and then spatially transforms the input to a canonical pose to simplify the inference in the subsequent layers. Our restorers give input-specific transformations as well and adjust the input to alleviate the poor invariance of the following classifiers. Although the Spatial Transformer and our restorer are both learnable modules, the training of the former depends not only on data but also on the subsequent layers, while the latter is independent of the subsequent classifiers.
## 3 Equivariant neural networks
Though objects in nature have continuous properties, once captured and converted to digital signals, these properties are represented by real tensors. In this section, we study the equivariance of operators on a tensor space.
### Equivariance in tensor space
Assume that a map \(\tilde{x}:\mathbb{R}^{d}\to\mathbb{D}\) stands for a property of some \(d\)-dimensional object where \(\mathbb{D}\subseteq\mathbb{R}\). Sampling \(\tilde{x}\) over a \((n_{1},n_{2},\cdots,n_{d})\)-grid results in a tensor \(x\) in a tensor space
\[\mathcal{H}\coloneqq\mathbb{D}^{n_{1}}\otimes\mathbb{D}^{n_{2}}\otimes\cdots \otimes\mathbb{D}^{n_{d}}. \tag{1}\]
We denote \([n]=[0,1,\ldots,n-1]\) for \(n\in\mathbb{Z}_{+}\) and assume \(k\operatorname{\text{mod}}n\in[n]\) for \(k\in\mathbb{Z}\). For an index \(I=(i_{1},i_{2},\cdots,i_{d})\in\prod_{i=1}^{d}[n_{i}]\) and \(x\in\mathcal{H}\), denote \(x[I]\) to be the element of \(x\) with subscript \((i_{1},i_{2},\cdots,i_{d})\). For convenience, we extend the index of \(\mathcal{H}\) to \(I=(i_{1},i_{2},\cdots,i_{d})\in\mathbb{Z}^{d}\) by defining
\[x[I]=x[i_{1}\operatorname{\text{mod}}n_{1},\cdots,i_{d}\operatorname{\text{ mod}}n_{d}].\]
**Definition 3.1** (Translation).: _A translation \(T^{M}:\mathcal{H}\to\mathcal{H}\) with \(M\in\mathbb{Z}^{d}\) is an invertible linear operator such that for all \(I\in\mathbb{Z}^{d}\) and \(x\in\mathcal{H}\),_
\[T^{M}(x)[I]=x[I-M].\]
_The inverse of \(T^{M}\) is clearly \(T^{-M}\)._
**Definition 3.2** (Equivariance).: _A map \(w:\mathcal{H}\to\mathcal{H}\) is called equivariant with respect to translations if for all \(x\in\mathcal{H}\) and \(M\in\mathbb{Z}^{d}\),_
\[T^{M}(w(x))=w(T^{M}(x)).\]
**Definition 3.3** (Vectorization).: _A tensor \(x\) can be vectorized to \(X\in\overrightarrow{\mathcal{H}}=\mathbb{D}^{N}\) with \(N=n_{1}n_{2}\cdots n_{d}\) such that_
\[X(\delta(I))\coloneqq x[I],\]
_where \(\delta(I)\coloneqq(i_{1}\operatorname{\text{mod}}n_{1})n_{2}n_{3}\cdots n_{d }+(i_{2}\operatorname{\text{mod}}n_{2})n_{3}n_{4}\cdots n_{d}+\cdots+(i_{d} \operatorname{\text{mod}}n_{d})\), and we denote \(X=\overrightarrow{x}\). Moreover, the translation \(T^{M}\) is vectorized as \(T^{M}(X)\coloneqq\overrightarrow{T^{M}(x)}\)._
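To make the indexing concrete, the following NumPy sketch implements the translation of Definition 3.1 and the row-major vectorization of Definition 3.3 for a 2D tensor (\(d=2\)); the array sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.integers(0, 256, size=(4, 5))      # a tensor with n1 = 4, n2 = 5

# Definition 3.1: T^M(x)[I] = x[I - M], with indices taken modulo the
# grid sizes. np.roll performs exactly this circular translation.
def translate(x, M):
    return np.roll(x, shift=M, axis=tuple(range(x.ndim)))

# Definition 3.3: row-major flattening matches delta(I) = i1 * n2 + i2.
X = x.reshape(-1)

M = (1, 3)
assert translate(x, M)[0, 0] == x[-1 % 4, -3 % 5]            # = x[3, 2]
assert translate(x, M).reshape(-1)[0] == X[(-1 % 4) * 5 + (-3 % 5)]
```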
### Equivariant operators
When \(\mathbb{D}=\mathbb{R}\), the tensor space \(\mathcal{H}\) is a Hilbert space by defining the inner product as \(x\cdot z\coloneqq\overrightarrow{x}\cdot\overrightarrow{z}\) which is the inner product in vector space \(\overrightarrow{\mathcal{H}}\). In the rest of this section, we assume \(\mathbb{D}=\mathbb{R}\).
According to the Riesz representation theorem, there is a bijection between the continuous linear operator space and the tensor space. That is, a continuous linear operator \(v:\mathcal{H}\to\mathbb{R}\) can be viewed as a tensor \(v\in\mathcal{H}\) satisfying \(v(x)=v\cdot x\). Now we can translate \(v\) by \(T^{M}\) and obtain \(T^{M}(v):\mathcal{H}\to\mathbb{R}\) such that \(T^{M}(v)(x)=T^{M}(v)\cdot x\).
We consider a continuous linear operator \(w:\mathcal{H}\to\mathcal{H}\). For \(I\in\mathbb{Z}^{d}\) and \(x\in\mathcal{H}\), denote \(w_{I}(x)=w(x)[I]\). Then \(w_{I}:\mathcal{H}\to\mathbb{R}\) is a continuous linear operator. An _affine operator_\(\alpha:\mathcal{H}\to\mathcal{H}\) differs from a continuous linear operator \(w\) by a _bias tensor_\(c\) such that \(\alpha(x)=w(x)+c\) for all \(x\in\mathcal{H}\).
**Theorem 3.4**.: _Let \(\alpha(x)=w(x)+c:\mathcal{H}\to\mathcal{H}\) be an affine operator. Then, \(\alpha\) is equivariant with respect to translations if and only if for all \(M\in\mathbb{Z}^{d}\),_
\[w_{M}=T^{M}(w_{\mathbf{0}})\text{ and }c\propto\mathbf{1},\]
_where \(\mathbf{0}\) is the zero vector in \(\mathbb{Z}^{d}\) and \(c\propto\mathbf{1}\) means that \(c\) is a constant tensor, that is, all of its entries are the same._
Proof of Theorem 3.4 is given in Appendix A. Recall that \(\overrightarrow{\mathcal{H}}=\mathbb{R}^{N}\) is the vectorization of \(\mathcal{H}\) and \(T^{M}\) also translates vectors in \(\overrightarrow{\mathcal{H}}\). Each continuous linear operator on \(\mathcal{H}\) corresponds to a matrix in \(\mathbb{R}^{N\times N}\) and each bias operator corresponds to a vector in \(\mathbb{R}^{N}\). Now we consider the translation equivariance in vector space.
**Definition 3.5** (Circular filter).: _Let \(W=(W_{0},W_{1},\cdots,W_{N-1})^{T}\) be a matrix in \(\mathbb{R}^{N\times N}\). \(W\) is called a circular filter if \(W_{\delta(M)}=T^{M}(W_{0})\) for all \(M\in\mathbb{Z}^{d}\)._
As the vector version of Theorem 3.4, we have
**Corollary 3.6**.: _Let \(A:\mathbb{R}^{N}\to\mathbb{R}^{N}\) be an affine transformation such that_
\[A(X)=W\cdot X+C,\]
_in which \(W\in\mathbb{R}^{N\times N}\), \(C\in\mathbb{R}^{N}\). Then, \(A\) is equivariant with respect to translations in the sense that for all \(M\in\mathbb{Z}^{d}\)_
\[A(T^{M}(X))=T^{M}(A(X))\]
_if and only if \(W\) is a circular filter and \(C\propto\mathbf{1}\)._
This affine transformation is very similar to the commonly used convolutional layers [12, 5] in terms of shared parameters and the convolution-like operation. But strict equivariance calls for identical input and output sizes and for circular convolutions, conditions that are usually violated by CNNs.
### Equivariant neural networks
To compose a strictly translation-equivariant network, the spatial sizes of the input and output in each layer must be the same, and thus down-sampling is not allowed. Though Corollary 3.6 provides the fundamental component of a strictly translation-equivariant network, different compositions of this component lead to various equivariant networks. Here we give the _canonical architecture_. We construct the strictly translation-equivariant network \(F\) with \(L\) layers as
\[F(X)=F_{L}\circ F_{L-1}\circ\cdots\circ F_{1}(X). \tag{2}\]
The \(l\)-th layer \(F_{l}\) has \(n_{l}\) channels, and for an input \(X\in\mathbb{R}^{n_{l-1}\times N}\) we have
\[F_{l}(X)=\sigma(W[l]\cdot X+C[l])\in\mathbb{R}^{n_{l}\times N}, \tag{3}\]
where
\[W[l] =(W^{1}[l],\cdots,W^{n_{l}}[l])\in\mathbb{R}^{n_{l}\times n_{l-1} \times N\times N},\] \[C[l] =(C^{1}[l]\cdot\mathbf{1},\cdots,C^{n_{l}}[l]\cdot\mathbf{1}),\] \[W^{k}[l] =(W^{k,1}[l],\cdots,W^{k,n_{l-1}}[l])\in\mathbb{R}^{n_{l-1}\times N \times N},\] \[C^{k}[l] =(C^{k,1}[l],\cdots,C^{k,n_{l-1}}[l])\in\mathbb{R}^{n_{l-1}},\]
\(\sigma\) is the activation, \(W^{k,r}[l]\in\mathbb{R}^{N\times N}\) are circular filters, \(C^{k,r}[l]\in\mathbb{R}\) are constant biases for \(k=1,\cdots,n_{l}\) and \(r=1,\cdots,n_{l-1}\), the \(\cdot\) denotes the inner product and \(\mathbf{1}\) is the vector whose all components are 1.
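A minimal PyTorch sketch of one such layer for \(d=2\): a stride-1 convolution with circular padding realizes a circular filter, and `Conv2d`'s bias is one scalar per output channel, i.e. spatially constant, so the block is strictly translation-equivariant. The kernel size and channel counts are illustrative assumptions, not prescribed by the theory.

```python
import torch
import torch.nn as nn

class EquivariantBlock(nn.Module):
    """One layer F_l of Equations (2)-(3): a circular filter, a spatially
    constant per-channel bias, and a ReLU. Stride 1 with circular padding
    keeps the spatial size fixed, as strict equivariance requires."""
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, k, stride=1,
                              padding=k // 2, padding_mode='circular')
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(self.conv(x))

# Equivariance check: shifting the input shifts the output identically.
f = EquivariantBlock(1, 4)
x = torch.randn(1, 1, 32, 32)
shift = dict(shifts=(5, -7), dims=(2, 3))
assert torch.allclose(f(torch.roll(x, **shift)),
                      torch.roll(f(x), **shift), atol=1e-5)
```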
## 4 Translation restorer
### Method
In Section 3.3, we propose a strictly equivariant neural network architecture (2) such that any translation on the input will be reflected on the output. Generally speaking, once the outputs of an equivariant network on a dataset have some spatial structure, this structure shifts consistently as the input shifts. Thus, the translation parameter of a shifted input can be solved from its output. Finally, we can restore the input via the inverse translation. Figure 2 shows how a restorer works.
The whole restoration process splits into two stages, translation estimation and inverse translation. We first define the translation estimator which outputs a consistent and special structure on a dataset.
**Definition 4.1**.: _Let \(\mathcal{D}\subset\mathbb{D}^{P\times N}\) be a dataset with \(P\) channels. Then a translation-equivariant network_
\[F:\mathbb{R}^{P\times N}\rightarrow\mathbb{R}^{N}\]
_is said to be a translation estimator for \(\mathcal{D}\) if_
\[F(X)[0]=\text{max}_{i=0}^{N-1}F(X)[i],\]
_where \(F(X)[i]\) is the \(i\)-th component of \(F(X)\)._
Figure 2: The pre-classifier translation restorer. For a shifted input \(T^{M}(x)\), the translation estimator obtains the translation \(M\) and restores the original data \(T^{-M}(T^{M}(x))=x\), which is fed into a pre-trained classifier.
Given such a translation estimator for dataset \(\mathcal{D}\) and a shifted input \(X^{\prime}=T^{M}(X)\) for some \(X\in\mathcal{D}\), we propagate \(X^{\prime}\) through \(F\) and get the output \(F(X^{\prime})\in\mathbb{R}^{N}\). Since the first component of \(F(X)\) is the largest, the location of the largest component of \(F(X^{\prime})\) is exactly the translation parameter:
\[\delta(M)=\text{argmax}_{i=0}^{N-1}F(X^{\prime})[i].\]
Then we can restore \(X=T^{-M}(X^{\prime})\) by inverse translation. The restored inputs can be fed to any classifier trained on the dataset \(\mathcal{D}\).
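The two-stage restoration can be sketched as follows; `estimator` stands for a trained translation estimator \(F\), the grid is \(n_{1}\times n_{2}\), and the helper names are illustrative rather than the authors' implementation.

```python
import torch

def restore(x, estimator, n1, n2):
    """Stage 1: read the shift delta(M) off the argmax of F(x).
    Stage 2: apply the inverse translation T^{-M}."""
    out = estimator(x)                  # shape (N,) with N = n1 * n2
    idx = int(torch.argmax(out))        # = delta(M) by strict equivariance
    m1, m2 = idx // n2, idx % n2        # invert the row-major delta()
    return torch.roll(x, shifts=(-m1, -m2), dims=(-2, -1))
```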
### Existence of the restorer
In this section, we show the existence of restorers, that is, of translation estimators. Note that our restorer is independent of the subsequent classifier but dependent on the dataset. For translation, if a dataset contains both an image and a translated version of it, the estimator is necessarily confused. We introduce aperiodic datasets to rule out such cases.
**Definition 4.2** (Aperiodic dataset).: _Let \(\mathcal{D}\subset\mathbb{D}^{P\times N}\) be a finite dataset with \(P\) channels. We call \(\mathcal{D}\) an aperiodic dataset if \(\mathbf{0}\notin\mathcal{D}\) and_
\[T^{M}(X)=X^{\prime}\iff M=\mathbf{0}\text{ and }X=X^{\prime},\]
_for \(M\in\mathbb{Z}^{d+1}\) and \(X,X^{\prime}\in\mathcal{D}\). Here \(d\) is the spatial dimension and \(M\) decides the translation in the channel dimension in addition._
Let \(\mathcal{D}\) be an aperiodic dataset. Given that \(\mathbb{D}=[2^{Q+1}]\) which is the case in image classification, we prove the existence of the translation estimator for such an aperiodic dataset. The proof consists of two steps. The data are first mapped to their binary decompositions through a translation-equivariant network as Equation (2) and then the existence of the translation-restorer in the form of Equation (2) is proved for binary data.
Let \(\mathbb{D}=[2^{Q+1}]\) and \(\mathbb{B}=\{0,1\}\). We denote \(\eta:\mathbb{D}\rightarrow\mathbb{B}^{Q}\) to be the binary decomposition, such as \(\eta(2)=(0,1,0)\) and \(\eta(3)=(1,1,0)\). We perform the binary decomposition on \(X\in\mathbb{D}^{P\times N}\) element-wise and obtain \(\eta(X)\in\mathbb{B}^{G\times N}\), where \(G=PQ\) is the number of channels in binary representation. A dataset \(\mathcal{D}\subseteq\mathbb{D}^{P\times N}\) can be decomposed into \(\mathcal{B}\subset\mathbb{B}^{G\times N}\). Note that the dataset \(\mathcal{D}\) is aperiodic if and only if its binary decomposition \(\mathcal{B}\) is aperiodic.
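A small sketch of \(\eta\), assuming the least-significant-bit-first ordering implied by \(\eta(2)=(0,1,0)\):

```python
import numpy as np

def eta(x, Q=3):
    """Element-wise binary decomposition; applied channel-wise it maps
    D^{P x N} into B^{G x N} with G = P * Q. Bit order is an assumption."""
    return np.stack([(x >> q) & 1 for q in range(Q)], axis=0)

assert tuple(eta(np.array(2))) == (0, 1, 0)
assert tuple(eta(np.array(3))) == (1, 1, 0)
```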
The following Lemma 4.3 demonstrates the existence of a translation-equivariant network which coincides with the binary decomposition \(\eta\) on \([2^{Q+1}]^{P\times N}\). Proof details are placed in Appendix B.
**Lemma 4.3**.: _Let \(\eta:[2^{Q+1}]\rightarrow\mathbb{B}^{Q}\) be the binary decomposition. There exists a \((2Q+2)\)-layer network \(F\) in the form of Equation (2) with ReLU activations and width at most \((Q+1)N\) such that for \(X\in[2^{Q+1}]^{P\times N}\)_
\[F(X)=\eta(X).\]
The following Lemma 4.4 demonstrates the existence of a 2-layer translation restorer for an aperiodic binary dataset. Proof details are placed in Appendix C.
**Lemma 4.4**.: _Let \(\mathcal{B}=\{Z_{s}|s=1,2,\cdots,S\}\subset\mathbb{B}^{G\times N}\) be an aperiodic binary dataset. Then there exists a 2-layer network \(F\) in the form of Equation (2) with ReLU activations and width at most \(SN\) such that for all \(s=1,2,\cdots,S\),_
\[F(Z_{s})[0]=\text{max}_{i=0}^{N-1}F(Z_{s})[i].\]
Given a \((2Q+2)\)-layer network \(F^{\prime}\) obtained from Lemma 4.3 and a 2-layer network \(F^{\prime\prime}\) obtained from Lemma 4.4, we stack them and have \(F=F^{\prime\prime}\circ F^{\prime}\) which is exactly a translation restorer. We thus have proved the following theorem.
**Theorem 4.5**.: _Let \(\mathcal{D}=\{X_{s}|s=1,2,\cdots,S\}\subset[2^{Q+1}]^{P\times N}\) be an aperiodic dataset. Then there exists a network \(F:\mathbb{R}^{P\times N}\rightarrow\mathbb{R}^{N}\) in the form of Equation (2) with ReLU activations such that for \(s=1,2,\cdots,S\),_
\[F(X_{s})[0]=\text{max}_{i=0}^{N-1}F(X_{s})[i],\]
_of which the depth is at most \(2Q+4\) and the width is at most \(\max(SN,(Q+1)N)\). Namely, this network is a translation restorer for \(\mathcal{D}\)._
## 5 Experiments
The core component of the restorer is the translation estimator which outputs the translation parameter of the shifted inputs.
We use the architecture described in Equation (2) with \(L=6\), \(n_{l}=1\) for \(l=1,\cdots,L\) and ReLU activations. The training procedure aims at maximizing the first component of the outputs. Thus the max component of the output indicates the input shift. The experimental settings are given in Appendix D. We report four sets of experiments below.
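The paper does not spell out the loss; one plausible objective consistent with this description treats the \(N\) output components as logits and drives index 0 to be the argmax. This is an assumption, not the authors' exact loss.

```python
import torch
import torch.nn.functional as F

def estimator_loss(outputs):
    """outputs: (batch, N) network outputs on unshifted training data.
    Cross-entropy toward class 0 pushes the first component to be the
    largest, which is what Definition 4.1 requires."""
    targets = torch.zeros(outputs.size(0), dtype=torch.long)
    return F.cross_entropy(outputs, targets)
```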
Figure 3: The restorers for MNIST and CIFAR-10.
### Translation restoration
We first focus on the performance of translation restoration. Experiments are conducted on MNIST, CIFAR-10, and 3D-MNIST.
2D Images. We train translation restorers for MNIST and CIFAR-10. MNIST images are resized to \(32\times 32\), and CIFAR-10 images are padded with 4 blank pixels at the edges.
In Figure 3, the left column is the original images, the middle column is the randomly shifted images, and the right column is the restored images. On both datasets, images are randomly shifted vertically and horizontally by at most \(\frac{1}{4}\) of their size. The shift is a circular shift, where pixels shifted out of the figure appear on the other end. We can see that the shifted images are disorganized, but the restored images closely resemble the original images.
To evaluate the restoration performance of pretrained restorers, we train classifiers and test them on randomly shifted images and on restored ones; the results are given in Table 1. When images are not shifted, the restorers lead to only \(0.3\%\) and \(0.03\%\) accuracy reduction on the two datasets. Nevertheless, even if the translation scope is only 1, the restorers improve the accuracy. Moreover, no matter how the images are shifted, the restorer repairs them to the same status and results in the same classification accuracy, namely \(98.59\%\) and \(88.18\%\), while the accuracies drop significantly without the restorer; the larger the range of translation, the more obvious the restoration effect.
Different Architectures. Our proposed restorer is an independent module that can be placed before any classifier. It is scalable to different architectures used by the subsequent classifier.
In Table 2, we evaluate the restoration performance on popular architectures including VGG-16, ResNet-18, DenseNet-121, and MobileNet v2. Translated images (w/Trans.) are randomly shifted within scope 4 in both vertical and horizontal directions. The reduction of accuracy on original images is no more than \(0.04\%\) and the restorer improves the accuracy on shifted images by \(1.66\%\sim 6.02\%\).
\begin{table}
\begin{tabular}{c c c c c c c c c c c} \hline \hline
 & Res.\textbackslash Trans. & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\ \hline
\multirow{3}{*}{MNIST} & w/o & 98.89 & 98.21 & 95.41 & 87.07 & 76.61 & 62.9 & 51.33 & 41.1 & 35.7 \\
 & w/ & 98.59 & 98.59 & 98.59 & 98.59 & 98.59 & 98.59 & 98.59 & 98.59 & 98.59 \\
 & Effect & -0.3 & +0.38 & +3.18 & +11.52 & +21.98 & +35.69 & +47.26 & +57.49 & +62.89 \\ \hline
\multirow{3}{*}{CIFAR-10} & w/o & 88.21 & 86.58 & 85.9 & 83.65 & 82.16 & 80.46 & 79.37 & 77.71 & 76.01 \\
 & w/ & 88.18 & 88.18 & 88.18 & 88.18 & 88.18 & 88.18 & 88.18 & 88.18 & 88.18 \\
 & Effect & -0.03 & +1.6 & +2.28 & +4.53 & +6.02 & +7.72 & +8.81 & +10.47 & +12.17 \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Restoration performance on MNIST and CIFAR-10. Images are randomly shifted within the translation scope ranging from 0 to 8 in both vertical and horizontal directions. We use LeNet-5 on MNIST and ResNet-18 on CIFAR-10. "Res." and "Trans." stand for restorer and translation, respectively.
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline
 & \multicolumn{2}{c}{VGG-16} & \multicolumn{2}{c}{ResNet-18} & \multicolumn{2}{c}{DenseNet-121} & \multicolumn{2}{c}{MobileNet v2} \\
Res.\textbackslash Trans. & w/o & w/ & w/o & w/ & w/o & w/ & w/o & w/ \\ \hline
w/o & 89.27 & 83.40 & 88.21 & 82.16 & 92.14 & 90.46 & 88.10 & 83.36 \\
w/ & 89.23 & 89.23 & 88.18 & 88.18 & 92.12 & 92.12 & 88.09 & 88.09 \\
Effect & -0.04 & +5.83 & -0.03 & +6.02 & -0.02 & +1.66 & -0.01 & +4.73 \\ \hline \hline
\end{tabular}
\end{table}
Table 2: Restoration performance with different architectures on CIFAR-10.
Translation Augmentation. Training with translation augmentation is another approach to improving the translational invariance of a model. However, translation augmentation is limited to a certain scope and thus cannot ensure effectiveness for test images shifted beyond that scope.
In Figure 4, we compare the restoration performance on models not trained with translation augmentation (dashed lines) and models trained with translation augmentation (solid lines). The augmentation scope is \(10\%\) of the image size, that is, 3 pixels for MNIST and 4 pixels for CIFAR-10. Translation augmentation indeed improves the translational invariance of the classifier on images shifted within the augmentation scope. However, when the shift is beyond the augmentation scope, the accuracy begins to degrade. In such a case, the pre-classifier restorer is still able to calibrate the shift and improve the accuracy of the classifier trained with augmentation.
3D Voxelization Images. 3D-MNIST contains 3D point clouds generated from images of MNIST. Voxelization of the point clouds yields grayscale 3D tensors.
Figure 5 visualizes the restoration on 3D-MNIST. In the middle of each subfigure, the 3-dimensional digit is shifted in a fixed direction. This fixed direction is detected by the translation estimator and the restored digit is shown on the right.
Figure 4: Restoration performance on classifiers trained with or without translation augmentations. The models are LeNet-5 for MNIST and VGG-16 for CIFAR-10. ”res.” and ”aug” stand for restorer and augmentation, respectively.
Figure 5: Restoration on 3D-MNIST. In each sub-figure, the left is the original digit, the middle is the shifted digit, and the right is the restored digit.
### Rotation restoration
Rotation can be regarded as a kind of translation. The Euclidean space \(\mathbb{R}^{d+1}\) can be characterized by polar coordinates
\[(\phi_{1},\cdots,\phi_{d-1},\phi_{d},r)\in[0,\pi]\times\cdots\times[0,\pi] \times[0,2\pi)\times\mathbb{R}^{+}.\]
We can sample a property map, defined in Section 3.1, \(\tilde{x}:\mathbb{R}^{d+1}\rightarrow\mathbb{R}\) over a \((n_{1},n_{2},\cdots,n_{d+1})\)-grid along the polar axes and obtain a tensor \(x\) such that for given \(R>0\) and \(0<a<1\)
\[x[I]=\tilde{x}(\frac{\pi i_{1}}{n_{1}-1},\cdots,\frac{\pi i_{d-1}}{n_{d-1}-1}, \frac{2\pi i_{d}}{n_{d}},Ra^{i_{d+1}}),\]
where \(I=(i_{1},i_{2},\cdots,i_{d+1})\in\prod_{i=1}^{d+1}[n_{i}]\). The last index \(Ra^{i_{d+1}}\) can be replaced with \(\frac{i_{d+1}R}{n_{d+1}}\). Note that the vectorization \(X\) is in \(\mathbb{D}^{n_{d+1}\times N}\) with \(N=n_{1}n_{2}\cdots n_{d}.\) Since we only care about the rotation, i.e. the translation \(T^{M}\) on the first \(d\) components of \(I\), the space \(\mathbb{D}^{n_{d+1}\times N}\) is viewed as an \(n_{d+1}\)-channel vector space. Thus, we can treat rotation as translation and leverage the method discussed above to restore rotated inputs.
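A sketch of this Cartesian-to-polar resampling for 2D images, using the linear radial spacing \(iR/n\) mentioned above; the grid sizes and interpolation order are assumptions.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def to_polar(img, n_phi=360, n_r=64):
    """Resample a 2D image onto a (r, phi) grid centred on the image, so
    that rotating the image becomes a circular shift along the phi axis.
    Radii act as channels and angles as the translated dimension."""
    H, W = img.shape
    cy, cx = (H - 1) / 2.0, (W - 1) / 2.0
    R = min(cy, cx)
    phi = np.linspace(0.0, 2 * np.pi, n_phi, endpoint=False)
    r = np.linspace(0.0, R, n_r, endpoint=False)
    rr, pp = np.meshgrid(r, phi, indexing='ij')       # (n_r, n_phi)
    ys, xs = cy + rr * np.sin(pp), cx + rr * np.cos(pp)
    return map_coordinates(img, [ys, xs], order=1)    # (n_r, n_phi)
```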
Visualization. We experiment with rotation restoration on MNIST. Since the conversion from Cartesian to polar coordinates requires high resolution, we first resize the images to \(224\times 224\) pixels. Other settings are similar to those of the aforementioned experiments.
Figure 6 visualizes the rotation restoration. We can tell from it that most rotated images are restored correctly, though some of them are not restored to the original images. The rotated digit 9 in the top row is more like an erect digit 6 than the original one and the restorer just leaves it alone. The reason why rotation restoration is not as perfect as translation restoration is that the dataset is not aperiodic with respect to rotations. On the one hand, some digits seem like the rotated version of other digits, such as 6 and 9. On the other hand, even in a certain digit class, images vary in digit poses. For example, a rotated digit 0 is similar to an erect one.
Note that the group-equivariant CNNs in [2, 6] can only deal with rotations by certain angles such as \(90^{\circ}\), \(180^{\circ}\), and \(270^{\circ}\). In contrast, our approach can handle a much wider range of rotation angles.
Figure 6: The rotation restoration on the first 24 images in the test set of MNIST. The left column of each subfigure is the original images and the right is the restored images. In the middle column of each subfigure, the images are rotated \(40^{\circ},90^{\circ}\) and \(150^{\circ}\) respectively.
## 6 Conclusion
This paper contributes to equivariant neural networks in two aspects. Theoretically, we give the sufficient and necessary conditions for an affine operator \(Wx+b\) to be translation-equivariant, that is, \(Wx+b\) is translation-equivariant on a tensor space if and only if \(W\) has the high-dimensional convolution structure and \(b\) is a constant tensor. It is well known that if \(W\) has the convolution structure, then \(Wx\) is equivariant to translations [5, 9], and this is one of the basic principles behind the design of CNNs. Our work gives new insights into the convolutional structure used in CNNs in that the convolution structure is also the necessary condition, and hence the most general structure, for translation equivariance. Practically, we propose the translation restorer to recover the original images from translated or rotated ones. The restorer can be combined with any classifier to alleviate the performance reduction problem for translated or rotated images. As a limitation, training a restorer on a large dataset such as ImageNet is still computationally difficult.
|
2310.12260 | Measuring Thermal Profiles in High Explosives using Neural Networks | We present a new method for calculating the temperature profile in high
explosive (HE) material using a Convolutional Neural Network (CNN). To
train/test the CNN, we have developed a hybrid experiment/simulation method for
collecting acoustic and temperature data. We experimentally heat cylindrical
containers of HE material until detonation/deflagration, where we continuously
measure the acoustic bursts through the HE using multiple acoustic transducers
lined around the exterior container circumference. However, measuring the
temperature profile in the HE in experiment would require inserting a high
number of thermal probes, which would disrupt the heating process. Thus, we use
two thermal probes, one at the HE center and one at the wall. We then use
finite element simulation of the heating process to calculate the temperature
distribution, and correct the simulated temperatures based on the experimental
center and wall temperatures. We calculate temperature errors on the order of
15°C, which is approximately 12% of the range of temperatures in the
experiment. We also investigate how the algorithm accuracy is affected by the
number of acoustic receivers used to collect each measurement and the
resolution of the temperature prediction. This work provides a means of
assessing the safety status of HE material, which cannot be achieved using
existing temperature measurement methods. Additionally, it has implications for
a range of other applications where internal temperature profile measurements
would provide critical information. These applications include detecting
chemical reactions, observing thermodynamic processes like combustion,
monitoring metal or plastic casting, determining the energy density in thermal
storage capsules, and identifying abnormal battery operation. | John Greenhall, David K. Zerkle, Eric S. Davis, Robert Broilo, Cristian Pantea | 2023-10-18T18:49:21Z | http://arxiv.org/abs/2310.12260v1 | # Measuring Thermal Profiles in High Explosives using Neural Networks
###### Abstract
We present a new method for calculating the temperature profile in high explosive (HE) material using a Convolutional Neural Network (CNN). To train/test the CNN, we have developed a hybrid experiment/simulation method for collecting acoustic and temperature data. We experimentally heat cylindrical containers of HE material until detonation/deflagration, where we continuously measure the acoustic bursts through the HE using multiple acoustic transducers lined around the exterior container circumference. However, measuring the temperature profile in the HE in experiment would require inserting a high number of thermal probes, which would disrupt the heating process. Thus, we use two thermal probes, one at the HE center and one at the wall. We then use finite element simulation of the heating process to calculate the temperature distribution, and correct the simulated temperatures based on the experimental center and wall temperatures. We calculate temperature errors on the order of 15\({}^{\circ}\)C, which is approximately 12% of the range of temperatures in the experiment. We also investigate how the algorithm accuracy is affected by the number of acoustic receivers used to collect each measurement and the resolution of the temperature prediction. This work provides a means of assessing the safety status of HE material, which cannot be achieved using existing temperature measurement methods. Additionally, it has implications for range of other applications where internal temperature profile measurements would provide critical information. These applications include detecting chemical reactions, observing thermodynamic processes like combustion, monitoring metal or plastic casting, determining the energy density in thermal storage capsules, and identifying abnormal battery operation.
## 1 Introduction
Noninvasive measurement of internal temperature distribution is critical to a range of applications, including detecting chemical reactions, observing thermodynamic processes like combustion, monitoring metal or plastic casting, determining the energy density in thermal storage capsules, identifying abnormal battery operation, and assessing the safety status of high explosives (HE). Currently, there are no good noninvasive techniques for measuring temperature distribution in a sealed container.
Classical thermometry techniques are limited to measuring the outside temperature of the container or require puncturing the container, which can interfere with the process being monitored and pose a safety hazard. Additionally, these techniques are typically limited in the number of internal locations where temperature can be measured, and the embedded instruments can interfere with the process being monitored. Alternatively, acoustic techniques have been developed to enable measuring temperature distributions at an arbitrary number of internal points without interfering with the physical process.[1, 2, 3, 4, 5] These techniques are based on acoustic Time-of-Flight (ToF) measurements using an array of acoustic transducers. One at a time, each transducer transmits an acoustic burst, which then propagates through the material to the receivers. The time required for the acoustic bursts to travel between each transmitter/receiver pair is dependent on the sound speed throughout the material, which is dependent on the temperature distribution. The temperature is measured using either a 2-step or 3-step process. In the 2-step process, the sound speed distribution is calculated directly from the measured waveforms using techniques such as full-waveform inversion[6, 7, 8] or a convolutional encoder-decoder network,[9] and then
temperature is determined from sound speed using an empirical model for the given material. In the 3-step process, the ToF is measured from the waveforms, the sound speed distribution is determined using reverse-time migration [10, 11, 12]. and then the temperature is calculated from an empirical model. However, demonstration of the existing acoustic methods is limited to measuring temperature in single-phase (gas, liquid, or solid) materials, and the techniques require the transducers to be in direct contact with the material.
In contrast, many applications require temperature distribution measurements in other materials, and they require a noninvasive measurement, i.e. transducers must measure through the container walls. In this case, some of the acoustic burst energy propagates through the internal material as a bulk wave, while the remaining energy travels through the container walls as guided waves. As a result, the guided waves interfere with the bulk waves, which inhibits implementing waveform inversion or reverse-time migration. When measuring highly attenuating materials, lower acoustic frequencies are required, which increases the burst durations and further increases the overlap between different arrivals. In previous work, bulk wave arrivals were isolated by using cross-correlation with broad-band chirps [13] or using a Convolutional Neural Network (CNN) [14]. However, to measure sound speed, and, thus temperature, these techniques still require the use of reverse-time migration, which can be highly sensitive to the initial sound speed estimate and to error in the estimated arrival time.
To overcome the limitations of existing temperature measurement techniques, we present a novel technique based on time-domain acoustic measurements processed via CNN. In contrast with traditional temperature sensors and existing acoustic methods, our technique enables measuring the temperature profile through a material, it is noninvasive, and it works for challenging, highly attenuating materials such as HE. To train and test the technique, we utilize a novel mixture of experimental and simulated data to provide acoustic and temperature profile measurements, respectively. We conduct experiments/simulations of a cylindrical container filled with HE (pentolite 50/50) as it is heated to the point of detonation or deflagration to provide a variety of thermal profiles. This technique enables measuring real-time temperature profiles in a material noninvasively, which is not possible using existing techniques. In addition to monitoring the safety status of HE, this technique could be invaluable for a wide range of other applications including, assessing capacity in thermal storage systems, quantifying performance in molten salt reactors, measuring chemical kinetics, and monitoring material composition in various industrial processes, to name a few.
## 2 Methods
### Experimental acoustic and thermocouple measurements
The goal of this work is to use a CNN to measure the temperature profile within the HE based on the acoustic bursts transmitted between an acoustic transmitter (Tx) and one or more receivers (Rx). To enable CNN training and testing, we acquire hybrid experimental/simulation data. Figure 1 shows the experimental data collection process. A cylindrical container (Al-6061) with a 144 mm inner diameter (\(2R\)), 6.4 mm wall thickness, and 200 mm height is equipped with 16 piezoelectric transducers (STEMINC SMD07T05R411), evenly spaced around the container circumference, and two thermocouples are inserted into the HE at the wall (\(r=R\)) and center (\(r=0\)) at approximately the same height as the acoustic transducers (Figure 1(a)). Over the course of an experiment, heaters placed at the bottom of the container gradually heat the HE until it detonates or deflagrates. During each experiment, we collect a set of acoustic (Figure 1(b)-(c)) and thermocouple measurements (Figure 1(d)) at 2 min intervals. Due to the high acoustic attenuation within the HE, we must select a relatively low excitation frequency [14]. We utilize a Gaussian burst with 10 V\({}_{\text{PP}}\) amplitude, 350 kHz center frequency, and 150 kHz standard deviation. Figure 1(b) shows a cross-section of the acoustic waves propagating through the HE and container from one transmitter (Tx) to the remaining 15 receivers (Rx). Figure 1(c) shows an example acoustic measurement, which consists of 15 waveforms, transmitted from one Tx and received by the remaining Rx, with lines indicating the theoretical arrival
times for the first bulk (red) and guided waves (green). At each time step in the experiment, we repeat this acoustic measurement, using each of the Tx, one at a time.
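For concreteness, the excitation can be sketched as a Gaussian-windowed tone burst; the sampling rate, the burst length, and the reading of the 150 kHz standard deviation as a spectral width (so that \(\sigma_{t}=1/(2\pi\sigma_{f})\)) are assumptions.

```python
import numpy as np

fs = 20e6                        # assumed sampling rate, Hz
fc, sigma_f = 350e3, 150e3       # center frequency and spectral std
sigma_t = 1.0 / (2 * np.pi * sigma_f)
t = np.arange(-4 * sigma_t, 4 * sigma_t, 1 / fs)
x_ex = np.exp(-t**2 / (2 * sigma_t**2)) * np.sin(2 * np.pi * fc * t)
```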
### Simulated temperature profiles in HE
The thermocouple data provides temperature information at two locations \(r=0\) and \(R\) (Figure 1(d)), but measuring the temperature profile with a useful amount of radial resolution would require a significant number of thermocouples that would interfere with the HE heating process. Thus, to acquire temperature profiles, we employ axisymmetric numerical simulations in COMSOL, based on existing HE modeling methodologies which account for the heat transfer, phase change, species transfer, and natural convection within the HE [15, 16]. Figure 2 shows the numerical simulation setup for the axisymmetric HE container. We utilize the built-in heat transfer module to simulate the HE and container temperatures as they are heated from approximately 20 \({}^{\circ}\)C by the heater, which is represented by ramping up and then holding the temperature at the heater boundary to 180 \({}^{\circ}\)C.
Figure 1: Experimental data collection. (a) A container filled with HE is heated from below. (b) A cross-section shows the acoustic bulk and guided waves transmitted/received between 16 acoustic transducers. (c) Example acoustic measurement from one Tx to 15 Rx. (d) Two thermocouples measure HE temperature at the wall and center of the container over the course of the experiment.
Figure 2: Simulated HE heating. (a) An axisymmetric finite element model used to compute the temperature distribution at the transducer cross-section as a function of radial position \(r\). (b) Selected example radial temperature profiles at various experiment times. (c) Colorplot showing the temperature as a function of radial position \(r\) and time over the course of the experiment.
Pentolite 50/50 consists of TNT (50%), which has a melting temperature of approximately 80 \({}^{\circ}\)C, and PETN (50%), which melts at approximately 140 \({}^{\circ}\)C. Thus, when the temperature is between 80 \({}^{\circ}\)C and 140 \({}^{\circ}\)C, the TNT melts, and the embedded PETN particles begin to sink. When the temperature exceeds 140 \({}^{\circ}\)C, the PETN also melts, and the two species can diffuse into one another. This results in gradients in the material concentrations, which we represent using the species transport and laminar flow mixture model modules in COMSOL.
After completing the simulation, we select a line from \(r=0\) to \(r=R\) at the same height at which the acoustic transducers are mounted. Figure 2(b) shows several radial temperature profiles at various steps throughout the experiment. Figure 2(c) shows a colorplot of the temperature as a function of radial position and time throughout the experiment.
### Machine learning with hybrid measurements
Prior to performing ML, we preprocess the experimental and simulated measurements as illustrated in Figure 3. Each acoustic measurement consists of an \(N_{t}\)\(\times\)\(N_{Rx}\) array of waveforms, where \(N_{Rx}\) is the number of waveforms used and each waveform is a time series of length \(N_{t}\). We investigate the effect of the number \(N_{Rx}\) of Rx measurements used, where we select \(N_{Rx}\) to be an odd number of Rx opposing Tx. We then reduce the noise amplitude and emphasize acoustic signals that are similar to the excitation \(x_{ex}\) by cross-correlating the raw waveforms \(X_{w}\) with the excitation signal to get \(X_{cc}=X_{w}*x_{ex}\), where \(*\) is the cross-correlation operator.[17] We then reduce the number of peaks and the range of feature scales within the experimental acoustic measurements by computing the envelopes \(X_{e}\)
\[X_{e}=|H(X_{cc})(t)|, \tag{1}\]
where \(H(\cdot)(t)\) denotes the Hilbert transform. The input signal \(X\) to the CNN model is then created by normalizing the envelopes based on the standard deviation for each Rx
\[X=\frac{\sqrt{N_{t}}(X_{e}-\bar{X}_{e})}{\sqrt{\sum(X_{e}-\bar{X}_{e})^{2}}}, \tag{2}\]
where the bar \(\bar{X}_{e}\) indicates the mean value of \(X_{e}\) over time for a single Rx value.
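A sketch of this preprocessing chain (Equations (1)-(2)) with SciPy, assuming the waveforms are stored as an \(N_{t}\times N_{Rx}\) array:

```python
import numpy as np
from scipy.signal import correlate, hilbert

def preprocess(Xw, x_ex):
    """Cross-correlate each raw waveform with the excitation, take the
    Hilbert envelope (Eq. 1), and normalize each receiver channel to zero
    mean and unit standard deviation over time (Eq. 2)."""
    Xcc = np.stack([correlate(Xw[:, j], x_ex, mode='same')
                    for j in range(Xw.shape[1])], axis=1)
    Xe = np.abs(hilbert(Xcc, axis=0))
    return (Xe - Xe.mean(axis=0)) / Xe.std(axis=0)
```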
Figure 3: Preprocessing and machine learning steps for hybrid measurements from experiment and simulation.
To preprocess the temperature profiles, we need to correct for error between the temperatures from experiment and simulation. These are typically due to differences in HE material properties due to the casting process, inconsistent input power, non-axisymmetric components, defects, or physics in the experiment, or electrical noise in the heaters or thermocouples. Figure 3(right) shows an example of experimental (solid) and simulated (dashed) temperature profiles at the wall (blue) and HE center (orange). To account for differences, we correct the simulated temperature profiles based on the experimental thermocouple temperatures at the boundaries. We first calculate parameters to shift and scale the initial uncorrected simulation temperatures \(T\)(\(r\),\(s\)) at position \(r\) and experiment step \(s\) to match the experimental temperatures at the container boundaries (\(r=0\), \(R\)). To simplify the formulae, we adopt a subscript \(r\) notation, which indicates a term that is a variable of \(r\) is being evaluated at a boundary, e.g. \(T_{r}(s)\) where \(r=0\) or \(R\) indicates the center or wall boundaries, respectively. The corrected simulated temperatures \(T_{r}(s)\) at the boundaries can be calculated as
\[T_{r}(s)=a_{r}\cdot\{T_{r}^{\prime}(c\cdot[s-d])-b_{r}\}. \tag{3}\]
Here, the temperature scale \(a_{r}\) and shift \(b_{r}\) and the time scale \(c\) and shift \(d\) are correction coefficients, and the temperature coefficients are linearly interpolated between the boundary values at \(r=0\) and \(R\). We group the scale and shift coefficients at the boundaries into a single set \(\theta=\{a_{0},b_{0},a_{R},b_{R},c,d\}\), and the optimal \(\theta^{*}\) is computed by minimizing the mean-squared error over experiment steps \(s\) between the corrected simulated temperatures and the experimental temperatures at the boundaries,
\[\theta^{*}=\underset{\theta}{\operatorname{argmin}}\sum_{r=0,R}\left\|T_{r}- \widehat{T}_{r}\right\|^{2}. \tag{4}\]
Here, \(\widehat{T}_{r}\) denotes the experimental boundary temperatures. Finally, we will investigate the effect of the temperature resolution, i.e. the number \(N_{pts}\) of radial points at which the CNN estimates the temperature. To train the CNN using different values of \(N_{pts}\), we can simply interpolate the radial temperature profiles from COMSOL at \(N_{pts}\) locations in \(0\leq r\leq R\).
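A sketch of fitting the correction parameters \(\theta\) of Equations (3)-(4) by generic nonlinear least squares; the data layout and the choice of optimizer are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def fit_correction(T_sim, T_exp, s):
    """T_sim, T_exp: dicts keyed by boundary (0 and 'R') holding
    temperature-vs-step arrays; s: experiment steps (increasing)."""
    def corrected(theta, key, i):
        a, b, c, d = theta[i], theta[i + 1], theta[4], theta[5]
        # Eq. (3): T_r(s) = a * (T'_r(c * (s - d)) - b)
        return a * (np.interp(c * (s - d), s, T_sim[key]) - b)

    def loss(theta):                      # Eq. (4), summed over r = 0, R
        return (np.sum((corrected(theta, 0, 0) - T_exp[0]) ** 2)
                + np.sum((corrected(theta, 'R', 2) - T_exp['R']) ** 2))

    theta0 = np.array([1.0, 0.0, 1.0, 0.0, 1.0, 0.0])
    return minimize(loss, theta0, method='Nelder-Mead').x
```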
As a result of the hybrid experimental/simulated data process, we obtain input data \(X\) and output temperature profile data \(T\) that can be used to train and test the CNN. Figure 3(bottom) shows the CNN architecture, which consists of a series of CNN blocks, each comprising a 2D convolution (Conv) layer, a rectified linear unit (ReLU) activation layer, and a pooling (MaxPool) layer. At each Conv block, the input data (here, acoustic time-series data from multiple receivers) is convolved with a series of convolutional filters (see reference [18] for detailed formulae). The intent is for the filters to identify patterns within each acoustic signal and between neighboring signals and to transform the input signals so as to accentuate useful signal features and suppress signal noise. Next, the convolved data is passed to the ReLU layer, which introduces nonlinearities into the model that increase learning speed and performance [19, 20]. The data then passes through a MaxPool layer, which effectively returns a summary of the input data, reduced in size so as to decrease the CNN model complexity and the model sensitivity to slight shifts in the input data. We implement three CNN blocks, where the Conv layers consist of \(8\cdot 2^{(l-1)}\) filters with dimension 16\(\times\)2 for each layer \(l=1\), 2, and 3. By utilizing multiple CNN blocks in series, it is possible to reduce complex acoustic signals to one or more extracted features that represent the critical information conveyed by the acoustic signal. After the final CNN block, we flatten the signal to a 1D array, apply a Dropout layer to ensure that the network does not rely too heavily on any one neuron during training, and then use a dense Output layer that applies a linear transformation between the flattened features and the estimated temperatures. This produces an \(N_{pts}\times\)1 output \(T\) representing the temperature at each of the radial points.
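A PyTorch sketch of the described network; the pooling sizes, padding, and dropout rate are assumptions not stated in the text.

```python
import torch
import torch.nn as nn

class TempCNN(nn.Module):
    """Three Conv-ReLU-MaxPool blocks with 8 * 2**(l-1) filters of size
    16x2, then Flatten, Dropout, and a dense layer mapping to N_pts
    radial temperatures. Input shape: (batch, 1, Nt, NRx)."""
    def __init__(self, n_pts):
        super().__init__()
        blocks, ch = [], 1
        for l in (1, 2, 3):
            out = 8 * 2 ** (l - 1)
            blocks += [nn.Conv2d(ch, out, (16, 2), padding=(8, 1)),
                       nn.ReLU(),
                       nn.MaxPool2d((2, 1))]       # pool over time only
            ch = out
        self.features = nn.Sequential(*blocks)
        self.head = nn.Sequential(nn.Flatten(), nn.Dropout(0.5),
                                  nn.LazyLinear(n_pts))

    def forward(self, x):
        return self.head(self.features(x))
```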
Because of small differences in the transducer geometry, material properties, positions, and adhesive layer dimensions and material properties, there are variations in the transfer functions between pairs of transducers. Our goal is to develop a temperature measurement method that is robust to these differences, as well as to differences in the container and HE. Thus, we divide the data into sets, where each set consists of the measurements from a single transmitter to all receivers for all measurements in a single heating experiment. As a result, the combination of Tx/Rx transmission functions is unique between data sets. We then group the data sets randomly into 10 folds to test using k-fold cross-validation, wherein we train the model on all but one fold and test on the excluded fold, for each combination of training/testing folds.
## 3 Results
We perform the cross-validation procedure, training on all but one fold and estimating the temperature distributions on the remaining fold, for each combination of folds. Figure 4(a) shows some example radial temperature profiles from a single test set, i.e. the acoustic measurements from a single Tx over the course of a single heating experiment. Here, we have selected the results using \(N_{Rx}=3\) opposing receivers and a spatial resolution of \(N_{pts}=25\) radial points between \(r=0\)-\(R\). We plot the radial temperature distributions at several experiment steps \(s\) as the HE was heated from approximately 20 \({}^{\circ}\)C (\(s_{0}\), dark blue) until detonation/deflagration (\(s_{max}\), red), where the dashed and solid lines indicate the temperature profiles estimated by the CNN and the "true" temperature profiles simulated in COMSOL. We observe small errors between the estimated and true temperature profiles, which are likely due to differences between the axisymmetric simulated temperatures and the experimental temperatures, as well as differences in the transducer-wall coupling between Tx-Rx data sets used in training versus testing. Despite the small errors, we observe that the CNN is able to closely estimate the temperature trends, i.e. where the temperature slope is steep/flat. This is an important finding because it indicates where the solid-liquid HE transition occurs, which provides critical information about the status of the HE.
Figure 4: CNN testing results. (a) Example estimated (solid) and true (dashed) temperature profiles at several times throughout one experiment. (b) Mean temperature error vs the number of radial points \(N_{pts}\) at which temperature was estimated, for different numbers of receivers \(N_{Rx}\).
In addition to a qualitative comparison, we evaluated the effect of the number of radial points \(N_{pts}\) and the number of receivers \(N_{Rx}\) on the Root-Mean-Squared Error (RMSE) between the true temperatures and those estimated by the CNN. We tested combinations of (\(N_{pts}\), \(N_{Rx}\)) values in the ranges \(5\leq N_{pts}\leq 50\) in steps of \(5\) and \(1\leq N_{Rx}\leq 9\) for odd numbers \(N_{Rx}\) of transducers opposing Tx. For each (\(N_{pts}\), \(N_{Rx}\)) combination, we retrain the model 10 times, once for each combination of training/testing folds, resulting in 10 RMSE values per (\(N_{pts}\), \(N_{Rx}\)) combination. Figure 4(b) shows the mean (lines) and standard deviation (shaded) of the RMSE as a function of \(N_{pts}\) for several values of \(N_{Rx}\). We observe that the RMSE decreases from \(17^{\circ}\)C to \(15^{\circ}\)C on average when the number \(N_{Rx}\) of Rx increases from one to three. The decrease in RMSE is likely due to the fact that using measurements from additional Rx increases the amount of pertinent information provided to the CNN. The RMSE further decreases to \(14^{\circ}\)C upon increasing to \(N_{Rx}=5\) for \(N_{pts}\leq 10\), but other numbers of points result in an increase in RMSE relative to the models using the same \(N_{pts}\) and \(N_{Rx}=1\) or \(3\). Subsequent increases to \(N_{Rx}=7\) and \(9\) were found to increase the RMSE on average to values of \(24^{\circ}\)C and \(39^{\circ}\)C, respectively. Here, increasing \(N_{Rx}\) increases the available information at the cost of increasing the number of trainable CNN parameters. CNN training uses a gradient-based adaptive momentum (Adam) optimization algorithm. In general, the training process is a non-convex optimization problem, which means that the Adam solver will find a locally optimal combination of CNN parameters, but it may not find the combination that is globally optimal. Increasing the number of CNN parameters increases the dimensionality of the optimization problem, which decreases the likelihood that the globally optimal set of parameters will be found. Thus, by increasing \(N_{Rx}\) we balance the benefit of introducing additional information about the system with the increased dimensionality. For \(N_{Rx}<5\), the additional information is more beneficial than the increase in dimensionality, while for \(N_{Rx}>5\), the increase in dimensionality is more detrimental. Additionally, for \(N_{Rx}>5\), the additional information comes from transducers that are farther from the opposing Rx. As a result, there is more interference between guided and bulk waves, and the amplitude of the first guided wave is relatively high, as shown in Figure 1(c).
We note that the training and testing data was all measured or simulated on containers with nominally identical geometry and filled with nominally identical HE. The time required for the bursts to propagate between a Tx/Rx pair depends on the sound speed, which is temperature dependent, and the distance between the Tx/Rx. Thus, it is unlikely that the CNN, as presented in this manuscript, would be successful at estimating the temperature profile in a container with a significantly different shape or size. It may be possible to account for the size of the container by stretching/contracting the measured waveforms, but this is left for future work. Additionally, the dependence of the sound speed on temperature will differ between HE materials, which would introduce errors in the exact temperature values. However, most HE materials follow similar sound speed-temperature trends, i.e. decreasing sound speed with increasing temperature. Thus, it is likely that the CNN could measure the temperature profile trend, which could help identify if there was a solid-liquid transition (orange and yellow lines in Figure 4(a)), single phase with a temperature gradient (light blue line in Figure 4(a)), or constant temperature (red and dark blue lines in Figure 4(a)). Again, confirming and quantifying the performance of the CNN with different HE materials in training/testing is left for future work.
## 4 Conclusion
We present a novel technique for measuring internal temperature profiles by combining time-domain acoustic measurements and CNN processing. In contrast with existing temperature measurement methods, our technique measures the interior temperature profile noninvasively, instead of requiring transducers to be placed inside or penetrate the container or being limited to measuring exterior surface temperatures. The technique is demonstrated on HE-filled containers as they are heated externally from ambient temperature until detonation/deflagration. Here, we introduce a hybrid measurement process, where we collect acoustic
measurements experimentally and measure the temperature profiles via finite element simulation. We then introduce a CNN that estimates the temperature at a specified number of points within the container based on the acoustic signals from one or more acoustic receivers. We observe that the CNN accurately estimates the temperatures and captures the temperature trends, which can provide critical information about phase, thermal gradient, etc. in the HE. Additionally, we find that increasing the number of receivers used to measure the acoustic burst has competing effects of providing additional information about the temperature profile at the cost of increasing the model complexity. We observe the lowest RMSE of 15\({}^{\circ}\)C between the true and estimated temperatures by using three opposing receivers. In this study, training and testing data consisted of experiments and simulations on containers with nominally identical dimensions and materials. In the future, we will extend the data set to a range of dimensions and HE materials. We anticipate that this will require normalizing data to account for changes in shape. Additionally, it will increase the error in the absolute temperatures measured between different HE materials but will likely still provide crucial information such as whether or not there is a liquid-solid HE interface (is the HE partially melted?).
Thus, this work presents the first demonstration of using acoustics to measure internal thermal profiles in high-attenuation materials, through the material container. This technique has implications in a variety of applications including assessing the safety status of HE materials, monitoring metal or plastic casting, determining the energy density in thermal storage capsules, and identifying abnormal battery operation, to name a few.
This work was supported by the U.S. Department of Energy through the Los Alamos National Laboratory. Los Alamos National Laboratory is operated by Triad National Security, LLC, for the National Nuclear Security Administration of U.S. Department of Energy (Contract No. 89233218CNA000001).
|
2308.08072 | Decentralized Graph Neural Network for Privacy-Preserving Recommendation | Building a graph neural network (GNN)-based recommender system without
violating user privacy proves challenging. Existing methods can be divided into
federated GNNs and decentralized GNNs. But both methods have undesirable
effects, i.e., low communication efficiency and privacy leakage. This paper
proposes DGREC, a novel decentralized GNN for privacy-preserving
recommendations, where users can choose to publicize their interactions. It
includes three stages, i.e., graph construction, local gradient calculation,
and global gradient passing. The first stage builds a local inner-item
hypergraph for each user and a global inter-user graph. The second stage models
user preference and calculates gradients on each local device. The third stage
designs a local differential privacy mechanism named secure gradient-sharing,
which proves strong privacy-preserving of users' private data. We conduct
extensive experiments on three public datasets to validate the consistent
superiority of our framework. | Xiaolin Zheng, Zhongyu Wang, Chaochao Chen, Jiashu Qian, Yao Yang | 2023-08-15T23:56:44Z | http://arxiv.org/abs/2308.08072v1 | # Decentralized Graph Neural Network for Privacy-Preserving Recommendation
###### Abstract.
Building a graph neural network (GNN)-based recommender system without violating user privacy proves challenging. Existing methods can be divided into federated GNNs and decentralized GNNs. But both methods have undesirable effects, i.e., low communication efficiency and privacy leakage. This paper proposes DGREC, a novel decentralized GNN for privacy-preserving recommendations, where users can choose to publicize their interactions. It includes three stages, i.e., graph construction, local gradient calculation, and global gradient passing. The first stage builds a local inner-item hypergraph for each user and a global inter-user graph. The second stage models user preference and calculates gradients on each local device. The third stage designs a local differential privacy mechanism named secure gradient-sharing, which proves strong privacy-preserving of users' private data. We conduct extensive experiments on three public datasets to validate the consistent superiority of our framework.
Recommender system, decentralized graph neural network, privacy protection +
Footnote †: journal: Information systems – Personalization; Security and privacy \(\rightarrow\) Web application security
+
Footnote †: journal: Information systems – Personalization; Security and privacy \(\rightarrow\) Web application security
media, users like to share their moments to gather friends with similar preferences. **CH2:**_protecting user privacy cost-effectively._ Existing privacy-preserving methods, e.g., secret sharing (Kang et al., 2017) and homomorphic encryption (Krishnan et al., 2017), equip heavy mathematical schemes, causing high communication costs. **CH3:**_mitigating the trade-off between effectiveness and efficiency._ In a decentralized GNN, involving one user for training cannot provide sufficient data to train an effective model, whereas involving too many users slows down the training process.
This paper proposes DGREC, a decentralized GNN for privacy-preserving recommendations, to solve the above challenges. To overcome **CH1**, we propose allowing users to publicize their interactions freely. DGREC mainly has three stages, i.e., **Stage1:** graph construction, **Stage2:** local gradient calculation, and **Stage3:** global gradient passing. In **Stage1**, all users constitute an _inter-user graph_ collaboratively, and each user constructs an _inner-item hypergraph_ individually. For the inter-user graph, edges come from users' friendship or proximity. For the inner-item hypergraph, nodes are the items that the user interacts with or publicizes, and edges come from item tags. In **Stage2**, we model user preference based on the inner-item hypergraph to protect user privacy and enhance preference representation. Specifically, we condense the item hypergraph into an interest graph by mapping items into different interests. Given that the distilled interests can be noisy, we propose an interest attention mechanism that leverages the GNN architecture to suppress noisy interests. Finally, we pool the interests to model user preference and calculate local gradients. In **Stage3**, each user samples multi-hop neighbors in the inter-user graph and trains the models collaboratively to overcome **CH3**. To overcome **CH2**, we propose a mechanism based on Local Differential Privacy (LDP) named secure gradient-sharing, which provides strong privacy protection for users' private data. Specifically, the gradients calculated in Stage 2 are first encoded, then propagated among the neighborhood, and finally decoded and restored in a noise-free way.
We summarize the main contributions of this paper as follows:
* We propose a novel decentralized GNN for privacy-preserving recommendations. To the best of our knowledge, this is the first decentralized GNN-based recommender system.
* We propose secure gradient-sharing, a novel privacy-preserving mechanism to publish model gradients, and theoretically prove that it is noise-free and satisfies Renyi differential privacy.
* We conduct extensive experiments on three public datasets, and consistent superiority validates the success of the proposed framework.
## 2. Related Work
In this section, we discuss prior work on GNN and privacy protection, as we propose a decentralized GNN for privacy-preserving recommendations.
### GNN
Recent studies have extensively investigated the mechanisms and applications of GNN. For mechanisms, we investigate graph convolutional networks and graph pooling. For applications, we investigate centralized GNNs, federated GNNs, and decentralized GNNs.
**Graph convolutional networks.** Most graph convolutional networks focus on pairwise relationships between the nodes in a graph (Krizhevsky et al., 2014; Krizhevsky et al., 2015). However, they neglect the higher-order connectivity patterns beyond the pairwise relationships (Krizhevsky et al., 2014; Goyal et al., 2015). (Krizhevsky et al., 2015) first introduces hypergraph learning as a propagation process to minimize label differences among nodes. HGNN (Krizhevsky et al., 2015) proposes a hypergraph neural network by truncated Chebyshev polynomials of the hypergraph Laplacian. HyperGNN (Krizhevsky et al., 2015) further enhances the capacity of representation learning by leveraging an optional attention module. Our paper leverages HyperGNN without an attention module to aggregate node features and learn node assignment as we model a user's preferences based on his item hypergraph.
**Graph Pooling.** Graph pooling is widely adopted to calculate the entire representation of a graph. Conventional approaches sum or average node embeddings (Krizhevsky et al., 2014; Goyal et al., 2015; Goyal et al., 2015). However, these methods cannot learn the hierarchical representation of a graph. To address this issue, DiffPool (Krizhevsky et al., 2015) first proposes an end-to-end pooling operator, which learns a differentiable soft cluster assignment for nodes at each layer. MinCutPool (Krizhevsky et al., 2015) further formulates a continuous relaxation of the normalized Min-cut problem to compute the cluster assignment. Motivated by DiffPool, our paper designs a different constraint on cluster assignment: one node can be mapped into multiple clusters, but each cluster should be independent.
**Centralized GNNs.** Centralized GNNs have been vastly applied in recommendation scenarios. Leveraging a graph structure can enhance representations for users and items (Krizhevsky et al., 2016) and capture complex user preferences behind their interactions (Krizhevsky et al., 2015). PinSage (Krizhevsky et al., 2015) and LightGCN (Krizhevsky et al., 2015) refine user and item representations via multi-hop neighbors' information. To deal with noisy interactions and node degree bias, SGL (Wang et al., 2016) explores self-supervised learning on a user-item graph and reinforces node representation via self-discrimination. Although these methods achieve superior performance, they cannot be applied in privacy-preserving recommendation scenarios as most user interactions are private. In contrast, our paper designs a decentralized GNN for privacy-preserving recommendations.
**Federated GNNs.** Federated GNNs involve a central server to orchestrate the training process and leverage an LDP mechanism (Krizhevsky et al., 2015) to protect user privacy. LPGNN (Wang et al., 2016) and GWGNN (Krizhevsky et al., 2015) require clients to publish local features to the server, where feature aggregation is executed. However, these two methods violate the privacy-preserving requirements in recommendation scenarios: local interactions should not be published. FedGNN (Krizhevsky et al., 2015) assumes a trustworthy third party to share user-item interactions securely and adds Gaussian noise to protect user privacy. However, finding a credible third party in real scenarios is difficult, which limits the application of this method. FeSoG (Wang et al., 2016) shares user features to make recommendations and adds Laplacian noise to preserve users' privacy. However, user features are sensitive data, as malicious participants can leverage them to infer private user interactions (Krizhevsky et al., 2015). Federated GNNs also incur high communication costs at the central server, causing training-speed bottlenecks. In contrast, our method adopts a decentralized training mechanism to improve training efficiency, where each client updates its model asynchronously.
**Decentralized GNNs.** Decentralized GNNs require clients to cooperate in training prediction models. SpreadGNN (Krizhevsky et al., 2015) and DLGNN (Krizhevsky et al., 2015) calculate model gradients with local interactions and
share gradients among clients to update models. D-FedGNN (Wang et al., 2018) introduces Diffie-Hellman key exchange to secure communication and utilizes a decentralized stochastic gradient descent algorithm (Srivastava et al., 2017) to train models. P2PGNN (Wang et al., 2018) makes local predictions and uses PageRank to diffuse predictions. All these methods exchange gradients without a privacy-preserving mechanism. Consequently, malicious neighbors can infer users' private data from received gradients. In contrast, our method proposes an LDP mechanism to secure shared gradients, protecting user privacy.
### Privacy-preserving mechanisms
Preserving users' privacy is a topic with increasing attention, including Differential Privacy (DP) (Beng et al., 2015; Li et al., 2016; Li et al., 2017; Wang et al., 2018; Wang et al., 2018), Homomorphic Encryption (Beng et al., 2015; Li et al., 2016; Li et al., 2017), and Secure Multi-party Computation (Zhu et al., 2018; Wang et al., 2018). Among them, Local Differential Privacy, an implementation of DP, has been embraced by numerous machine-learning scenarios (Wang et al., 2018), where noise is added to individual data before it is centralized in a server. (Li et al., 2017) gives a standard definition of \(\epsilon\)-DP, where \(\epsilon\) is the privacy budget. (Li et al., 2017) further defines \((\epsilon,\delta)\)-DP as a relaxation of \(\epsilon\)-DP, where \(\delta\) is an additive term and \(1-\delta\) represents the probability of guaranteeing privacy. To the best of our knowledge, most existing privacy-preserving GNN-based recommender systems (Wang et al., 2018; Wang et al., 2018) define their privacy protection based on \((\epsilon,\delta)\)-DP. However, \((\epsilon,\delta)\)-DP has an explosive privacy budget for iterative algorithms, and hence it cannot guarantee privacy protection over multiple training steps. To solve this problem, we adopt \((\alpha,\epsilon)\)-RDP (Wang et al., 2018) to obtain tighter bounds on the privacy budget across multiple training steps. Our work proves that the secure gradient-sharing mechanism satisfies \((1.5,\epsilon)\)-RDP, guaranteeing stronger privacy protection in real scenarios.
## 3. Method
As seen in Figure 1, the framework of DGREC consists of three stages, i.e., graph construction, local gradient calculation, and global gradient passing. The first stage constructs an inter-user graph and an inner-item hypergraph. The second stage models user preference and calculates local gradients. The third stage proposes a novel privacy-preserving mechanism named secure gradient-sharing. Finally, users update their local models with the decoded gradients. DGREC achieves competitive recommendation performance and meets diverse demands for privacy protection.
### Preliminary
**Renyi differential privacy (RDP).** A randomized algorithm \(M\) satisfies \((\alpha,\epsilon)\)-Renyi differential privacy (Wang et al., 2018), if for any adjacent vectors \(x\) and \(x^{\prime}\), it holds that \(D_{\alpha}(M(x)\parallel M(x^{\prime}))\leq\epsilon\), with \(\epsilon\) denoting the privacy budget and \(\alpha\) denoting the order of Renyi divergence. Small values of \(\epsilon\) guarantee higher levels of privacy, whereas large values of \(\epsilon\) guarantee lower levels of privacy. We can define RDP for any \(\alpha\geq 1\).
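For reference, the Rényi divergence of order \(\alpha\) used in this definition is the standard one,

\[D_{\alpha}(P\parallel Q)=\frac{1}{\alpha-1}\,\log\,\mathbb{E}_{x\sim Q}\!\left[\left(\frac{P(x)}{Q(x)}\right)^{\alpha}\right],\]

which recovers the Kullback-Leibler divergence in the limit \(\alpha\rightarrow 1\).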
**Problem formulation.** Assume we have a set of users and a set of items, denoted by \(\mathbb{U}\) and \(\mathbb{I}\). Generally, user \(u\) has a set of neighbor users \(N_{u}=\{u_{1},u_{2},\cdots,u_{m}\}\) and a set of interacted items \(\mathbb{I}_{u}=\{i_{1},i_{2},\cdots,i_{n}\}\), with \(m\) and \(n\) denoting the number of neighbor users and interacted items, respectively. User \(u\) can decide whether to publicize his interactions, and we let \(\mathbb{I}_{u}^{pub}\) denote his publicized interactions. The decentralized privacy-preserving recommendation aims to recommend items matching user preferences by leveraging the user's own interactions and his neighbors' publicized interactions.
### Graph construction
In the first stage, we construct the inter-user graph and inner-item hypergraph. The inter-user graph will support neighbor sampling and secure gradient-sharing in the third stage. Each user trains his recommendation model collaboratively with his multi-hop neighbors, who are more likely to share similar preferences. The inner-item hypergraph will support modeling user preference in the second stage.
**Inter-user graph construction.** Recent studies demonstrate that user friendship or proximity reveals behavior similarity between users (Wang et al., 2018; Wang et al., 2018). Driven by this motivation, we construct a global inter-user graph utilizing the communication protocol described in reference (Wang et al., 2018) ("Section B: Communication Protocol"), which handles both static and dynamic situations. Each user can anonymously disseminate messages among multi-hop neighbors by communicating with his 1-hop neighbors. The inter-user graph is defined as \(\mathcal{G}=\{\mathcal{V},\mathcal{E}\}\), where \(\mathcal{V}\) denotes the user set and \(\mathcal{E}\) denotes the edge set. We set \(N_{u}=\{v:(u,v)\in\mathcal{E}\}\) to denote the neighbors of user \(u\).
**Inner-item hypergraph construction.** To make privacy-preserving recommendations, we model user preferences based on limited interactions, as users' features and most interactions are private. By representing user interactions as a graph, it is easier to distinguish his core and temporary interests, as core interests result in frequent and similar interactions. Here, we build an item hypergraph for each user, where a hyperedge connects multiple items (Beng et al., 2015). To stabilize model training, we further leverage tag information to establish item relationships.
The inner-item hypergraph for user \(u\) is defined as \(\mathbb{G}_{u}=\{\mathbb{V}_{u},A_{u}\}\), where \(\mathbb{V}_{u}=\{i\mid i\in\mathbb{I}_{u}\cup\mathbb{I}_{v}^{pub},\,v\in\mathcal{N}_{u}\}\) denotes the item set and \(A_{u}\) is the incidence matrix. Each entry \(A_{u}(i,t)\) indicates whether item \(i\in\mathbb{V}_{u}\) is connected by a hyperedge \(t\in\mathbb{T}\). In our setting, \(\mathbb{T}\) corresponds to a tag set. We set \(A_{u}(i,t)=1\) if item \(i\) has tag \(t\) and \(A_{u}(i,t)=0\) otherwise. We let \(deg(i)\) denote the degree of item \(i\) and \(deg(t)\) denote the degree of hyperedge \(t\), where \(deg(i)=\sum_{t\in\mathbb{T}}A_{u}(i,t)\) and \(deg(t)=\sum_{i\in\mathbb{V}_{u}}A_{u}(i,t)\). We let \(D_{v}\in\mathbb{R}^{|\mathbb{V}_{u}|\times|\mathbb{V}_{u}|}\) and \(D_{t}\in\mathbb{R}^{|\mathbb{T}|\times|\mathbb{T}|}\) denote diagonal matrices of the item and hyperedge degrees, respectively.
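As an illustration, a minimal NumPy sketch of this construction might look as follows; the function name and data layout are hypothetical and not part of DGREC's released code.

```
import numpy as np

def build_inner_item_hypergraph(items, item_tags, tags):
    """Build the incidence matrix A_u and the diagonal degree
    matrices D_v (items) and D_t (hyperedges = tags) for one user."""
    item_idx = {i: k for k, i in enumerate(items)}
    tag_idx = {t: k for k, t in enumerate(tags)}
    A = np.zeros((len(items), len(tags)))
    for i in items:
        for t in item_tags[i]:
            A[item_idx[i], tag_idx[t]] = 1.0     # item i carries tag t
    D_v = np.diag(A.sum(axis=1))                 # deg(i): tags per item
    D_t = np.diag(A.sum(axis=0))                 # deg(t): items per tag
    return A, D_v, D_t

# toy usage: three items connected by two tag hyperedges
A, D_v, D_t = build_inner_item_hypergraph(
    items=[10, 11, 12],
    item_tags={10: ["rock"], 11: ["rock", "jazz"], 12: ["jazz"]},
    tags=["rock", "jazz"],
)
```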
### Local gradient calculation
In the second stage, we model each user's preference for an item by his inner-item hypergraph and calculate gradients based on ground-truth interactions.
**Modelling user preference.** We model user preference through three steps, i.e., graph condensation, interest pooling, and preference prediction.
**(1) Graph condensation.** One user interaction provides weak signals for his preferences. To model user preferences, an intuitive way is to gather these weak signals into strong ones. Motivated
by DiffPool (Wang et al., 2017), we condense his item hypergraph into an interest graph by learning a soft cluster assignment.
We first aggregate features among items and learn an item assignment by hypergraph convolutional networks. We define the aggregation and assignation process as,
\[E_{u}^{\prime}=D_{v}^{-1/2}A_{u}D_{t}^{-1}A_{u}^{\top}D_{v}^{-1/2}E_{u}\Theta_{e},\qquad S_{u}=\mathrm{softmax}\left(D_{v}^{-1/2}A_{u}D_{t}^{-1}A_{u}^{\top}D_{v}^{-1/2}E_{u}\Theta_{s}\right),\]

where \(E_{u}\) denotes the embeddings of the items in \(\mathbb{V}_{u}\), \(E_{u}^{\prime}\) the aggregated item features, \(S_{u}\in\mathbb{R}^{|\mathbb{V}_{u}|\times n_{i}}\) the soft assignment of items to \(n_{i}\) interests, and \(\Theta_{e},\Theta_{s}\) trainable weights. We then condense the item hypergraph into an interest graph by \(Z_{u}=S_{u}^{\top}E_{u}^{\prime}\), where each row of \(Z_{u}\) represents one interest of user \(u\). Unlike DiffPool, we allow one item to be mapped into multiple interests but require the interests to be independent. We therefore penalize the pairwise Pearson correlation between interest embeddings,

\[\mathcal{L}_{u}^{d}=\sum_{p\neq q}\left|\rho(Z_{u,p},Z_{u,q})\right|, \tag{1}\]

where \(\rho\) denotes the Pearson correlation coefficient and \(Z_{u,p}\) the embedding of the \(p\)-th interest.

**(2) Interest pooling.** Given that the distilled interests can be noisy, we leverage an interest attention mechanism over the interest graph to down-weight noisy interests and pool the interest embeddings into the user preference representation \(e_{u}\).

**(3) Preference prediction.** We predict the preference of user \(u\) on item \(i\) as \(\hat{y}_{u,i}=\mathrm{MLP}(e_{u}\parallel e_{i})\), where \(e_{i}\) denotes the embedding of item \(i\) and \(\parallel\) denotes the concatenation function.
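A rough sketch of the condensation step is given below; it follows the standard normalized hypergraph propagation operator and the reconstruction above, and the weight names (`W_e`, `W_s`) are illustrative placeholders.

```
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def condense(A, D_v, D_t, E, W_e, W_s):
    """Aggregate item features and condense them into interest
    embeddings via a soft item-to-interest assignment."""
    # normalized propagation: D_v^-1/2 A D_t^-1 A^T D_v^-1/2
    Dv_is = np.diag(1.0 / np.sqrt(np.maximum(np.diag(D_v), 1e-12)))
    Dt_inv = np.diag(1.0 / np.maximum(np.diag(D_t), 1e-12))
    P = Dv_is @ A @ Dt_inv @ A.T @ Dv_is
    E_agg = P @ E @ W_e                   # aggregated item features
    S = softmax(P @ E @ W_s, axis=1)      # soft assignment over interests
    Z = S.T @ E_agg                       # one row per condensed interest
    return E_agg, S, Z
```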
**Gradient calculation.** We take Bayesian personalized ranking (BPR) loss (Wang et al., 2017) as the loss function to learn model parameters. It assumes that the observed interactions should be assigned higher prediction values than unobserved ones. The BPR loss \(\mathcal{L}_{u}^{P}\) is calculated as,
\[\mathcal{L}_{u}^{P}=\frac{1}{|\mathbb{I}_{u}|}\sum_{i\in\mathbb{I}_{u},\,j\in\mathbb{I}_{u}^{-}}-\ln\sigma(\hat{y}_{u,i}-\hat{y}_{u,j}), \tag{2}\]
where \(\mathbb{I}_{u}^{-}\) denotes the unobserved interactions of user \(u\) and \(\sigma\) denotes the sigmoid function.
Finally, we calculate local loss \(\mathcal{L}_{u}\) for user \(u\) as,
\[\mathcal{L}_{u}=\mathcal{L}_{u}^{P}+\mathcal{L}_{u}^{d}+\lambda||\Theta_{u}|| _{2}, \tag{3}\]
where \(\lambda\) is the weight of L2 regularization and \(\Theta_{u}\) denotes trainable parameters of the recommendation model for user \(u\).
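A minimal sketch of the loss computation in Equations (2) and (3); the inputs (paired positive/negative predictions, the decorrelation term, and the parameter list) are assumed to be produced by the preference model above.

```
import numpy as np

def bpr_loss(y_pos, y_neg):
    """Eq. (2): pairwise BPR loss with one sampled negative per positive."""
    sigma = 1.0 / (1.0 + np.exp(-(y_pos - y_neg)))
    return float(np.mean(-np.log(sigma + 1e-12)))

def local_loss(y_pos, y_neg, l_decor, params, lam=1e-2):
    """Eq. (3): BPR loss + interest decorrelation + L2 regularization."""
    l2 = lam * np.sqrt(sum(np.sum(p ** 2) for p in params))
    return bpr_loss(y_pos, y_neg) + l_decor + l2
```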
### Global gradient passing
In the third stage, we propose neighbor sampling and secure gradient-sharing based on the inter-user graph to enhance model performance and training efficiency in a privacy-preserving way.
**Neighbor sampling.** An intuitive way of decentralized recommendations is that users collaboratively train models with their neighbors. However, two problems remain unresolved, damaging recommendation performance: (1) Some item embeddings are not fully trained or even not trained, as interacted items are sparse in recommendation scenarios. (2) Different users have different convergence speeds. For example, popular users are more likely to be involved in training. Inadequately-trained models can bias fully-trained models by shared gradients. Thus, we design a sampling strategy to upgrade the neighbor-based collaborative training into the neighborhood-based one.
We describe a neighbor sampling strategy for user \(u\in\mathcal{V}\) in Algorithm 1. The probability for user \(u\) to sample his neighbor \(v\in\mathcal{N}_{u}\) is defined as,
\[p_{u,v}=\frac{\mathcal{L}_{v}/(\ln(cnt_{v}+1)+1)}{\sum_{w\in\mathcal{N}_{u}}\mathcal{L}_{w}/(\ln(cnt_{w}+1)+1)}, \tag{4}\]
where \(\mathcal{L}_{v}\) is the training loss for user \(v\) as calculated in Equation (3). We let \(cnt_{v}\) denote the number of training iterations for user \(v\). The sampling probability increases with the local training loss and decreases with the number of training iterations. Thus, underfit models are more likely to be sampled for training.
We employ \(U\) to denote all sampled users, i.e., \(U=\{u\mid|\mathcal{N}_{u}^{s}|>0\}\). Each sampled user \(v\in U\) simultaneously calculates his local gradients \(g_{v}\) based on the loss defined in Equation (3).
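A small sketch of Equation (4) and the resulting draw; the neighbors' reported losses and iteration counts are assumed inputs.

```
import numpy as np

def neighbor_probs(losses, counts):
    """Eq. (4): weight neighbors by training loss, damped by how
    often they have already been trained."""
    w = np.asarray(losses) / (np.log(np.asarray(counts) + 1) + 1)
    return w / w.sum()

rng = np.random.default_rng(0)
p = neighbor_probs(losses=[0.8, 0.2, 0.5], counts=[3, 10, 0])
picked = rng.choice(3, size=2, replace=False, p=p)  # sample 2 neighbors
```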
**Secure gradient-sharing.** Inspired by the one-bit encoder (Han et al., 2017), we propose a novel privacy-preserving mechanism to share gradients in the neighborhood efficiently. Unlike the one-bit encoder, our mechanism shares the calculated gradients multiple times and satisfies RDP. The secure gradient-sharing involves three steps, i.e., gradient encoding, gradient propagation, and gradient decoding.
**(1) Gradient encoding.** We first encode the calculated gradients to protect user privacy and minimize communication costs. In the training neighborhood, each user clips his local gradients into \([-\delta,\delta]\) and samples the encoded gradients from a Bernoulli distribution. Let \(\beta\) denote the perturbation strength used to protect user privacy. We define the gradient encoding for user \(u\) as,
\[g_{u}^{*}\sim 2*\mathrm{Bernoulli}(\frac{1}{e^{\beta}+1}+\frac{(e^{\beta}-1)( \mathrm{clip}(g_{u},\delta)+\delta)}{2(e^{\beta}+1)\delta})-1, \tag{5}\]
where \(g_{u}^{*}\) denotes the encoded gradients of user \(u\), with each element equal to either \(-1\) or \(1\). Large gradients are more likely to be mapped to \(1\), whereas small gradients are more likely to be mapped to \(-1\).
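A minimal sketch of the encoder in Equation (5); the default values \(\delta=0.1\) and \(\beta=1\) match the experimental settings reported later.

```
import numpy as np

def encode(g, delta=0.1, beta=1.0, rng=None):
    """Eq. (5): clip gradients to [-delta, delta] and sample a
    one-bit (+1/-1) code per element from a Bernoulli distribution."""
    rng = rng or np.random.default_rng()
    g = np.clip(g, -delta, delta)
    z = np.exp(beta) + 1
    p = 1.0 / z + (np.exp(beta) - 1) * (g + delta) / (2 * z * delta)
    return 2 * rng.binomial(1, p) - 1
```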
Theorem 1.: _For a gradient with a size of \(n_{s}\), our secure gradient-sharing mechanism satisfies \((1.5,\epsilon)\)-RDP, where \(\epsilon=2n_{s}\log\!\left(\frac{1.5\pi\delta(e^{\beta}+1)}{2(e^{\beta}-1)}+\frac{e^{-0.5\beta}+e^{1.5\beta}}{e^{\beta}+1}\right)\)._
Proof.: Let \(\mathcal{M}(g)\) be the secure gradient-sharing as described in Equation (5). We need to show that for any two input gradients \(g\) and \(g^{\prime}\), we have \(D_{\alpha}(M(g)\parallel M(g^{\prime}))\leq\epsilon\). We set \(z=e^{\beta}+1\) to simplify the description. For two continuous distributions defined over the real interval with densities \(p\) and \(q\), we have
\[p(g)=\mathrm{P}[\mathcal{M}(g)=1]=\frac{\delta z+(z-2)g}{2\delta z},\]
\[q(g)=\mathrm{P}[\mathcal{M}(g)=-1]=\frac{\delta z-(z-2)g}{2\delta z}.\]
Given that we clip gradients \(g\) into \([-\delta,\delta]\), we evaluate \(D_{\alpha}(M(g)\parallel M(g^{\prime}))\) separately over three intervals, i.e., \((-\infty,-\delta)\), \([-\delta,\delta]\), and \((\delta,+\infty)\). Let \(n_{s}\) denote the size of the gradient. We have
\[D_{\alpha}(M(g)\parallel M(g^{\prime}))=\frac{1}{\alpha-1}\sum_{i=1}^{n_{s}}\log\int_{-\infty}^{\infty}p(g_{i})^{\alpha}q(g_{i})^{1-\alpha}\,\mathrm{d}g_{i}.\]
For intervals \((-\infty,-\delta)\) and \((\delta,+\infty)\), we have
\[\int_{-\infty}^{-\delta}p(g)^{\alpha}q(g)^{1-\alpha}\,\mathrm{d}g =\frac{(z-1)^{1-\alpha}}{z},\] \[\int_{\delta}^{\infty}p(g)^{\alpha}q(g)^{1-\alpha}\,\mathrm{d}g =\frac{(z-1)^{\alpha}}{z}.\]
For the interval \([-\delta,\delta]\), we set \(x=(z-2)g\) and \(y=\delta z\) and have
\[\int_{-\delta}^{\delta}p(g)^{\alpha}q(g)^{1-\alpha}\,\mathrm{d}g =\frac{1}{2\delta z(z-2)}\int_{-y+2\delta}^{y-2\delta}(x+y)^{\alpha}(-x+y)^{1-\alpha}\,\mathrm{d}x\] \[\leq\frac{1}{2\delta z(z-2)}\int_{-y}^{y}(x+y)^{\alpha}(-x+y)^{1-\alpha}\,\mathrm{d}x.\]
Given the reverse chain rule, we set \(t=(x+y)/2y\) and have
\[\int_{-\delta}^{\delta}p(g)^{\alpha}q(g)^{1-\alpha}\,\mathrm{d}g =\frac{1}{z-2}\int_{0}^{1}(2yt)^{\alpha}(-2yt+2y)^{1-\alpha}\, \mathrm{d}t\] \[=\frac{2\delta z}{z-2}\int_{0}^{1}(t)^{\alpha}(-t+1)^{1-\alpha}\, \mathrm{d}t.\]
According to the definition of beta and gamma functions, we have
\[\int_{-\delta}^{\delta}p(g)^{\alpha}q(g)^{1-\alpha}\,\mathrm{d}g =\frac{2\delta z}{z-2}\mathcal{B}(\alpha+1,2-\alpha)\] \[=\frac{2\delta z}{z-2}\frac{\Gamma(\alpha+1)\Gamma(2-\alpha)}{ \Gamma(3)}\] \[=\frac{2\delta z\alpha(\alpha-1)}{(z-2)}\frac{\Gamma(\alpha-1) \Gamma(1-(\alpha-1))}{\Gamma(3)}.\]
By the reflection formula for the gamma function, we have
\[\int_{-\delta}^{\delta}p(g)^{\alpha}q(g)^{1-\alpha}\,\mathrm{d}g=\frac{2\delta z\,\alpha(\alpha-1)\pi}{(z-2)\,\Gamma(3)\,\sin((\alpha-1)\pi)}.\]
To calculate \(D_{\alpha}(M(g)\parallel M(g^{\prime}))\), here we set \(\alpha=1.5\) and have
\[D_{\alpha}(M(g)\parallel M(g^{\prime}))=2n_{s}\log\!\left(\frac{1.5\pi\delta(e^{\beta}+1)}{2(e^{\beta}-1)}+\frac{e^{-0.5\beta}+e^{1.5\beta}}{e^{\beta}+1}\right)\leq\epsilon,\]
which concludes the proof of (1.5, \(\epsilon\))-RDP.
Corollary 1.: _Given \(T\) training iterations, the privacy budget of the secure gradient-sharing mechanism is bounded._
Proof.: Given \(T\) training iterations, the secure gradient-sharing satisfies \((1.5,\epsilon T)\)-RDP according to the sequential composition (Hendle and Krizhevsky, 2014). One property of RDP (Krizhevsky, 2014) is that if \(\mathcal{M}(g)\) satisfies \((\alpha,\epsilon)\)-RDP, then for \(\forall\gamma>0\), \(\mathcal{M}(g)\) satisfies \((\epsilon^{\prime},\gamma)\)-DP with \(\epsilon^{\prime}=\epsilon+\frac{\log(1/\gamma)}{\alpha-1}\). Thus, the secure gradient-sharing mechanism satisfies \((\epsilon T+2\log(1/\gamma),\gamma)\)-DP. We can further choose large \(\gamma\) to reduce the privacy budget, which concludes that the privacy budget is bounded.
**(2) Gradient propagation.** We devise a topology-based diffusion method named _gradient propagation_ to share the encoded gradients among the neighborhood. The gossip training protocol has been widely applied in decentralized learning (Hendle and Krizhevsky, 2014; Krizhevsky, 2014), where users exchange gradients with their 1-hop neighbors, given its robustness to device breakdown. However, the encoded gradients are highly inaccurate in our setting, as the number of averaged gradients is insufficient to reduce noise.
To solve the above issue, we extend the gossip training protocol to disseminate gradients among the neighborhood. Algorithm 2 describes the process of gradient propagation. Similar to the message-passing architecture in GNN (Ghezhi et al., 2017; Wang et al., 2018), each user in the neighborhood sends and receives gradients in parallel (line 5 and 6). Given the longest path in the training neighborhood is \(2H\), the encoded gradients are disseminated among all users after \(2H\) steps.
**(3) Gradient decoding.** Conventional DP mechanisms (Zheng et al., 2017; Wang et al., 2018; Wang et al., 2018) protect user privacy at the cost of recommendation performance, as they introduce noise into model training. We propose gradient decoding to decode the received gradients in a noise-free way.
```
Input: encoded gradients \(g_{u}^{*}\) for each user \(u\in U\); number of sampling hops \(H\); sampled neighbors \(\mathcal{N}_{u}^{s}\) for each \(u\in U\)
Output: aggregated gradients \(\widetilde{g}_{u}\) for each user \(u\in U\)
1: for user \(u\in U\) in parallel do
2:     \(\widetilde{g}_{u}\leftarrow\{g_{u}^{*}\}\)
3: for \(h\leftarrow 1\) to \(2H\) do
4:     for user \(u\in U\) in parallel do
5:         send \(\widetilde{g}_{u}\) to each neighbor \(v\in\mathcal{N}_{u}^{s}\)
6:         \(\widetilde{g}_{u}\leftarrow\widetilde{g}_{u}\cup\widetilde{g}_{v}\) for \(v\in\mathcal{N}_{u}^{s}\)
7: return \(\widetilde{g}_{u}\) for each user \(u\in U\)
```
**Algorithm 2** Gradient propagation
After gradient propagation, each user locally decodes the received gradients and updates his model with the decoded gradients using a stochastic gradient descent (SGD) optimizer. Here, we describe the decoding process for user \(u\) as,
\[\tilde{g}_{u}=\delta(e^{\beta}+1)\mathrm{mean}(\widetilde{g}_{u})/(e^{\beta}- 1). \tag{6}\]
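A minimal sketch of the decoder in Equation (6), together with an empirical check that the decoded mean approaches the true gradient; `encode` re-implements Equation (5) so the snippet is self-contained.

```
import numpy as np

def encode(g, delta=0.1, beta=1.0, rng=None):
    """Eq. (5): one-bit Bernoulli encoding of clipped gradients."""
    rng = rng or np.random.default_rng()
    g = np.clip(g, -delta, delta)
    z = np.exp(beta) + 1
    p = 1.0 / z + (np.exp(beta) - 1) * (g + delta) / (2 * z * delta)
    return 2 * rng.binomial(1, p) - 1

def decode(received, delta=0.1, beta=1.0):
    """Eq. (6): rescale the mean of the received +1/-1 codes."""
    m = np.mean(np.stack(received), axis=0)
    return delta * (np.exp(beta) + 1) * m / (np.exp(beta) - 1)

rng = np.random.default_rng(0)
g = np.array([0.05, -0.02, 0.09])
codes = [encode(g, rng=rng) for _ in range(20000)]
print(decode(codes))   # approaches g as the number of codes grows
```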
Proposition 1.: _The secure gradient-sharing mechanism is unbiased._
Proof.: To prove the secure gradient-sharing mechanism is noise-free, we need to verify that \(E(\tilde{g})=E(g)\). Since the gradients are encoded from the Bernoulli distribution in Equation (5), we have
\[E(g^{*})=\frac{e^{\beta}-1}{e^{\beta}+1}\frac{E(g)+\delta}{\delta}-\frac{e^{ \beta}-1}{e^{\beta}+1}=\frac{(e^{\beta}-1)E(g)}{(e^{\beta}+1)\delta}.\]
Combining Equation (6), we then have
\[E(\tilde{g}) =\frac{\delta(e^{\beta}+1)}{e^{\beta}-1}E(g^{*})\] \[=\frac{\delta(e^{\beta}+1)}{e^{\beta}-1}\frac{(e^{\beta}-1)E(g)}{ (e^{\beta}+1)\delta}\] \[=E(g),\]
which concludes that the secure gradient-sharing mechanism is noise-free.
Proposition 2.: _The secure gradient-sharing mechanism can approach the same convergence speed as centralized SGD._
Proposition 2 guarantees the convergence of our proposed method; we refer the reader to (Ghezhi et al., 2017) for a detailed proof.
## 4. Analysis on Communication Cost
Our proposed method has the lowest communication cost among all competitors. Here, we compare each method's communication cost theoretically. Note that each user samples \(n_{u}\) users from his neighbors, the training process involves \(H\)-hop users, and the gradient to be sent has a size of \(n_{s}\). (1) Our method requires clients to cooperate during model training and leverages the gradient encoder to minimize the communication cost. Each user has a worst-case communication cost of \(2Hn_{s}(1-n_{u}^{H})/(1-n_{u})\) bits. (2) Federated methods involve a central server to orchestrate the training process. They incur a communication cost of \(d_{r}n_{s}(1-n_{u}^{H})/(1-n_{u})\) bits, where \(d_{r}\) is the number of bits used to represent a real number, e.g., 64. (3) For decentralized methods, each user likewise has a communication cost of \(d_{r}n_{s}(1-n_{u}^{H})/(1-n_{u})\) bits, since full-precision gradients are exchanged among the neighborhood. In summary, our method consistently beats federated and decentralized methods: the hop number \(H\) is generally a small number, e.g., 3, so that \(2H\ll d_{r}\).
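To make the comparison concrete, the short computation below plugs in the illustrative values mentioned in the text (\(H=3\), \(n_{u}=3\), \(d_{r}=64\)); the resulting counts are per gradient element, i.e., multiples of \(n_{s}\).

```
# Worst-case bits sent per user under the cost model above.
H, n_u, d_r = 3, 3, 64                    # hops, sampled neighbors, bits per float
geo = (1 - n_u ** H) / (1 - n_u)          # 1 + n_u + ... + n_u**(H-1) = 13
ours = 2 * H * geo                        # one-bit codes over 2H propagation steps
baseline = d_r * geo                      # federated / decentralized methods
print(ours, baseline)                     # 78.0 vs. 832.0 bits per gradient element
```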
## 5. Experiment
This section conducts empirical studies on three public datasets to answer the following four research questions.
* **RQ1:** How does DGREC perform, compared with the state-of-the-art methods (Section 5.2)?
* **RQ2:** How do existing methods perform under diverse demands for privacy protection (Section 5.3)?
* **RQ3:** What are the effects of different components in DGREC (Section 5.4)?
* **RQ4:** What are the impacts of different parameters on DGREC (Section 5.5)?
### Experimental settings
**Datasets.** We conduct experiments on three public datasets, i.e., _Flixster_, _Book-crossing_, and _Weeplaces_. These datasets are collected with permissions and preprocessed through data anonymization. _Flixster_ is a movie site where people meet others with similar movie tastes (Wang et al., 2017), and it provides user friendships and item relationships. _Book-crossing_ contains a crawl from a book sharing community (Zhu et al., 2018). We connect users from the same area and build item relationships according to authors and publishers. To ensure the dataset quality, we retain users and items with at least 18 interactions (Wang et al., 2017). _Weeplaces_ is a website that visualizes user check-in activities in location-based social networks (Wang et al., 2017). We derive item relationships from their categories and remove users and items with less than 10 interactions. We show the statistics of these datasets after processing them in Table 1. For each dataset, we randomly select 80% of the historical interactions of each user as the training dataset and treat the rest as the test set. We randomly select 10% of interactions in the training dataset as the validation set to tune hyperparameters.
**Comparison methods.** The comparison methods involve three types of recommendation methods, i.e., centralized methods, federated methods, and decentralized methods.
**(1) Centralized methods.** We implement three centralized recommendation methods, namely **NeuMF** (Wang et al., 2017), **LightGCN** (Chen et al., 2018), and **HyperGNN** (Chen et al., 2018). NeuMF uses an MLP to capture user preferences on items. LightGCN and HyperGNN adopt graph neural networks; the difference between them is that LightGCN leverages pairwise relationships between users and items, whereas HyperGNN utilizes higher-order relationships.
**(2) Federated methods.** We implement two federated recommendation methods, i.e., **FedRec**(Chen et al., 2018) and **FedGNN**(Wang et al., 2019). FedRec is a federated matrix factorization (MF) method. FedGNN models high-order user-item relationships and adds Gaussian noises to protect user privacy.
**(3) Decentralized methods.** We implement two decentralized recommendation methods named **DMF** (Chen et al., 2018) and **DLGNN**. DMF is a decentralized MF method, and DLGNN is a decentralized implementation of LightGCN. In these two methods, each user trains his model collaboratively with his neighbors by sharing gradients.
_To give a fair comparison, we provide side information for all methods_. (1) For MF methods, we leverage side information as regularization on embeddings, encouraging users and items to share similar embeddings with their neighbors (Wang et al., 2019). (2) For GNN methods, we leverage side information to establish user-user and item-item relationships in the graph (Wang et al., 2019).
**Evaluation metrics.** For each user in the test set, we treat all the items that a user has not interacted with as negative. Each method predicts each user's preferences on all items except the positive ones in the training set. To evaluate the effectiveness of each model, we use two widely-adopted metrics, i.e., recall@K and ndcg@K (Wang et al., 2019; Wang et al., 2019). In our experiments, we set \(K=20\).
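For concreteness, a per-user sketch of the two metrics with \(K=20\); `scores` (predictions over all candidate items) and `positives` (held-out test items) are assumed inputs.

```
import numpy as np

def recall_ndcg_at_k(scores, positives, k=20):
    """Compute recall@K and ndcg@K for a single user."""
    top = np.argsort(-scores)[:k]                 # top-K ranked items
    hits = np.isin(top, list(positives)).astype(float)
    recall = hits.sum() / max(len(positives), 1)
    dcg = (hits / np.log2(np.arange(2, k + 2))).sum()
    idcg = (1.0 / np.log2(np.arange(2, min(len(positives), k) + 2))).sum()
    return recall, dcg / max(idcg, 1e-12)
```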
**Parameter settings.** For all methods, we use SGD as the optimizer. We search the learning rate in \([0.001,0.005,0.01]\), the L2 regularization coefficient in \([10^{-3},10^{-2},10^{-1}]\), and the embedding size in \([16,32,48,64]\). For federated methods, we set the gradient clipping threshold \(\delta\) to \(0.1\) and privacy-preserving strength \(\epsilon\) to \(1\). For our method, we set \(\delta\) to \(0.1\) and perturbation strength \(\beta\) to \(1\), which satisfies \((1.5,0.54)\)-RDP. For GNN methods, we search the number of layers in \([1,2,3]\). For NeuMF, we search the dropout in \([0.0,0.1,0.2,0.3,0.4,0.5]\). For our method, we search the interest number \(n_{i}\) in \([6,12,18]\) and interest dimension \(d_{i}\) in \([5,10,15,30]\), respectively. Besides, we set sampling hops \(H=4\) and sampling numbers \(n_{u}=3\).
We optimize all hyperparameters carefully through grid search for all methods to give a fair comparison. We repeat each experiment five times with different random seeds and report the average results. The consumed resources vary with the methods and hyperparameters. We simulate a decentralized environment on a single device, establishing an individual recommendation model for each user on that device.
### Overall performance comparisons (RQ1)
We compare each method in conventional recommendation scenarios where all user interactions are public. We show the comparison results in Table 2. From it, we observe that: (1) Centralized methods achieve superior performance on all datasets as they can leverage all user interactions to make recommendations. For protecting user privacy, federated and decentralized methods achieve competitive performance, demonstrating the feasibility of making accurate recommendations without violating user privacy. (2) GNN methods generally outperform MF methods. FedGNN, DLGNN, and DGREC outperform the centralized MF method, i.e., NeuMF, on all datasets. This result demonstrates the superiority of GNN in recommendations, as it can vastly enhance user and item representations. (3) DGREC achieves the best recommendation performance among all privacy-preserving methods. It performs close to LightGCN and
| Dataset | Flixster | Book-crossing | Weeplaces |
| --- | --- | --- | --- |
| # Users | 3,060 | 3,060 | 8,720 |
| # Items | 3,000 | 5,240 | 7,407 |
| # Interactions | 48,369 | 222,287 | 546,781 |
| Sparsity | 0.9946 | 0.9860 | 0.9915 |

Table 1. Statistics of datasets.
HyperGNN, which are two centralized GNNs. These two results demonstrate that DGREC can protect user privacy cost-effectively.
### Generalization research (RQ2)
We evaluate how different methods meet diverse demands for privacy protection. Specifically, we set different publicized interaction ratios for each user, i.e., the proportion of publicized interactions to total interactions. (1) Centralized and decentralized methods can only utilize publicized interactions to build graphs and train models. (2) Federated methods can only utilize publicized interactions to build the graph, but all interactions to train models.
We show the comparison results in Figure 2. From it, we observe that: (1) All centralized and decentralized methods suffer performance degradation as the publicized interaction ratio decreases. In contrast, DGREC and the federated methods maintain stable recommendation performance. (2) When the publicized interaction ratio is set below 1, DGREC achieves consistent superiority on all datasets, demonstrating its ability to meet diverse demands for privacy protection.
### Ablation study (RQ3)
We verify the effectiveness of different components in DGREC on _Book-Crossing_ and _Weeplaces_. (1) **w/non-item graph**, **w/non-neighbor**, and **w/non-item hypergraph** are three variants of constructing the item hypergraph. w/non-item graph averages embeddings of interacted items to model user preferences. w/non-neighbor constructs the item hypergraph without neighbors' publicized interactions. w/non-item hypergraph constructs an item-tag bipartite graph for each user. (2) **w/non-attention** and **w/non-pearson** are two variants of modeling user preference. w/non-attention replaces the interest attention with attention-pooling (Wang et al., 2018). w/non-pearson removes the Pearson loss defined in Equation (1). (3) **w/non-sharing** is the variant of secure gradient-sharing, which adds zero-mean Laplacian noise to protect user privacy in place of gradient encoding and decoding.
We show the comparison results of the ablation study in Table 3. From it, we observe that: (1) All proposed components are indispensable in DGREC. Constructing an item graph and utilizing neighbors' publicized interactions are two dominant reasons for performance improvement. The former contributes to distinguishing a user's core and temporary interests, whereas the latter leverages user behavior similarity. (2) Constructing an item-tag bipartite graph cannot achieve competitive performance, especially for the datasets with the most user interactions, i.e., _Weeplaces_. Adding tag nodes hinders modeling user preference as tags are not related to the main task. (3) Utilizing interest attention and Pearson loss increases model performance, as these two methods can
| Method | Flixster recall | Flixster ndcg | Book-Crossing recall | Book-Crossing ndcg | Weeplaces recall | Weeplaces ndcg |
| --- | --- | --- | --- | --- | --- | --- |
| NeuMF | 3.28 ± 0.24 | 2.68 ± 0.20 | 6.70 ± 0.28 | 3.76 ± 0.23 | 15.85 ± 0.26 | 10.15 ± 0.12 |
| LightGCN | 3.47 ± 0.41 | 3.13 ± 0.27 | **9.94 ± 0.23** | **5.49 ± 0.14** | 18.78 ± 0.10 | 12.25 ± 0.07 |
| HyperGNN | **3.89 ± 0.34** | **3.49 ± 0.30** | 8.36 ± 0.41 | 4.34 ± 0.24 | **20.17 ± 0.28** | **13.12 ± 0.21** |
| FedRec | 3.24 ± 0.46 | 2.62 ± 0.34 | 6.32 ± 0.39 | 3.50 ± 0.32 | 15.18 ± 0.35 | 9.27 ± 0.25 |
| FedGNN | 3.38 ± 0.53 | 3.02 ± 0.36 | 7.40 ± 0.44 | 3.89 ± 0.33 | 17.29 ± 0.29 | 10.84 ± 0.13 |
| DMF | 3.15 ± 0.20 | 2.52 ± 0.17 | 6.56 ± 0.36 | 3.62 ± 0.27 | 15.46 ± 0.22 | 9.77 ± 0.14 |
| DLGNN | 3.30 ± 0.37 | 2.84 ± 0.28 | 7.73 ± 0.31 | 4.08 ± 0.24 | 18.50 ± 0.26 | 11.89 ± 0.19 |
| **DGREC** | **3.75 ± 0.41** | **3.27 ± 0.25** | **9.68 ± 0.35** | **5.13 ± 0.21** | **19.91 ± 0.31** | **12.76 ± 0.18** |

Table 2. Performance comparison of DGREC with state-of-the-art methods. In each column, the bold values correspond to the methods with the best and runner-up performances.
| Variant | Book-Crossing recall | Book-Crossing ndcg | Weeplaces recall | Weeplaces ndcg |
| --- | --- | --- | --- | --- |
| w/non-item graph | 6.98 | 3.83 | 16.18 | 10.45 |
| w/non-neighbor | 6.77 | 3.61 | 17.07 | 10.71 |
| w/non-item hypergraph | 9.20 | 4.66 | 17.69 | 11.17 |
| w/non-attention | 9.19 | 4.78 | 19.35 | 12.43 |
| w/non-pearson | 9.41 | 4.84 | 18.53 | 11.88 |
| w/non-sharing | 8.17 | 4.11 | 18.46 | 11.96 |
| **DGREC** | **9.68** | **5.13** | **19.91** | **12.76** |

Table 3. Performance on all variants of DGREC.
Figure 2. Performance with different ratios of public interactions.
mitigate damage from noisy and redundant interests. (4) Adding Laplacian noise to protect user privacy causes performance degradation, whereas secure gradient-sharing is noise-free and hence retains model performance.
### Parameter analyses (RQ4)
We first evaluate the number of recommended items \(K\) for model performance on _Weeplaces_ and depict the results in Figure 4. We can observe that the performance of all models increases with \(K\) and DGREC achieves runner-up performances for all choices of \(K\).
We then evaluate the impacts of different parameters on graph condensation, neighbor sampling, and secure gradient-sharing on _Book-Crossing_ and _Weeplaces_. We give the results of parameter analyses in Figure 3. From it, we observe that: (1) For graph condensation (number of interests \(n_{i}\) and interest dimension \(d_{i}\)), model performance increases with \(d_{i}\) but decreases with \(n_{i}\). A large interest dimension generally results in better interest representations, but redundant interests are bad for model convergence. (2) For neighbor sampling (sampling hops \(H\) and sampling numbers \(n_{u}\)), model performance increases with \(H\) and \(n_{u}\). When we set \(H\) to 1, our gradient propagation is degraded to the gossip training mechanism. Although users and their 1-hop neighbors are more likely to share similar preferences, involving 1-hop neighbors for model training cannot achieve competitive results. The result motivates us to form a more extensive neighborhood to train recommendation models. (3) For secure gradient-sharing mechanism (clip value \(\delta\) and perturbation strength \(\beta\)), our model performance increases with \(\delta\) and \(\beta\). However, setting larger \(\delta\) and \(\beta\) consumes a greater privacy budget, decreasing privacy-preserving strength. In our scenario, setting \(\delta=0.1\) and \(\beta=1\) is sufficient to achieve good results.
## 6. Conclusion
Under strict data protection rules, online platforms and large corporations are now precluded from amassing private user data to generate accurate recommendations. Despite this, the imperative need for recommender systems remains, particularly in mitigating the issue of information overload. The development and refinement of privacy-preserving recommender systems receive increasing attention.
In this paper, we propose DGREC for privacy-preserving recommendation, which achieves superior recommendation performance and provides strong privacy protection. DGREC allows users to publicize their interactions freely and consists of three stages, i.e., graph construction, local gradient calculation, and global gradient passing. In future work, we will extend our research to untrusted environments, where other clients may be semi-honest or malicious, and study cryptographic techniques, e.g., zero-knowledge proofs, to address this issue. We will also investigate defence methods against poisoning attacks.
## Acknowledgment
This work is supported by National Key R&D Program of China (2022YFB4501500, 2022YFB4501504).
Figure 4. Performance with different numbers of recommended items.
Figure 3. Performance with different model parameters. |
2305.15961 | Quantifying the Intrinsic Usefulness of Attributional Explanations for
Graph Neural Networks with Artificial Simulatability Studies | Despite the increasing relevance of explainable AI, assessing the quality of
explanations remains a challenging issue. Due to the high costs associated with
human-subject experiments, various proxy metrics are often used to
approximately quantify explanation quality. Generally, one possible
interpretation of the quality of an explanation is its inherent value for
teaching a related concept to a student. In this work, we extend artificial
simulatability studies to the domain of graph neural networks. Instead of
costly human trials, we use explanation-supervisable graph neural networks to
perform simulatability studies to quantify the inherent usefulness of
attributional graph explanations. We perform an extensive ablation study to
investigate the conditions under which the proposed analyses are most
meaningful. We additionally validate our method's applicability on real-world
graph classification and regression datasets. We find that relevant
explanations can significantly boost the sample efficiency of graph neural
networks and analyze the robustness towards noise and bias in the explanations.
We believe that the notion of usefulness obtained from our proposed
simulatability analysis provides a dimension of explanation quality that is
largely orthogonal to the common practice of faithfulness and has great
potential to expand the toolbox of explanation quality assessments,
specifically for graph explanations. | Jonas Teufel, Luca Torresi, Pascal Friederich | 2023-05-25T11:59:42Z | http://arxiv.org/abs/2305.15961v1 | # Quantifying the Intrinsic Usefulness of Attributional Explanations for Graph Neural Networks with Artificial Simulatability Studies
###### Abstract
Despite the increasing relevance of explainable AI, assessing the quality of explanations remains a challenging issue. Due to the high costs associated with human-subject experiments, various proxy metrics are often used to approximately quantify explanation quality. Generally, one possible interpretation of the quality of an explanation is its inherent value for teaching a related concept to a student. In this work, we extend artificial simulatability studies to the domain of graph neural networks. Instead of costly human trials, we use explanation-supervisable graph neural networks to perform simulatability studies to quantify the inherent _usefulness_ of attributional graph explanations. We perform an extensive ablation study to investigate the conditions under which the proposed analyses are most meaningful. We additionally validate our method's applicability on real-world graph classification and regression datasets. We find that relevant explanations can significantly boost the sample efficiency of graph neural networks and analyze the robustness towards noise and bias in the explanations. We believe that the notion of usefulness obtained from our proposed simulatability analysis provides a dimension of explanation quality that is largely orthogonal to the common practice of faithfulness and has great potential to expand the toolbox of explanation quality assessments, specifically for graph explanations.
Keywords: Graph Neural Networks · Explainable AI · Explanation Quality · Simulatability Study
## 1 Introduction
Explainable AI (XAI) methods are meant to provide explanations alongside a complex model's predictions to make its inner workings more transparent to human operators to improve trust and reliability, provide tools for retrospective model analysis, as well as to comply with anti-discrimination laws [6]. Despite
recent developments and a growing corpus of XAI methods, a recurring challenge remains the question of how to assess the quality of the generated explanations. Since explainability methods aim to improve human understanding of complex models, Doshi-Velez and Kim [6] argue that ultimately the quality of explanations has to be assessed in a human context. To accomplish this, the authors propose the idea of simulatability studies. In that context, human subjects are tasked to simulate the behavior of a machine-learning model given different amounts of information. While a control group of participants receives only the model input-output information, the test group additionally receives the explanations in question. If, in that case, the test group performs significantly better at simulating the behavior, the explanations can be assumed to contain information useful to human understanding of the task. However, human trials such as this are costly and time-consuming, especially considering the number of participants required to obtain a statistically significant result. Therefore, the majority of XAI research is centered around more easily available proxy metrics such as explanation sparsity and faithfulness.
While proxy metrics are an integral part of the XAI evaluation pipeline, we argue that the quantification of usefulness obtained through simulatability studies is an important next step toward comparing XAI methods and thus increasing the impact of explainable AI. Recently, Pruthi _et al._[21] introduce the concept of _artificial simulatability studies_ as a trade-off between cost and meaningfulness. Instead of using human subjects, the authors use explanation-supervisable neural networks as participants to conduct simulatability studies for natural language processing tasks.
In this work, we extend the concept of artificial simulatability studies to the domain of graph neural networks and specifically node and edge attributional explanations thereof. This application has only been enabled through the recent development of sufficiently explanation-supervisable graph neural network approaches [26]. We will henceforth refer to this artificial simulatability approach as the student-teacher analysis of explanation quality: The explanations in question are considered to be the "teachers" that are evaluated on their effectiveness of communicating additional task-related information to explanation-supervisable "student" models. We show that, under the right circumstances, explanation supervision leads to significantly improved main task prediction performance w.r.t. to a reference. We first conduct an extensive ablation study on a specifically designed synthetic dataset to highlight the conditions under which this effect can be optimally observed. Most importantly, we find that the underlying student model architecture has to be sufficiently capable to learn explanations during explanation-supervised training. Our experiments show, that this is especially the case for the self-explaining MEGAN architecture, which was recently introduced by Teufel _et al._[26].
Additionally, we find that the target prediction problem needs to be sufficiently challenging to the student models to see a significant effect. We can furthermore show that while ground truth explanations cause an increase in performance,
deterministically incorrect/adversarial explanations cause a significant decrease in performance. In the same context, random explanation noise merely diminishes the benefit of explanations, but neither causes a significant advantage nor a disadvantage.
Finally, we validate the applicability of our method on explanations for one real-world molecular classification and one molecular regression dataset.
## 2 Related Work
**Simulatability Studies.** Doshi-Velez and Kim [6] introduce the concept of simulatability studies, in which human participants are asked to simulate the forward predictive behavior of a given model. Explanations about the model behavior should be considered useful if a group of participants with access to these explanations performs significantly better than a control group without them. Such studies are only rarely found in the growing corpus of XAI literature due to the high effort and cost associated with them. Nonetheless, some examples of such studies can be found. Chandrasekaran _et al._[4] for example conduct a simulatability study for a visual question answering (VQA) task. The authors investigate the effect of several different XAI methods such as GradCAM and attention among other aspects. They find no significant performance difference for participants when providing explanations. Hase and Bansal [10] conduct a simulatability study for a sentiment classification task. They can only report significant improvements for a small subset of explanation methods. Lai _et al._[14, 13] conduct a simulatability study for a deception detection task. Unlike previously mentioned studies, the authors ask participants to predict ground truth labels instead of simulating a model's predictions. Among different explanation methods, they also investigate the effects of other assistive methods on human performance, such as procedurally generated pre-task tutorials and real-time feedback. The study shows that real-time feedback is crucial to improve human performance. In regard to explanations, the authors find that especially simplistic explanations methods seem to be more useful than more complicated deep-learning-based ones and that providing the polarity of attributional explanations is essential.
Beyond the cost and effort associated with human trials, previous studies report various additional challenges when working with human subjects. One issue seems to be the limited working memory of humans, where participants report forgetting previously seen relevant examples along the way. Another issue is the heterogeneity of participants' abilities, which causes a higher variance in performance results, necessitating larger sample sizes to obtain statistically significant results. Overall, various factors contribute to such studies either not observing any effect at all or reporting only on marginal explanation benefits.
One possible way to address this is proposed by Arora _et al._[2], who argue to rethink the concept of simulatability studies itself. In their work, instead of merely using human subjects as passive predictors, the participants are encouraged to interactively engage with the system. In addition to guessing the model
prediction, participants are asked to make subsequent single edits to the input text with the goal of maximizing the difference in model confidence. The metric of the average confidence deviation per edit can then also be seen as a measure of human understanding of the model's inner workings. The authors argue that such an explorative and interactive study design is generally more suited to the strengths of human subjects and avoids their respective weaknesses.
Another approach is represented by the emergent idea of _artificial simulatability studies_, which generally aim to substitute human participants in these kinds of studies with machine learning models that are able to learn from explanations in a similar manner. There exist early variations of this basic idea [11, 27], for which conceptual problems have been pointed out [21]. Most notably, some methods expose explanations during test time, which may cause label leakage. Recently, Pruthi _et al._[21] devise a method that does not expose explanations during test time by leveraging explanation-supervised model training. They are able to show a statistically significant test performance benefit for various explanation methods, as well as for explanations derived from human experts in natural language processing tasks. In our work, we build on the basic methodology proposed by Pruthi _et al._ and use explanation-supervisable student models to avoid the label-leakage problem. Furthermore, we extend their basic approach toward a more rigorous method. The authors consider the _absolute_ performance of the explanation-supervised student by itself as an indicator of simulatability. We argue that, due to the stochastic nature of neural network training, potential simulatability benefits should only be considered on a statistical level obtained through multiple independent repetitions, only _relative_ to a direct reference, and verified by tests of statistical significance.
#### Explanation Supervision for GNNs
Artificial simulatability studies, as previously discussed, require student models which are capable of _explanation supervision_. This means that it should be possible to directly train the generated explanations to match some given ground truth explanations during the model training phase. Explanation supervision has already been successfully applied in the domains of image processing [16] and natural language processing [3]. However, only recently was the practice successfully adapted to the domain of graph neural networks as well. First, Gao _et al._[8] propose the GNES framework, which aims to use the differentiable nature of various existing post-hoc explanation methods such as GradCAM and LRP to perform explanation supervised training. Teufel _et al._[26] on the other hand introduce the MEGAN architecture which is a specialized attention-based architecture showing especially high potential for explanation-supervision. To the best of our knowledge, these two methods remain the only existing methods for explanation-supervision of graph _attributional_ explanations until now.
In addition to attributional explanations, several other types of explanations have been introduced. Noteworthy examples are prototype-based explanations [23] and concept-based explanations [19]. In the realm of prototype explanations, Zhang _et al._[28] and Dai and Wang [5] introduce self-explaining prototype-based
graph neural networks, although it has not yet been demonstrated if and how explanation-supervision could be applied to them. For concept-based explanations, on the other hand, Magister _et al._[18] demonstrate explanation supervision, opening up the possibility to extend artificial simulatability studies to explanation modalities beyond simple attributional explanations as well.
## 3 Student-Teacher Analysis of Explanation Quality
Simulatability studies aim to assess how useful a set of explanations is in improving human understanding of a related task. To offset the high cost and uncertainty associated with human-subject experiments, Pruthi _et al._[21] introduce artificial simulatability studies, which substitute human participants with explanation-aware neural networks, for natural language processing tasks. In this section, we describe our extension of this principle idea to the application domain of graph neural networks and introduce the novel STS metric which we use to quantify the explanation-induced performance benefit.
We assume a directed graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) is represented by a set of node indices \(\mathcal{V}=\{1,\dots,V\}\) and a set of edges \(\mathcal{E}\subseteq\mathcal{V}\times\mathcal{V}\), where a tuple \((i,j)\in\mathcal{E}\) denotes a directed edge from node \(i\) to node \(j\). Every node \(i\) is associated with a vector of initial node features \(\mathbf{h}_{i}^{(0)}\in\mathbb{R}^{N_{0}}\), combining into the initial node feature tensor \(\mathbf{H}^{(0)}\in\mathbb{R}^{V\times N_{0}}\). Each edge is associated with an edge feature vector \(\mathbf{u}_{ij}^{(0)}\in\mathbb{R}^{M}\), combining into the edge feature tensor \(\mathbf{U}\in\mathbb{R}^{E\times M}\). Each graph is also annotated with a target value vector \(\mathbf{y}^{\text{true}}\in\mathbb{R}^{C}\), which is either a one-hot encoded vector for classification problems or a vector of continuous values for regression problems. For each graph there exist node and edge attributional explanations in the form of a node importance tensor \(\mathbf{V}\in[0,1]^{V\times K}\) and an edge importance tensor
Figure 1: Illustration of the student-teacher training workflow as well as the setting of our artificial simulatability study.
\(\mathbf{E}\in[0,1]^{E\times K}\) respectively. \(K\) is the number of explanation channels and is usually equal to the size \(C\) of the target vector, meaning that for every target value each element of the input graph is annotated with a 0 to 1 value indicating that element's importance.
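To make this notation concrete, the following minimal NumPy sketch instantiates the tensors defined above for a single example graph; all sizes and values are arbitrary illustrative choices, not values from our experiments.

```python
import numpy as np

# Illustrative sizes: V nodes, E directed edges, N0 node features,
# M edge features, C targets, K explanation channels (here K = C).
V, E, N0, M, C = 12, 30, 3, 1, 2
K = C

rng = np.random.default_rng(0)
edges = rng.integers(0, V, size=(E, 2))  # each row is a directed edge (i, j)
H0 = rng.random((V, N0))                 # initial node feature tensor H^(0)
U0 = rng.random((E, M))                  # edge feature tensor U^(0)
y_true = np.eye(C)[1]                    # one-hot target vector (classification)
V_imp = rng.random((V, K))               # node importance tensor in [0, 1]
E_imp = rng.random((E, K))               # edge importance tensor in [0, 1]
```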
In the framework of artificial simulatability studies, human participants are replaced by explanation-aware machine learning models, which will be referred to as _students_. In this analogy, the _teacher_ is represented by the dataset of input graphs and target value annotations, as well as the explanations whose quality is to be determined. Figure 1 illustrates the concept of such a _student-teacher analysis_ of explanation quality. The set \(\mathbb{X}\) of input data consists of tuples \((G,\mathbf{H}^{(0)},\mathbf{U}^{(0)})\) of graphs and their features. The set \(\mathbb{Y}\) consists of tuples \((\mathbf{y},\mathbf{V},\mathbf{E})\) of target value annotations, as well as node and edge attributional explanations. A student is defined as a parametric model \(\mathcal{S}_{\boldsymbol{\theta}}:(G,\mathbf{H}^{(0)},\mathbf{U}^{(0)})\rightarrow(\mathbf{y},\mathbf{V},\mathbf{E})\) with trainable model parameters \(\boldsymbol{\theta}\). This implies, firstly, that every student model has to output explanations directly alongside each prediction. Moreover, these generated explanations have to be actively _supervisable_ for the model to qualify as an explanation-aware student model.
During a single iteration of the student-teacher analysis, the sets of input and corresponding output data are split into a training set \(\mathbb{X}^{\text{train}},\mathbb{Y}^{\text{train}}\) and an unseen test set \(\mathbb{X}^{\text{test}},\mathbb{Y}^{\text{test}}\). Furthermore, two architecturally identical student models are initialized with the same initial model parameters \(\boldsymbol{\theta}\): the reference student model \(\mathcal{S}^{\text{ref}}_{\theta}\) and the explanation-aware student model \(\mathcal{S}^{\text{exp}}_{\theta}\). During the subsequent training phase, the reference student only trains on the main target value annotations \(\mathbf{y}\), while the explanation student is additionally trained on the given explanations. After the two students have been trained on the same training elements with the same hyperparameters, their final prediction performance is evaluated on the unseen test data. If the explanation student outperforms the reference student in this final evaluation, we can assume that the given explanations contain additional task-related information and can thus be considered useful in this context.
However, the training of complex models, such as neural networks, is a stochastic process that generally only converges to a local optimum. For this reason, a single execution of the previously described process is not sufficient to assess a possible performance difference. Rather, a repeated execution is required to confirm the statistical significance of any result. Therefore, we define the student-teacher analysis as the \(R\) repetitions of the previously described process, resulting in the two vectors of test set evaluation performances \(\mathbf{p}^{\text{ref}},\mathbf{p}^{\text{exp}}\in\mathbb{R}^{R}\) for the two student models respectively. The concrete type of metric used to determine the final performance may differ, as is the case with classification and regression problems for example. Based on this definition we define the _student-teacher simulatability_ metric
\[\text{STS}_{R}=\text{median}(\mathbf{p}^{\text{exp}}-\mathbf{p}^{\text{ref}})\]
as the median of the pairwise performance differences between all the individual explanation students' and reference students' evaluation results. We choose
the median here instead of the arithmetic mean, due to its robustness towards outliers, which may occur when models sporadically fail to properly converge in certain iterations of the procedure.
In addition to the calculation of the STS metric, a paired t-test is performed to assess the statistical significance of the results. Only if the p-value of this test is below a 5% significance level should the analysis results be considered meaningful.
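Both the metric and the significance test can be computed in a few lines. The sketch below assumes the two performance vectors have already been collected over the \(R\) repetitions and uses SciPy's paired t-test; the example numbers are placeholders.

```python
import numpy as np
from scipy import stats

def student_teacher_simulatability(p_exp, p_ref, alpha=0.05):
    """STS_R: median of pairwise performance differences over R repetitions."""
    p_exp, p_ref = np.asarray(p_exp), np.asarray(p_ref)
    sts = np.median(p_exp - p_ref)
    # Paired t-test; results should only be considered meaningful
    # when the p-value is below the significance level.
    _, p_value = stats.ttest_rel(p_exp, p_ref)
    return sts, p_value, bool(p_value < alpha)

# Placeholder performance vectors for R = 25 repetitions.
rng = np.random.default_rng(0)
p_ref = rng.normal(0.80, 0.05, size=25)
p_exp = p_ref + rng.normal(0.10, 0.03, size=25)
print(student_teacher_simulatability(p_exp, p_ref))
```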
## 4 Computational Experiments
### Ablation Study for a Synthetic Graph Classification Dataset
We first conduct an ablation study on a specifically designed synthetic graph dataset to show the special conditions under which a performance benefit for the explanation student can be observed.
We call the synthetic dataset created for this purpose _red and blue adversarial motifs_ and a visualization of it can be seen in Figure 2. The dataset consists of 5000 randomly generated graphs where each node is associated with 3 node features representing an RGB color code. Each graph is seeded with one primarily red motif: Half of the elements are seeded with the red and yellow star motif and are consequently labeled as the "active" class. The other half of the elements are seeded with a red and green ring motif and labeled as "inactive". The dataset represents a binary classification problem where each graph will have to be classified as either active or inactive. As each class assignment is entirely based on the existence of the corresponding sub-graph motifs, these motifs are considered the perfect ground truth explanations for that dataset. In addition to the primarily red motifs, each graph is also seeded with one primarily blue motif: Either a blue-yellow ring motif or a blue-green star motif. These blue motifs are seeded such that their distribution is completely uncorrelated with the true class label of the elements. Thus, these motifs are considered deterministically incorrect/adversarial explanations w.r.t. the main classification task.
**Student Model Implementations.** We conduct an experiment to assess the suitability of different student model implementations. As previously explained, a student model has to possess two main properties: node and edge explanations have to be generated alongside each prediction, and, more importantly, it has to be possible to train the models on these explanations in a supervised manner. To the best of our knowledge, there exist two methods from the literature that do this for _attributional_ explanations: the GNES framework of Gao _et al._ [8] and the MEGAN architecture of Teufel _et al._ [26]. We conduct an experiment with \(R=25\) repetitions of the student-teacher analysis for three different models: a lightweight MEGAN model, GNES explanations based on a simple GCN network, and GNES explanations based on a simple GATv2 network. In each iteration, 100 elements of the dataset are used to train the student model while the rest is used during testing. Table 1 shows the results of this experiment. We
report the final STS value, as well as the node and edge AUC metrics, which indicate how well the explanations of the corresponding models match the ground truth explanations of the test set.
Since the perfect ground truth explanations are used for this experiment, we expect the explanation student to have the maximum possible advantage w.r.t. the explanations. The results show that only the MEGAN student achieves a statistically significant STS value, with a median 12% accuracy improvement for the explanation-aware student. The GNES experiments, on the other hand, do not show statistically significant performance benefits. We believe that this is due to the limited effect of the explanation supervision that can be observed in these cases: while the node and edge accuracy of the GNES explanation student only improves by a few percent, the MEGAN explanation student almost perfectly learns the ground truth explanations. This is consistent with the results
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Student Model & STS\({}_{25}\uparrow\) & \multicolumn{2}{c}{Node AUC \(\uparrow\)} & \multicolumn{2}{c}{Edge AUC \(\uparrow\)} \\ & & Ref & Exp & Ref & Exp \\ \hline GNES\({}_{\text{GCN}}\) & 0.02 & 0.55\(\pm\)0.04 & 0.59\(\pm\)0.03 & 0.64\(\pm\)0.04 & 0.66\(\pm\)0.04 \\ GNES\({}_{\text{GATv2}}\) & 0.01 & 0.59\(\pm\)0.05 & 0.61\(\pm\)0.05 & 0.51\(\pm\)0.05 & 0.55\(\pm\)0.04 \\ MEGAN\({}_{0.0}^{2}\) & **0.12\({}^{(*)}\)** & 0.64\(\pm\)0.15 & **0.94\(\pm\)**0.01 & 0.66\(\pm\)0.14 & **0.96\(\pm\)**0.02 \\ \hline \hline \end{tabular} \({}^{(*)}\) Statistically significant according to a paired T-test with \(p<5\%\)
\end{table}
Table 1: Results for 25 repetitions of the student-teacher analysis for different reference models (Ref) and explanation supervised student model (Exp) implementations.
Figure 2: Synthetic dataset used to quantify the usefulness of attributional graph explanations, incl. testing the robustness toward adversarial explanations.
reported by Teufel _et al._ [26], who find that MEGAN outperforms the GNES approach in its capability for explanation supervision. A possible explanation is that the explanation-supervised training of GNES's already gradient-based explanations relies on a second derivative of the network, which may exert a generally weaker influence on the network's weights.
Based on this result, we only investigate the MEGAN student in subsequent experiments.
**Training Dataset Size Sweep.** In this experiment, we investigate the influence of the training dataset size on the explanation performance benefit. For this purpose, we conduct several student-teacher analyses with \(R=25\) repetitions using the MEGAN student architecture. We vary the number of elements used for training between 100, 200, 300, 400, and 500 out of a total of 5000. In each iteration, a training dataset of that size is randomly sampled from the entire dataset and the rest is used during testing. Figure 3 shows the results of this experiment. We visualize the performance distributions of the explanation and reference students for each dataset size and provide the STS metric in each case.
The results show the greatest performance benefit for the smallest training set size of just 100 elements. Afterward, the STS value converges to 0 for 500 elements, losing statistical significance as well. We believe that this is caused by
Figure 3: Results of student-teacher analyses (\(R=25\)) for different training dataset sizes. Each column shows the performance distribution for the reference student (blue) and the explanation student (green) of the student-teacher procedure. The number above each column is the resulting STS value. (*) indicates statistical significance according to a paired T-test with \(p<5\%\)
the convergence of _both_ students to the near-perfect performance of approx. 98% accuracy. In other words, a larger training set size represents a smaller difficulty for the student models. With decreasing difficulty, the students can solve the task almost perfectly by themselves, diminishing any possible benefit of the explanations. We can therefore formulate the rule of thumb that explanations have the potential to provide the greatest benefit when tasks are _more difficult_ and cannot be easily solved without them. As shown in this experiment, reducing the training set size provides such an increase in difficulty. Based on this result, we conduct subsequent experiments with a training set size of 100 to observe the most pronounced effect.
**Explanation Noise Sweep.** For the majority of real-world tasks, perfect ground truth explanations are generally not available. Instead, explanations can be generated through a multitude of XAI methods that have been proposed in recent years. Since complex machine learning models and XAI methods generally only find local optima, it is reasonable to assume that generated explanations are not perfect but rather contain some amount of noise as well. The question is how such explanation noise affects the results of our proposed student-teacher analysis. In this experiment, we perform different student-teacher analyses, where in each case the explanations are overlaid with a certain ratio \(P\%\) of random noise, where \(P\in\{0,5,10,20,40,60,80,100\}\). A ratio \(P\%\) means that the explanation importance value for every element (nodes and edges) in every graph has a \(P\%\) chance of being randomly sampled instead of the ground truth value being used.
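A minimal sketch of this noising procedure is given below, assuming the importance masks are NumPy arrays; the function name is our own.

```python
import numpy as np

def apply_explanation_noise(mask, p, rng):
    """With probability p, replace each importance value by a random one.

    mask: importance tensor in [0, 1], e.g. the node tensor of shape (V, K).
    p:    noise ratio, e.g. 0.4 for the 40% condition.
    """
    replace = rng.random(mask.shape) < p
    return np.where(replace, rng.random(mask.shape), mask)

rng = np.random.default_rng(0)
node_mask = rng.integers(0, 2, size=(12, 2)).astype(float)  # toy ground truth
noisy_mask = apply_explanation_noise(node_mask, p=0.4, rng=rng)
```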
Figure 4: Results of student-teacher analyses (\(R=25\)) for explanations with different ratios of additional explanation noise. Each column shows the performance distribution for the reference student (blue) and the explanation student (green) of the student-teacher procedure. The number above each column is the resulting STS value. (*) indicates statistical significance according to a paired T-test with \(p<5\%\)
Each student-teacher analysis is conducted with a MEGAN student architecture and 100 training data points. Figure 4 shows the results of this experiment.
The results show that there is a statistically significant performance benefit for the explanation student until 40% explanation noise is reached. Afterward, the STS value converges towards zero and loses statistical significance as well. One important aspect to note is that even for high ratios of explanation noise the performance difference converges toward zero. This indicates that explanations consisting almost entirely of _random noise_ do not benefit the performance of a student model, but they do _not negatively influence_ it either. We believe this is the case because random explanations do not cause any learning effect for the model. In our setup of explanation-supervised training, actual explanation labels are not accessible to either student during the testing phase, instead, the models have to learn to replicate the given explanations during training through their own internal explanation-generating mechanisms. Only through these learned replications can any potential advantage or disadvantage be experienced by the models during performance evaluation. Completely random explanations cannot be learned by the models and consequently have no effect during performance evaluation.
**Adversarial Explanation Sweep.** The previous experiment indicates that purely random explanations do not negatively affect the model performance. By contrast, it could be expected that deterministically incorrect explanations on
Figure 5: Results of student-teacher analyses (\(R=25\)) for datasets containing different amounts of adversarial incorrect explanations. Each column shows the performance distribution for the reference student (blue) and the explanation student (green) of the student-teacher procedure. The number above each column is the resulting STS value. (*) indicates statistical significance according to a paired T-test with \(p<5\%\)
the other hand should have a negative influence on the performance. The used dataset is seeded with two families of sub-graph motifs (see Figure 2): the red-based motifs are completely correlated with the two target classes and can thus be considered the perfect explanations for the classification task. The blue-based motifs, on the other hand, are completely uncorrelated with the task and can thus be considered _incorrect/adversarial_ explanations w.r.t. the target labels. In this experiment, increasing amounts of these adversarial explanations are used to substitute the true explanations during the student-teacher analysis to investigate the effect of incorrect explanations on the performance difference. In each iteration, \(Q\%\) of the true explanations are replaced by adversarial explanations, where \(Q\in\{0,5,10,20,40,60,80,100\}\). Each student-teacher analysis is conducted with a MEGAN student architecture and 100 training elements.
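A sketch of this substitution procedure, assuming both the ground truth and the adversarial masks are available per graph; all names are illustrative.

```python
import numpy as np

def substitute_adversarial(true_masks, adversarial_masks, q, rng):
    """Replace the explanations of a random Q-fraction of the graphs
    with the deterministically incorrect (adversarial) ones."""
    n = len(true_masks)
    swapped = rng.choice(n, size=int(round(q * n)), replace=False)
    out = list(true_masks)
    for i in swapped:
        out[i] = adversarial_masks[i]
    return out

rng = np.random.default_rng(0)
true_masks = [np.ones((5, 2)) for _ in range(100)]         # toy placeholders
adversarial_masks = [np.zeros((5, 2)) for _ in range(100)]
mixed = substitute_adversarial(true_masks, adversarial_masks, q=0.2, rng=rng)
```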
The results in Figure 5 show that a statistically significant explanation performance benefit remains for ratios of adversarial explanations for up to 20%. For increasingly large ratios, the STS value still remains positive although the statistical significance is lost. For ratios of 80% and above, statistically significant _negative_ STS values can be observed. This implies that incorrect explanations negatively influence the performance of the explanation-aware student model.
Figure 6: Results of student-teacher analyses (\(R=25\)) for different layer structures of the MEGAN student model. The square brackets indicate the number of hidden units in each layer of the main convolutional part of the network. The normal brackets beneath indicate the number of hidden units in the fully connected layers in the tail-end of the network. Each column shows the performance distribution for the reference student (blue) and the explanation student (green) of the student-teacher procedure. The number above each column is the resulting STS value. (*) indicates statistical significance according to a paired T-test with \(p<5\%\)
**Student Network Layer Structure.** In this experiment, we investigate the influence of the concrete student network layout on the explanation performance benefit. For this purpose, we conduct several student-teacher analyses with \(R=25\) repetitions using the MEGAN student architecture. We vary the number of convolutional and fully-connected layers, as well as the number of hidden units in these layers. Starting with a simple two-layer 3-unit network layout, the number of model parameters, and thus the model's complexity, is gradually increased until the most complex case of a three-layer 20-unit network is reached. Figure 6 shows the results of this experiment. We visualize the performance distributions of the explanation and reference students for each network layout and provide the STS metric in each case.
The results show that the students' prediction performance generally improves for more complex models. However, this is true for the explanation as well as the reference student. While there is still a statistically significant effect for the most complex network layout, it is very marginal because the reference student achieves almost perfect accuracy in these cases as well. On the other hand, the simplest student network layout shows the largest performance benefit. However, for the simple network layouts, the standard deviation of the performance over the various repetitions is greatly increased for both reference and explanation students, but seemingly more so for the explanation student. We generally conclude that both extreme cases of simplistic and complex student network architectures have disadvantages w.r.t. revealing a possible explanation performance benefit. In the end, the best choice is a trade-off between variance in performance and overall capability.
**Node versus Edge Explanations.** We conduct an experiment to determine the relative impact of the node and edge explanations individually. We conduct a student-teacher analysis with \(R=25\) repetitions, using a simple three-layer MEGAN student, where each iteration uses 100 randomly chosen training samples. We investigate three cases: as a baseline, the explanation student uses ground truth node and edge explanations during explanation-supervised training. In the second case, the explanation student is only supplied with the node attributional explanations, and in the last case, only the edge attributional explanations are used. This is achieved by setting the corresponding weighting factors to 0 during training, as sketched below. Table 2 shows the results of this experiment. We report the final STS value, as well as the node and edge AUC metrics, which indicate how well the explanations of the corresponding models match the ground truth explanations of the test set.
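Conceptually, this masking corresponds to weighting factors in the student's combined training objective. The following sketch illustrates the idea; it is not the exact MEGAN loss, and the weight names are our own.

```python
def student_training_loss(loss_y, loss_node, loss_edge, w_node=1.0, w_edge=1.0):
    """Combined training objective of the explanation student.

    Setting w_node = 0 trains on edge explanations only and w_edge = 0
    on node explanations only (the two ablation cases in Table 2);
    the reference student corresponds to w_node = w_edge = 0.
    """
    return loss_y + w_node * loss_node + w_edge * loss_edge
```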
The results show that all three cases achieve statistically significant STS values indicating a performance benefit of the given explanations. Furthermore, in all three cases, the explanations learned by the explanation student show high similarity (AUC \(>0.9\)) to the ground truth explanations for node _as well as_ edge attributions. This implies that the student model is able to infer the corresponding
explanation edges for the ground truth explanatory motifs, even if it is only trained on the nodes, and vice versa. We believe the extent of this property is a consequence of the used MEGAN student architecture, which implements an explicit architectural co-dependency of node and edge explanations to promote the creation of connected explanatory sub-graphs. These results imply that it may be possible to also apply the student-teacher analysis in situations where only node or edge explanations are available.
### Real-World Datasets
In addition to the experiments on the synthetic dataset, we aim to validate the effectiveness of the student-teacher analysis on real-world datasets as well. For this purpose, we choose one graph classification and one graph regression dataset from the application domain of chemistry. We show how the student-teacher analysis can be used to quantify the _usefulness_ of various kinds of explanations for these datasets.
#### Mutagenicity - Graph Classification
To demonstrate the student-teacher analysis of GNN-generated explanations on a real-world graph classification task, we choose the Mutagenicity dataset [9] as the starting point. Being real-world data, this dataset does not come with ground truth explanations, making it hard to compare GNN-generated explanations to a known reference. However, the dataset can be transformed into one with ground truth explanatory subgraph motifs. It is hypothesized that the nitro group (NO\({}_{2}\)) is one of the main causes of mutagenicity [15, 17]. Following the procedure previously proposed by Tan _et al._ [25], we extract a subset containing all molecules that are labeled as mutagenic and contain the benzene-NO\({}_{2}\) group, as well as all molecules that are labeled as non-mutagenic and do not contain that group. Consequently, for the resulting subset, the benzene-NO\({}_{2}\) group can be considered the definitive ground truth explanation for the mutagenic class label. We call the resulting dataset _MutagenicityExp_. It
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Explanations & \(STS_{25}\uparrow\) & \multicolumn{2}{c}{Node AUC \(\uparrow\)} & \multicolumn{2}{c}{Edge AUC \(\uparrow\)} \\ & & Ref & Exp & Ref & Exp \\ \hline Both & 0.12\({}^{(*)}\) & 0.62\({}_{\pm 0.14}\) & 0.95\({}_{\pm 0.03}\) & 0.62\({}_{\pm 0.16}\) & 0.94\({}_{\pm 0.03}\) \\ Nodes & 0.12\({}^{(*)}\) & 0.65\({}_{\pm 0.13}\) & 0.93\({}_{\pm 0.03}\) & 0.65\({}_{\pm 0.12}\) & 0.92\({}_{\pm 0.04}\) \\ Edges & 0.10\({}^{(*)}\) & 0.67\({}_{\pm 0.15}\) & 0.93\({}_{\pm 0.03}\) & 0.67\({}_{\pm 0.12}\) & 0.94\({}_{\pm 0.03}\) \\ \hline \hline \end{tabular} \({}^{(*)}\) Statistically significant according to a paired T-test with \(p<5\%\)
\end{table}
Table 2: Results for 25 repetitions of the student-teacher Analysis conducted with either only node explanations, only edge explanations, or both.
consists of roughly 3500 molecular graphs, where about 700 are labeled as mutagenic. Furthermore, we designate 500 random elements as the test set, which are sampled to achieve a balanced label distribution.
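A sketch of the filtering rule described above, assuming molecules are given as SMILES strings and using RDKit; the SMARTS pattern for the benzene-NO\({}_{2}\) group is our own assumption and may differ from the exact pattern used by Tan _et al._ [25].

```python
from rdkit import Chem

# Assumed SMARTS pattern for a nitro group attached to a benzene ring.
BENZENE_NO2 = Chem.MolFromSmarts("c1ccccc1[N+](=O)[O-]")

def keep_for_mutagenicity_exp(smiles, is_mutagenic):
    """Keep mutagenic molecules that contain the benzene-NO2 group and
    non-mutagenic molecules that do not contain it; drop the rest."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return False  # unparseable molecule
    has_group = mol.HasSubstructMatch(BENZENE_NO2)
    return has_group if is_mutagenic else not has_group

print(keep_for_mutagenicity_exp("c1ccccc1[N+](=O)[O-]", True))  # nitrobenzene: True
print(keep_for_mutagenicity_exp("CCO", False))                  # ethanol: True
```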
Based on this dataset, we train GNN models to solve the classification problem. Additionally, we use multiple different XAI methods to generate attributional explanations for the predictions of those GNNs on the previously mentioned test set of 500 elements. These explanations, generated by the various XAI methods, are then subjected to student-teacher analysis, along with some baseline explanations. The results of an analysis with 25 repetitions can be found in Table 3. The hyperparameters of the student-teacher analysis have been chosen through a brief manual search. We use the same basic three-layer MEGAN student architecture as with the synthetic experiments. In each repetition, 10 random elements are used to train the students, and the remainder is used to assess the final test performance. Each training process employs a batch size of 10, 150 epochs, and a 0.01 learning rate. The student-teacher analysis is performed solely on the previously mentioned 500-element test set, which remained unseen to any of the trained GNN models.
As expected, the results show that the reference random explanations do not produce a statistically significant STS result. These explanations are included as a baseline sanity check, because the previous experiments on the synthetic dataset imply that purely random explanation noise should not have any statistically significant effect on the performance in either direction. The benzene-NO\({}_{2}\) ground truth explanations, on the other hand, show the largest statistically significant STS value of a median 13% accuracy improvement, as well as the largest explanation accuracy of the explanation student models. GNNExplainer
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Explanations by & STS\({}_{25}\uparrow\) & \multicolumn{2}{c}{Node AUC \(\uparrow\)} & \multicolumn{2}{c}{Edge AUC \(\uparrow\)} \\ & & Ref & Exp & Ref & Exp \\ \hline Ground Truth & \(\mathbf{0.13^{(*)}}\) & \(0.42\)\(\pm\)\(0.05\) & \(\mathbf{0.97}\)\(\pm\)\(0.05\) & \(0.41\)\(\pm\)\(0.05\) & \(\mathbf{0.96}\)\(\pm\)\(0.04\) \\ GNNExplainer & \(0.09^{(*)}\) & \(0.50\)\(\pm\)\(0.09\) & \(0.69\)\(\pm\)\(0.05\) & \(0.50\)\(\pm\)\(0.11\) & \(0.71\)\(\pm\)\(0.04\) \\ Gradient & \(0.07^{(*)}\) & \(0.54\)\(\pm\)\(0.18\) & \(0.84\)\(\pm\)\(0.06\) & \(0.46\)\(\pm\)\(0.17\) & \(0.67\)\(\pm\)\(0.10\) \\ MEGAN\({}_{1.0}^{2}\) & \(0.12^{(*)}\) & \(0.55\)\(\pm\)\(0.15\) & \(0.91\)\(\pm\)\(0.01\) & \(0.55\)\(\pm\)\(0.14\) & \(0.92\)\(\pm\)\(0.02\) \\ Random & \(0.01\) & \(0.50\)\(\pm\)\(0.04\) & \(0.50\)\(\pm\)\(0.03\) & \(0.50\)\(\pm\)\(0.04\) & \(0.50\)\(\pm\)\(0.04\) \\ \hline \hline \end{tabular} \({}^{(*)}\) Statistically significant according to a paired T-test with \(p<5\%\)
\end{table}
Table 3: Results for 25 repetitions of the student-teacher analysis for different explanations on the MutagenicityExp dataset. We mark the best result in bold and underline the second best.
and Gradient explanations also show statistically significant STS values of a median 9% and 7% accuracy improvement, respectively. The MEGAN-generated explanations show the overall second-best results, with an STS value just slightly below the ground truth.
We hypothesize that a high explanation accuracy is a necessary but not sufficient condition for high STS results. A higher learned explanation accuracy indicates that the explanations are based on a more consistent set of underlying rules and can consequently be replicated more easily by the student network, which is the basic prerequisite for showing any kind of effect during the student evaluation phase. It is not a sufficient condition because, as shown in the previous adversarial explanation experiment, explanations can be highly deterministic yet conceptually incorrect and thus harmful to model performance.
#### AqSolDB - Graph Regression
The AqSolDB [24] dataset consists of roughly 10000 molecular graphs annotated with experimentally determined logS values for their corresponding solubility in water. Of these, we designate 1000 random elements as the test set.
For the concept of water solubility, there exist no definitive attributional explanations. However, there exists some approximate intuition as to what molecular structures should result in higher/lower solubility values: In a simplified manner, one can say that non-polar substructures such as carbon rings and long carbon chains generally result in lower solubility values, while polar structures such as certain nitrogen and oxygen groups are associated with higher solubility values.
Based on this dataset, we train a large MEGAN model on the training split to regress the water solubility and then generate dual-channel attributional explanations for the previously mentioned 1000-element test split. For this experiment, we only use a MEGAN model, as it is the only XAI method able to create dual-channel explanations for single-value graph regression tasks [26]. These dual-channel explanations take the previously mentioned _polarity of evidence_ into account, where some substructures have an opposing influence on the solubility value: the first explanation channel contains all negatively influencing sub-graph motifs, while the second channel contains the positively influencing motifs. In addition to the MEGAN-generated explanations, we provide two baseline explanation types. Random explanations consist of randomly generated binary node and edge masks of matching shape. Trivial explanations represent the most simple implementation of the previously introduced human intuition about water solubility: the first channel contains all carbon atoms as explanations and the second channel contains all oxygen and nitrogen atoms as explanations.
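These trivial explanations can be constructed directly from the atom symbols; a minimal sketch (the function name is our own, and hydrogen handling is a simplifying assumption):

```python
def trivial_solubility_explanations(atom_symbols):
    """Two-channel node masks from the simplified intuition above:
    channel 0 marks carbon atoms (negative influence on solubility),
    channel 1 marks oxygen and nitrogen atoms (positive influence)."""
    negative = [1.0 if s == "C" else 0.0 for s in atom_symbols]
    positive = [1.0 if s in ("O", "N") else 0.0 for s in atom_symbols]
    return list(zip(negative, positive))

print(trivial_solubility_explanations(["C", "C", "O", "N", "H"]))
```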
The hyperparameters of the student-teacher analysis have been chosen through a brief manual search. We use the same basic three-layer MEGAN student architecture as with the synthetic experiments. In each repetition, 300 random elements are used to train the students, and the remainder is used to assess
the final test performance. Each training process employs a batch size of 32, 150 epochs, and a 0.01 learning rate. The student-teacher analysis is performed solely on the previously mentioned 1000-element test set, which remained unseen to the predictive model during training.
The results show that neither the random nor the trivial explanations result in any significant performance improvement. The MEGAN-generated explanations, on the other hand, result in a significant improvement of a median 0.23 in the final prediction MSE. This implies that the MEGAN-generated explanations do in fact encode additional task-related information that goes beyond the most trivial intuition about the task. However, a possible pitfall w.r.t. this conclusion needs to be pointed out: the MEGAN-generated explanations are evaluated by a MEGAN-based student architecture. It could be that the effect is so strong because these explanations are especially well suited to that kind of architecture, having been generated by the same architecture. We believe that the previous experiments involving architecture-independent ground truth explanations have weakened this argument to an extent. Still, it will be prudent to compare these results with explanations of a different origin in the future, such as the explanations of human experts.
## 5 Limitations
We propose the student-teacher analysis as a means to measure the content of _useful_ task-related information contained within a set of attributional graph explanations. This methodology is inspired by human simulatability studies, but with the decisive advantages of being vastly more time- and cost-efficient as well as more reproducible. However, there are currently some limitations to the applicability of this approach. Firstly, the approach is limited
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline Model & \(STS_{25}\uparrow\) & \multicolumn{2}{c}{Node AUC \(\uparrow\)} & \multicolumn{2}{c}{Edge AUC \(\uparrow\)} \\ & & Ref & Exp & Ref & Exp \\ \hline Random & 0.00 & 0.50\({}_{\pm 0.04}\) & 0.50\({}_{\pm 0.03}\) & 0.50\({}_{\pm 0.04}\) & 0.50\({}_{\pm 0.04}\) \\ Trivial & 0.03 & 0.40\({}_{\pm 0.05}\) & **0.99\({}_{\pm 0.05}\)** & 0.42\({}_{\pm 0.05}\) & **0.99\({}_{\pm 0.04}\)** \\ MEGAN\({}_{1.0}^{2}\) & **0.23\({}^{(*)}\)** & 0.55\({}_{\pm 0.15}\) & 0.90\({}_{\pm 0.01}\) & 0.55\({}_{\pm 0.14}\) & 0.89\({}_{\pm 0.02}\) \\ \hline \hline \end{tabular} \({}^{(*)}\) Statistically significant according to a paired T-test with \(p<5\%\)
\end{table}
Table 4: Results for 25 repetitions of the student-teacher analysis for different explanations on the AqSolDB dataset. We highlight the best result in bold and underline the second best.
to attributional explanations, which assign a 0 to 1 importance value to each element. These kinds of explanations have been found to have issues [12, 1] and recently many different kinds of explanations have been proposed. Some examples are _counterfactuals_[20], _concept-based_ explanations [19], and _prototype-based_ explanations [23].
Another limitation is that the student-teacher analysis process itself depends on many parameters. As we show in previous sections, the size of the training dataset and the specific student architectures have an impact on how pronounced the effect is. For these reasons, the proposed STS metric cannot be used as an absolute measure of quality, such as accuracy for example. Rather, it can be used to relatively _compare_ different sets of explanations under the condition that all experiments are conducted with the same parameters. We propose certain rules of thumb for the selection of these parameters; however, it may still be necessary to conduct a cursory parameter search for each specific application. Despite these limitations, we believe that artificial simulatability studies, as proposed in this work, are an important step toward better practices for the evaluation of explainable AI methods. The currently most widespread metric of explanation quality is the concept of explanation _faithfulness_, which only measures how decisive an explanation is for a model's prediction. We argue that the concept of artificial simulatability is a first step towards a measure of how intrinsically _useful_ explanations can be for the _communication_ of additional task-related information.
## 6 Conclusion
In this work, we extend the concept of artificial simulatability studies to the application domain of graph classification and regression tasks. We propose the student-teacher analysis and the _student-teacher simulatability_ (STS) metric to quantify the content of intrinsically _useful_ task-related information in a given set of node and edge attributional explanations. We conduct an ablation study on a synthetic dataset to investigate the conditions under which an explanation benefit can be observed most clearly and propose several rules of thumb for an initial choice of experimental parameters: the analysis requires a sufficient number of repetitions for statistical significance, a small number of training elements, and a lightweight layer structure for the student model. Furthermore, we show evidence that the analysis method is robust towards small amounts of explanation noise and adversarial explanations. Interestingly, random explanation noise merely suppresses any explanation benefit, while deterministically incorrect explanations cause significant performance degradation. This indicates that the method can be used not only to identify good explanations but also to detect actively harmful ones. Furthermore, we validate the applicability of our proposed analysis on several real-world datasets of molecular classification and regression.
We believe that artificial simulatability studies can provide a valuable additional tool for the evaluation of graph explanations. The student-teacher analysis measures
the _usefulness_ of explanations in communicating task-related knowledge, which can be seen as a complementary dimension to the current widespread practice of measuring explanation faithfulness.
For future work, it will be interesting to extend this process to other kinds of graph explanations that have recently emerged, such as concept-based or prototype-based explanations. Since the method measures the content of task-related information within explanations, another application may lie in educational science: it could be used to assess explanation annotations created by human students, providing quantitative feedback on their understanding of a given graph-related problem. Another line of future work is demonstrated by Fernandes _et al._ [7], who use the differentiable nature of Pruthi _et al._'s [21] original artificial simulatability procedure in a meta-optimization process that attempts to optimize an explanation generator directly for explanation usefulness.
## 7 Reproducibility Statement
We make our experimental code publicly available at [https://github.com/aimat-lab/gnn_student_teacher](https://github.com/aimat-lab/gnn_student_teacher). The code is implemented in the Python 3.9 programming language. Our neural networks are built with the KGCNN library by Reiser _et al._ [22], which provides a framework for graph neural network implementations with TensorFlow and Keras. We make all data used in our experiments publicly available on a file share provider [https://bwsyncandshare.kit.edu/s/E3MynrfQsLAHzJC](https://bwsyncandshare.kit.edu/s/E3MynrfQsLAHzJC). The datasets can be loaded, processed, and visualized with the visual graph datasets package [https://github.com/aimat-lab/visual_graph_datasets](https://github.com/aimat-lab/visual_graph_datasets). All experiments were performed on a system with the following specifications: Ubuntu 22.04 operating system, Ryzen 9 5900 processor, RTX 2060 graphics card, and 80GB of memory. We have aimed to package the various experiments as independent modules, and our code repository contains a brief explanation of how they can be executed. |
2310.00664 | Twin Neural Network Improved k-Nearest Neighbor Regression | Twin neural network regression is trained to predict differences between
regression targets rather than the targets themselves. A solution to the
original regression problem can be obtained by ensembling predicted differences
between the targets of an unknown data point and multiple known anchor data
points. Choosing the anchors to be the nearest neighbors of the unknown data
point leads to a neural network-based improvement of k-nearest neighbor
regression. This algorithm is shown to outperform both neural networks and
k-nearest neighbor regression on small to medium-sized data sets. | Sebastian J. Wetzel | 2023-10-01T13:20:49Z | http://arxiv.org/abs/2310.00664v1 | # Twin Neural Network Improved k-Nearest Neighbor Regression
###### Abstract
Twin neural network regression is trained to predict differences between regression targets rather than the targets themselves. A solution to the original regression problem can be obtained by ensembling predicted differences between the targets of an unknown data point and multiple known anchor data points. Choosing the anchors to be the nearest neighbors of the unknown data point leads to a neural network-based improvement of k-nearest neighbor regression. This algorithm is shown to outperform both neural networks and k-nearest neighbor regression on small to medium-sized data sets.
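As a minimal illustration of the inference rule described above, assuming a trained difference model \(f(x_1,x_2)\approx y_1-y_2\) and pre-selected nearest-neighbor anchors, one could ensemble as follows; all names are illustrative.

```python
import numpy as np

def tnnr_predict(f, x, anchor_xs, anchor_ys):
    """Ensemble a trained difference model f over known anchor points:
    y(x) is estimated as the mean of f(x, x_a) + y_a over all anchors a.
    Choosing the anchors as the k nearest neighbors of x yields the
    k-nearest-neighbor-improved variant described above."""
    diffs = np.array([f(x, xa) for xa in anchor_xs])
    return float(np.mean(diffs + np.asarray(anchor_ys)))

# Toy example with a linear ground truth y = 2x and an exact difference model.
f = lambda x1, x2: 2.0 * (x1 - x2)
print(tnnr_predict(f, 1.5, anchor_xs=[1.0, 2.0], anchor_ys=[2.0, 4.0]))  # 3.0
```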
_Keywords_: Artificial Neural Networks, k-Nearest Neighbors, Regression |
2304.02852 | Classification of Skin Disease Using Transfer Learning in Convolutional
Neural Networks | Automatic classification of skin disease plays an important role in
healthcare especially in dermatology. Dermatologists can determine different
skin diseases with the help of an android device and with the use of Artificial
Intelligence. Deep learning requires a lot of time to train due to the number
of sequential layers and input data involved. Powerful computer involving a
Graphic Processing Unit is an ideal approach to the training process due to its
parallel processing capability. This study gathered images of 7 types of skin
disease prevalent in the Philippines for a skin disease classification system.
There are 3400 images composed of different skin diseases like chicken pox,
acne, eczema, Pityriasis rosea, psoriasis, Tinea corporis and vitiligo that was
used for training and testing of different convolutional network models. This
study used transfer learning to skin disease classification using pre-trained
weights from different convolutional neural network models such as VGG16,
VGG19, MobileNet, ResNet50, InceptionV3, Inception-ResNetV2, Xception,
DenseNet121, DenseNet169, DenseNet201 and NASNet mobile. The MobileNet model
achieved the highest accuracy, 94.1% and the VGG16 model achieved the lowest
accuracy, 44.1%. | Jessica S. Velasco, Jomer V. Catipon, Edmund G. Monilar, Villamor M. Amon, Glenn C. Virrey, Lean Karlo S. Tolentino | 2023-04-06T04:13:54Z | http://arxiv.org/abs/2304.02852v1 | # International Journal of Emerging Technology and Advanced Engineering
###### Abstract
Automatic classification of skin disease plays an important role in healthcare, especially in dermatology. Dermatologists can determine different skin diseases with the help of an Android device and with the use of Artificial Intelligence. Deep learning requires a lot of time to train due to the number of sequential layers and input data involved. A powerful computer with a Graphics Processing Unit is an ideal approach to the training process due to its parallel processing capability. This study gathered images of 7 types of skin disease prevalent in the Philippines for a skin disease classification system. There are 3400 images composed of different skin diseases like chicken pox, acne, eczema, Pityriasis rosea, psoriasis, Tinea corporis, and vitiligo that were used for training and testing of different convolutional network models. This study applied transfer learning to skin disease classification using pre-trained weights from different convolutional neural network models such as VGG16, VGG19, MobileNet, ResNet50, InceptionV3, Inception-ResNetV2, Xception, DenseNet121, DenseNet169, DenseNet201, and NASNet mobile. The MobileNet model achieved the highest accuracy, 94.1%, and the VGG16 model achieved the lowest accuracy, 44.1%.
Skin Disease Classification, Deep Learning, Convolutional Neural Networks, Transfer Learning, Python
## I Introduction
Skin diseases are defined as conditions that typically develop inside the body or on the skin and manifest outside. There are 3000 known types of skin disease [1]. Some conditions are uncommon while others occur commonly. Generally, these conditions bring itch, pain, and sleep deprivation. Other effects of skin diseases include emotional and social impact due to their visible manifestations. However, dermatologists assure that the majority of skin diseases can be controlled with proper medication when properly diagnosed.
An implementation of an accurate and precise automated skin disease detection application that can be used by dermatologists can help reduce their workload.
Big Data refers to gathering and processing data sets to the level where their size and complexity transcend the capacity of conventional data processing applications. It is distinguished by 5Vs: (1) huge volume of data, (2) wide variety of data types, (3) velocity of data processing, (4) variability of data, and (5) value of data [2]. Some of the repositories available online include molecular, clinical, and epidemiological data. This provides a vast space of research opportunities for different scientific advancements [3]. By combining the use of big data, image recognition technology, and the field of dermatology, patients, dermatologists, and the research community might reap a great benefit. This is because many skin diseases can be diagnosed by medical professionals through inspection with the naked eye. The distinct visual features of each condition make them amenable to diagnosis with the use of artificial intelligence and deep learning technologies. Moreover, skin diseases that are common in the Philippines can be easily identified with the use of image recognition technologies. These skin diseases include chicken pox, acne, eczema, pityriasis rosea, psoriasis, tinea corporis, and vitiligo.
Previously, only MobileNet was used as the model for skin disease classification [4]. In this paper, additional learning models are implemented, such as VGG16, VGG19, Xception, ResNet50, InceptionV3, InceptionResNetV2, DenseNet121, DenseNet169, DenseNet201, and NASNet Mobile. They were used to classify skin diseases from images gathered from professional, publicly accessible websites such as photo atlases of dermatology. They were tested to determine whether they outperform the previously implemented MobileNet.
## II Conceptual Literature
### _Transfer Learning_
Human learners have the ability to naturally transfer their knowledge from one task to another. In other words, when faced with new challenges, people can recognize and use the pertinent information from past experiences. The ease of learning a new task depends on how closely it resembles our previous knowledge. In contrast, typical machine learning algorithms address isolated, narrowly defined tasks. Transfer learning aims to change this by creating strategies to use knowledge acquired in one or more source tasks and apply it to enhance learning in a related target task. To make machine learning as effective as human learning, knowledge transfer techniques continue to advance [5].
### _Keras Platform_
A Fully Convolutional Network (FCN) was implemented, designed and developed using Keras, Python, and Theano in the research "Fully convolutional networks for segmenting pictures from an embedded camera" [6]. The FCN is used in this research to perform basic computer vision operations on images from a robot-mounted small stereo imaging sensor.
The network's design was prototyped using the Keras library, which accelerated the search for a network with high accuracy and minimal computing resource usage. The dataset of images is modified to fit the stereo camera imaging acquisition presets for the robot. It was also used for the training and validation of the proposed network.
### _Inception V3_
The Inception-v3 model of the Tensor Flow platform was used by the researchers in the study "Inception-v3 for flower classification" [7] to categorize flowers. The flower category dataset was retrained using transfer learning technology, which can significantly increase flower classification accuracy. In comparison to previous methods, the model's classification accuracy was 95% for the Oxford-17 flower dataset and 94% for the Oxford-102 flower dataset.
### _MobileNet_
Researchers utilized a Convolutional Neural Network model called MobileNet in the study "Driver distraction detection using single convolutional neural network" [8] to identify driver distraction.
MobileNet was found to achieve higher accuracy than Inception-ResNet. Moreover, system results vary widely depending on CPU/GPU processing speed.
### _Inception-ResNet-V2_
The researchers introduced a brand-new family of modules called the PolyInception in their paper "PolyNet: A Pursuit of Structural Diversity in Very Deep Networks" [9]. It can replace various network components in a composition or isolated form with flexibility. Architectural efficiency can be used to choose PolyInception modules to increase expressive capability while maintaining a similar computational cost.
Inception-ResNet-v2 has the highest documented single-model accuracy on ImageNet. It combines the two designs: Inception blocks, the building blocks of GoogLeNet, are used to capture the residuals within the residual structure. Their structures have undergone multiple iterations of optimization and refinement.
### _Vgg-16_
Different CNNs have been introduced with various architectural designs. With smaller convolution filters and strides, VGG-16 consists of 16 layers (13 convolutional layers and 3 fully connected layers). 4096 channels are present in the first two fully connected layers, while 1000 channels are present in the third. With the exception of sampling the inputs from cropped multi-scale training images, VGG-16 uses a nearly identical training process to AlexNet. Using a convolutional neural network, the marine industry can recognize visual objects [10].
### _Vgg-19_
The researchers suggested a method to help the blind by delivering contextual information about the surroundings using 360\({}^{\circ}\) view cameras combined with deep learning in the study "360\({}^{\circ}\) view camera based visual assistive technology for contextual scene information" [11]. The feed gives the user contextual information in the form of audio. This is accomplished by using transfer learning with the pre-trained VGG-19 convolutional neural network (CNN) for classification.
The VGG-19 convolutional neural network is a 19-layer network composed of convolutional layers, max-pooling layers, fully connected layers, and an output softmax layer.
## III Methodology
### _Dataset_
The photos required for the project's development are sourced from the www.dermweb.com photo atlas, notably www.dermnetnz.org, as well as several clinical dermatological photo atlas publications. Acne, Varicella (chickenpox), eczema, Pityriasis rosea, psoriasis, vitiligo, and Tinea corporis are examples of skin diseases depicted in Figure 1. The datasets were compiled using a combination of publicly accessible dermatological repositories, dermatology color picture atlases, and manually acquired photographs. Dermatologists have validated the categorization of the skin disorders.
The dataset comes from a combination of open-access dermatological websites, color atlases of dermatology, and manually taken photographs. It is composed of 7 categories of skin diseases, and each image is in .jpeg format. There is a total of 3,406 images.
### _Experiment_
The system will be built on the Keras platform and will use TensorFlow as its backend. The PyCharm IDE will be used to develop the app. The method can detect skin problems such as acne, eczema, psoriasis, vitiligo, Tinea corporis, chicken pox, and Pityriasis rosea. This is accomplished through the use of convolutional neural network transfer learning models such as VGG16, VGG19, Inception, Xception, ResNet50, DenseNet, and MobileNet.
Figure 1: Sample Images of Dataset
Figure 2: Program flowchart of the Python Code
Referring to Figure 2, the application first organizes imports such as NumPy, Keras, scikit-learn, and Matplotlib.
The dataset is then organized into separate directories for training, validation, and testing. The third step is to load photographs of skin conditions from the category subfolders. The next step is to create a base model from the various pretrained convolutional neural networks. The data is then preprocessed to extract features; Keras includes tools to handle this automatically. Finally, the model's training and testing configuration is set, and the model is trained using the Adam optimizer. To determine which architecture is optimal for classifying skin diseases, the architectures are assessed and compared based on model accuracy, confusion matrix, loading time, and weight size after training.
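The following Keras sketch illustrates this transfer-learning workflow for the MobileNet case; the directory names, image size, and training hyperparameters are illustrative assumptions rather than the exact settings of this study.

```python
import tensorflow as tf

# Assumed directory layout: one subfolder per class under data/train
# and data/validation (7 skin-disease classes in total).
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=(224, 224), batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "data/validation", image_size=(224, 224), batch_size=32)

base = tf.keras.applications.MobileNet(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # transfer learning: freeze the pretrained features

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1.0),  # scale pixels to [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(7, activation="softmax"),       # 7 disease classes
])

model.compile(optimizer=tf.keras.optimizers.Adam(),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)
```

Freezing the base network and training only the new classification head keeps training fast on modest hardware; the base can later be unfrozen for fine-tuning.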
Each pretrained convolutional network expects a different fixed input size, defined by the image width and height and the number of channels. Table I shows the required input size for each model.
## IV Results
The following criteria were looked at to compare and validate each pre-trained convolutional neural network's performance in classifying skin diseases: the confusion matrix, loading speed, accuracy, and weight size.
### _Confusion Matrices_
The confusion matrices of the various models over the seven types of skin diseases are displayed in Figures 3-12. Each row denotes a predicted class, while each column denotes the actual class [13]. It is also known as a matching matrix. This demonstrates the commonality in misclassification across the several convolutional neural networks.
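Such matrices can be computed with scikit-learn, as in the sketch below; the label values are placeholders, and note that scikit-learn's convention places the actual class on the rows, transposed relative to the description above.

```python
from sklearn.metrics import confusion_matrix

classes = ["acne", "chickenpox", "eczema", "pityriasis rosea",
           "psoriasis", "tinea corporis", "vitiligo"]
# Placeholder label indices; in practice these come from the test set.
y_actual = [0, 1, 2, 2, 3, 4, 5, 6, 6]
y_predicted = [0, 1, 2, 3, 3, 4, 5, 6, 2]

# scikit-learn puts actual classes on rows and predicted classes on columns.
cm = confusion_matrix(y_actual, y_predicted, labels=list(range(7)))
print(cm)
```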
Figure 5: InceptionResNetV2 Confusion Matrix
Figure 6: ResNet50 Confusion Matrix
Figure 7: VGG16 Confusion Matrix
Figure 10: DenseNet121 Confusion Matrix
Figure 9: VGG19 Confusion Matrix
Figure 11: DenseNet169 Confusion Matrix
Figure 12: NASNet Mobile Confusion Matrix
## V Conclusion
The MobileNet model outperforms the others with an accuracy of 94.1% and a weight size of 16.823MB. It offers the highest accuracy and the smallest weight size. VGG16 and VGG19, on the other hand, load faster than MobileNet, taking 3.543 and 3.809 seconds, respectively.
### _Acknowledgement_
The authors would like to acknowledge the assistance of the following: Jean Wilmar Alberio, Jonathan Apuang, John Stephen Cruz, Mark Angelo Gomez, Benjamin Molina Jr., and Lyndon Tuala, for their utmost contribution and effort toward the completion of this study.
|
2305.11252 | Brain-inspired learning in artificial neural networks: a review | Artificial neural networks (ANNs) have emerged as an essential tool in
machine learning, achieving remarkable success across diverse domains,
including image and speech generation, game playing, and robotics. However,
there exist fundamental differences between ANNs' operating mechanisms and
those of the biological brain, particularly concerning learning processes. This
paper presents a comprehensive review of current brain-inspired learning
representations in artificial neural networks. We investigate the integration
of more biologically plausible mechanisms, such as synaptic plasticity, to
enhance these networks' capabilities. Moreover, we delve into the potential
advantages and challenges accompanying this approach. Ultimately, we pinpoint
promising avenues for future research in this rapidly advancing field, which
could bring us closer to understanding the essence of intelligence. | Samuel Schmidgall, Jascha Achterberg, Thomas Miconi, Louis Kirsch, Rojin Ziaei, S. Pardis Hajiseyedrazi, Jason Eshraghian | 2023-05-18T18:34:29Z | http://arxiv.org/abs/2305.11252v1 | # Brain-inspired learning in artificial neural networks: a review
###### Abstract
Artificial neural networks (ANNs) have emerged as an essential tool in machine learning, achieving remarkable success across diverse domains, including image and speech generation, game playing, and robotics. However, there exist fundamental differences between ANNs' operating mechanisms and those of the biological brain, particularly concerning learning processes. This paper presents a comprehensive review of current brain-inspired learning representations in artificial neural networks. We investigate the integration of more biologically plausible mechanisms, such as synaptic plasticity, to enhance these networks' capabilities. Moreover, we delve into the potential advantages and challenges accompanying this approach. Ultimately, we pinpoint promising avenues for future research in this rapidly advancing field, which could bring us closer to understanding the essence of intelligence.
sscmni46@jhu.edu
## Introduction
The dynamic interrelationship between memory and learning is a fundamental hallmark of intelligent biological systems. It empowers organisms to not only assimilate new knowledge but also to continuously refine their existing abilities, enabling them to adeptly respond to changing environmental conditions. This adaptive characteristic is relevant on various time scales, encompassing both long-term learning and rapid short-term learning via short-term plasticity mechanisms, highlighting the complexity and adaptability of biological neural systems [1, 2, 3]. The development of artificial systems that draw high-level, hierarchical inspiration from the brain has been a long-standing scientific pursuit spanning several decades. While earlier attempts were met with limited success, the most recent generation of artificial intelligence (AI) algorithms have achieved significant breakthroughs in many challenging tasks. These tasks include, but are not limited to, the generation of images and text from human-provided prompts [4, 5, 6, 7], the control of complex robotic systems [8, 9, 10], and the mastery of strategy games such as Chess and Go [11] and a multimodal amalgamation of these [12].
While ANNs have made significant advancements in various fields, there are still major limitations in their ability to continuously learn and adapt like biological brains [13, 14, 15]. Unlike current models of machine intelligence, animals can learn throughout their entire lifespan, which is essential for stable adaptation to changing environments. This ability, known as lifelong learning, remains a significant challenge for artificial intelligence, which primarily optimizes over fixed labeled datasets and therefore struggles to generalize to new tasks or retain information across repeated learning iterations [14]. Addressing this challenge is an active area of research, and the potential implications of developing AI with lifelong learning abilities could have far-reaching impacts across multiple domains.
In this paper, we offer a unique review that seeks to identify the mechanisms of the brain that have inspired current artificial intelligence algorithms. To better understand the biological processes underlying natural intelligence, the first section explores the low-level processes that support learning in the brain, from synaptic plasticity and neuromodulation to the local and global dynamics that shape neural activity. This will be related back to ANNs in the third section, where we compare and contrast ANNs with biological neural systems. This gives us a logical basis for why the brain has more to offer AI beyond what current artificial models have already inherited. Following that, we will delve into algorithms of artificial learning that emulate these processes to improve the capabilities of AI systems. Finally, we will discuss various applications of these AI techniques in real-world scenarios, highlighting their potential impact on fields such as robotics, lifelong learning, and neuromorphic computing. By doing so, we aim to provide a comprehensive understanding of the interplay between learning mechanisms in the biological brain and artificial intelligence, highlighting the potential benefits that can arise from this synergistic relationship. We hope our findings will encourage a new generation of brain-inspired learning algorithms.
## Processes that support learning in the brain
A grand effort in neuroscience aims at identifying the underlying processes of learning in the brain. Several mechanisms have been proposed to explain the biological basis of
learning at varying levels of granularity, from the synapse to population-level activity. However, the vast majority of biologically plausible models of learning are characterized by _plasticity_ that emerges from the interaction between local and global events [16]. Below, we introduce various forms of plasticity and how these processes interact in more detail.
**Synaptic plasticity.** Plasticity in the brain refers to the capacity of experience to modify the function of neural circuits. The plasticity of synapses specifically refers to the modification of the strength of synaptic transmission based on activity and is currently the most widely investigated mechanism by which the brain adapts to new information [17, 18]. There are two broad classes of synaptic plasticity: short- and long-term plasticity. Short-term plasticity acts on the scale of tens of milliseconds to minutes and has an important role in short-term adaptation to sensory stimuli and short-lasting memory formation [19]. Long-term plasticity acts on the scale of minutes and longer, and is thought to be one of the primary processes underlying long-term behavioral changes and memory storage [20].
**Neuromodulation.** In addition to the plasticity of synapses, another important mechanism by which the brain adapts to new information is neuromodulation [3, 21, 22]. Neuromodulation refers to the regulation of neural activity by chemical signaling molecules, often referred to as neurotransmitters or hormones. These signaling molecules can alter the excitability of neural circuits and the strength of synapses, and can have both short- and long-term effects on neural function. Several neuromodulators have been identified, including acetylcholine, dopamine, and serotonin, which have been linked to various functions such as attention, learning, and emotion [23]. Neuromodulation has been suggested to play a role in various forms of plasticity, including short- [19] and long-term plasticity [22].
**Metaplasticity.** The ability of neurons to modify both their function and structure based on activity is what characterizes synaptic plasticity. These modifications, which occur at the synapse, must be precisely organized so that changes occur at the right time and in the right quantity. This regulation of plasticity is referred to as _metaplasticity_, or the 'plasticity of synaptic plasticity,' and plays a vital role in safeguarding the constantly changing brain from its own saturation [24, 25, 26]. Essentially, metaplasticity alters the ability of synapses to generate plasticity by inducing a change in the physiological state of neurons or synapses. Metaplasticity has been proposed as a fundamental mechanism in memory stability, learning, and the regulation of neural excitability. While similar, metaplasticity can be distinguished from neuromodulation, with metaplastic and neuromodulatory events often overlapping in time during the modification of a synapse.
**Neurogenesis.** The process by which newly formed neurons are integrated into existing neural circuits is referred to as _neurogenesis_. Neurogenesis is most active during embryonic development, but is also known to occur throughout the adult lifetime, particularly in the subventricular zone of the lateral ventricles [27], the amygdala [28], and in the dentate gyrus of the hippocampal formation [29]. In adult mice, neurogenesis has been demonstrated to increase when living in enriched environments versus in standard laboratory conditions [30]. Additionally, many environmental factors such as exercise [31, 32] and stress [33, 34] have been demonstrated to change the rate of neurogenesis in the rodent hippocampus. Overall, while the role of neurogenesis in learning is not fully understood, it is believed to play an important role in supporting learning in the brain.

Figure 1: Graphical depiction of long-term potentiation (LTP) and depression (LTD) at the synapse of biological neurons. \(A\). Synaptically connected pre- and post-synaptic neurons. \(B\). Synaptic terminal, the connection point between neurons. \(C\). Synaptic growth (LTP) and synaptic weakening (LTD). \(D\). _Top._ Membrane potential dynamics in the axon hillock of the neuron. _Bottom._ Pre- and post-synaptic spikes. \(E\). Spike-timing dependent plasticity curve depicting experimental recordings of LTP and LTD.
**Glial cells.** Glial cells, or neuroglia, play a vital role in supporting learning and memory by modulating neurotransmitter signaling at synapses, the small gaps between neurons where neurotransmitters are released and received [35]. Astrocytes, one type of glial cell, can release and reuptake neurotransmitters, as well as metabolize and detoxify them. This helps to regulate the balance and availability of neurotransmitters in the brain, which is essential for normal brain function and learning [36]. Microglia, another type of glial cell, can also modulate neurotransmitter signaling and participate in the repair and regeneration of damaged tissue, which is important for learning and memory [37]. In addition to repair and modulation, structural changes in synaptic strength require the involvement of different types of glial cells, with the most notable influence coming from astrocytes [36]. However, despite their crucial involvement, we have yet to fully understand the role of glial cells. Understanding the mechanisms by which glial cells support learning at synapses is an important area of ongoing research.
## Deep neural networks and plasticity
**Artificial and spiking neural networks.** Artificial neural networks have played a vital role in machine learning over the past several decades. These networks have catalyzed tremendous progress toward solving a variety of challenging problems. Many of the most impressive accomplishments in AI have been realized through the use of large ANNs trained on tremendous amounts of data. While there have been many technical advancements, many of the accomplishments in AI can be explained by innovations in computing technology, such as large-scale GPU accelerators and the accessibility of data. While the application of large-scale ANNs has led to major innovations, many challenges remain ahead. Among the most pressing practical limitations of ANNs are that they are not efficient in terms of power consumption and that they are not very good at processing dynamic and noisy data. In addition, ANNs are not able to learn beyond their training period (e.g., during deployment); their training data is assumed to be independent and identically distributed (IID), with no notion of time, which does not reflect physical reality, where information is highly temporally and spatially correlated. These limitations have led to ANNs requiring vast amounts of energy when deployed in large-scale settings [38] and have also presented challenges toward integration into edge computing devices, such as robotics and wearable devices [39].
Looking toward neuroscience for a solution, researchers have been exploring spiking neural networks (SNNs) as an alternative to ANNs [40]. SNNs are a class of ANNs that are designed to more closely resemble the behavior of biological neurons. The primary difference between ANNs and SNNs is that SNNs incorporate the notion of timing into their communication. Spiking neurons accumulate information across time from connected (presynaptic) neurons (or via sensory input) in the form of a membrane potential. Once a neuron's membrane potential surpasses a threshold value, it fires a binary "spike" to all of its outgoing (postsynaptic) connections. Spikes have been theoretically demonstrated to contain more information than rate-based representations of information (such as in ANNs), despite being both binary and sparse in time [41]. Additionally, modelling studies have shown advantages of SNNs, such as better energy efficiency, the ability to process noisy and dynamic data, and the potential for more robust and fault-tolerant computing [42]. These benefits are not solely attributed to their increased biological plausibility, but also to the unique properties of spiking neural networks that distinguish them from conventional artificial neural networks. A simple working model of a leaky integrate-and-fire (LIF) neuron is described below:
\[\tau_{m}\frac{dV}{dt}=E_{L}-V(t)+R_{m}I_{inj}(t)\]
where \(V(t)\) is the membrane potential at time \(t\), \(\tau_{m}\) is the membrane time constant, \(E_{L}\) is the resting potential, \(R_{m}\) is the membrane resistance, \(I_{inj}(t)\) is the injected current, \(V_{th}\) is the threshold potential, and \(V_{reset}\) is the reset potential. When the membrane potential reaches the threshold potential, the neuron spikes and the membrane potential is reset to the reset potential (if \(V(t)\geq V_{\text{th}}\) then \(V(t)\gets V_{\text{reset}}\)).
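To make these dynamics concrete, the sketch below integrates the LIF equation with the forward Euler method. All constants (membrane parameters, time step, and the injected current pulse) are illustrative assumptions rather than values taken from the text.

```python
import numpy as np

# Minimal LIF neuron simulation; all parameter values are illustrative assumptions.
tau_m, E_L, R_m = 10.0, -65.0, 10.0   # membrane time constant (ms), resting potential (mV), resistance (MOhm)
V_th, V_reset = -50.0, -65.0          # threshold and reset potentials (mV)
dt, T = 0.1, 100.0                    # Euler step and simulation length (ms)

V, spike_times = E_L, []
for step in range(int(T / dt)):
    t = step * dt
    I_inj = 2.0 if 20.0 <= t <= 80.0 else 0.0   # square current pulse (nA)
    V += dt / tau_m * (E_L - V + R_m * I_inj)   # Euler step of tau_m * dV/dt = E_L - V + R_m * I_inj
    if V >= V_th:                               # threshold crossing: emit a spike...
        spike_times.append(t)
        V = V_reset                             # ...and reset the membrane potential
print(f"{len(spike_times)} spikes emitted during the current pulse")
```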
Despite these potential advantages, SNNs are still in the early stages of development, and there are several challenges that need to be addressed before they can be used more widely. One of the most pressing challenges concerns how to optimize the synaptic weights of these models, as traditional backpropagation-based methods from ANNs fail due to the discrete and non-differentiable nature of the spiking nonlinearity. Irrespective of these challenges, some works push the boundaries of what was thought possible with modern spiking networks, such as large spike-based transformer models [43]. Spiking models are of great importance for this review since they form the basis of many brain-inspired learning algorithms.
**Hebbian and spike-timing dependent plasticity.** Hebbian learning and spike-timing dependent plasticity (STDP) are two prominent models of synaptic plasticity that play important roles in shaping neural circuitry and behavior. The Hebbian learning rule, first proposed by Donald Hebb in 1949 [44], posits that synapses between neurons are strengthened when they are coactive, such that the activation of one neuron causally leads to the activation of another. STDP, on the other hand, is a more recently proposed model of synaptic plasticity that takes into account the precise timing of pre- and post-synaptic spikes [45] to determine synaptic strengthening or weakening. It is widely believed that STDP plays a key role in the formation and refinement of neural circuits during development and in the ongoing adaptation of circuits in response to experience. In the following subsections, we provide an overview of the basic principles of Hebbian learning and STDP.
**Hebbian learning.** Hebbian learning is based on the idea that the synaptic strength between two neurons should be increased if they are both active at the same time, and decreased if they are not. Hebb suggested that this increase should occur when one cell "repeatedly or persistently takes part in firing" another cell (with causal implications). However, this principle is often expressed correlatively, as in the famous aphorism "cells that fire together, wire together" (variously attributed to Siegrid Löwel [46] or Carla Shatz [47]).1 Hebbian learning is often used as an unsupervised learning algorithm, where the goal is to identify patterns in the input data without explicit feedback [48]. An example of this process is the Hopfield network, in which large binary patterns are easily stored in a fully-connected recurrent network by applying a Hebbian rule to the (symmetric) weights [49]. It can also be adapted for use in supervised learning algorithms, where the rule is modified to take into account the desired output of the network. In this case, the Hebbian learning rule is combined with a teaching signal that indicates the correct output for a given input.
Footnote 1: As Hebb himself noted, the general idea has a long history. In their review, Brown and colleagues cite William James: “When two elementary brain-processes have been active together or in immediate succession, one of them, on reoccurring, tends to propagate its excitement into the other.”
A simple Hebbian learning rule can be described mathematically using the equation:
\[\Delta w_{ij}=\eta x_{i}x_{j}\]
where \(\Delta w_{ij}\) is the change in the weight between neuron \(i\) and neuron \(j\), \(\eta\) is the learning rate, and \(x_{i}\), \(x_{j}\) denote the "activity" of neurons \(i\) and \(j\), often thought of as their firing rates. This rule states that if the two neurons are activated at the same time, their connection should be strengthened.
One potential drawback of the basic Hebbian rule is its instability. For example, if \(x_{i}\) and \(x_{j}\) are initially weakly positively correlated, this rule will increase the weight between the two, which will in turn reinforce the correlation, leading to even larger weight increases, etc. Thus, some form of stabilization is needed. This can be done simply by bounding the weights, or by more complex rules that take into account additional factors such as the history of the pre- and post-synaptic activity or the influence of other neurons in the network (see ref. [50] for a practical review of many such rules).
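As a worked illustration, the sketch below applies the rule \(\Delta w_{ij}=\eta x_{i}x_{j}\) to a single linear unit and uses the simplest stabilization mentioned above, hard bounds on the weights. The input statistics, learning rate, and bound are arbitrary assumptions chosen for illustration.

```python
import numpy as np

# Hebbian updates on one linear post-synaptic unit, with weight clipping
# as a stabilizer; all numbers are toy assumptions.
rng = np.random.default_rng(0)
eta, w_max = 0.01, 1.0
w = rng.normal(0.0, 0.1, size=8)      # small random initial weights

for _ in range(500):
    x = rng.random(8)                 # pre-synaptic activities
    y = w @ x                         # post-synaptic activity
    w += eta * x * y                  # Hebbian rule: Delta w_ij = eta * x_i * x_j
    np.clip(w, -w_max, w_max, out=w)  # without this bound, the weights grow without limit
print(np.round(w, 2))
```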
**Three-factor rules: Hebbian reinforcement learning.** By incorporating information about rewards, Hebbian learning can also be used for reinforcement learning. An apparently plausible idea is simply to multiply the Hebbian update by the reward directly, as follows:
\[\Delta w_{ij}=\eta x_{i}x_{j}R\]
with \(R\) being the reward (for this time step or for the whole episode). Unfortunately, this idea does not produce reliable reinforcement learning. This can be seen intuitively by noticing that, if \(w_{ij}\) is already at its optimal value, the rule above will still produce a net change and thus drive \(w_{ij}\) away from the optimum.
More formally, as pointed out by Fremaux et al. [53], to properly track the actual covariance between inputs, outputs and rewards, at least one of the terms in the \(x_{i}x_{j}R\) product must be centered, that is, replaced by zero-mean fluctuations around its expected value. One possible solution is to center the rewards, by subtracting a baseline from \(R\), generally equal to the expected value of \(R\) for this trial. While helpful, in practice this solution is generally insufficient.
A more effective solution is to remove the mean value from the _outputs_. This can be done easily by subjecting neural activations \(x_{j}\) to occasional random perturbations \(\Delta x_{j}\), taken from a suitable zero-centered distribution, and then using the perturbation \(\Delta x_{j}\), rather than the raw post-synaptic activation \(x_{j}\), in the three-factor product:
\[\Delta w_{ij}=\eta x_{i}\Delta x_{j}R\]
This is the so-called "node perturbation" rule proposed by Fiete and Seung [54, 55]. Intuitively, notice that the effect of the \(x_{i}\Delta x_{j}\) increment is to push future \(x_{j}\) responses (when encountering the same \(x_{i}\) input) in the direction of the perturbation: larger if the perturbation was positive, smaller if the perturbation was negative. Multiplying this shift by \(R\) results in pushing future responses towards the perturbation if \(R\) was positive, and away from it if \(R\) was negative. Even if \(R\) is not zero-mean, the net effect (in expectation) will still be to drive \(w_{ij}\) towards higher \(R\), though the variance will be higher.
This rule turns out to implement the REINFORCE algorithm (Williams' original paper [56] actually proposes an algorithm which is exactly node-perturbation for spiking stochastic neurons), and thus estimates the theoretical gradient of \(R\) over \(w_{ij}\). It can also be implemented in a biologically plausible manner, allowing recurrent networks to learn non-trivial cognitive or motor tasks from sparse, delayed rewards [57].
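A minimal sketch of node perturbation on a single linear layer is given below. The toy task (producing a fixed target response to a single input pattern), the perturbation scale, and the use of a slowly tracked reward baseline (one of the centering options discussed above) are all assumptions chosen so the example runs standalone.

```python
import numpy as np

# Node perturbation: Delta w_ij = eta * x_i * dx_j * (R - baseline).
rng = np.random.default_rng(0)
x = rng.normal(size=5)                       # a single, fixed input pattern
y_target = rng.normal(size=3)                # desired response to that input
W = np.zeros((3, 5))
eta, sigma, R_bar = 0.01, 0.1, 0.0

for trial in range(5000):
    dx = sigma * rng.normal(size=3)          # zero-mean perturbation of the outputs
    y = W @ x + dx                           # perturbed response
    R = -np.sum((y - y_target) ** 2)         # reward: negative squared error
    W += eta * np.outer(dx, x) * (R - R_bar) # push responses toward rewarded perturbations
    R_bar += 0.05 * (R - R_bar)              # slowly tracked reward baseline
print("final |y - target|:", np.round(np.abs(W @ x - y_target), 3))
```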
**Spike-timing dependent plasticity.** Spike-timing dependent plasticity (STDP) is a theoretical model of synaptic plasticity that allows the strength of connections between neurons to be modified based on the relative timing of their spikes. Unlike the Hebbian learning rule, which relies on the simultaneous activation of pre- and post-synaptic neurons, STDP takes into account the precise timing of the pre- and post-synaptic spikes. Specifically, STDP suggests that if a presynaptic neuron fires just before a postsynaptic neuron, the connection between them should be strengthened. Conversely, if the post-synaptic neuron fires just before the presynaptic neuron, the connection should be weakened.
STDP has been observed in a variety of biological systems, including the neocortex, hippocampus, and cerebellum. The rule has been shown to play a crucial role in the development and plasticity of neural circuits, including learning and memory processes. STDP has also been used as a basis for the development of artificial neural networks, which are designed to mimic the structure and function of the brain.
The mathematical equation for STDP is more complex than the Hebbian learning rule and can vary depending on the specific implementation. However, a common formulation is:
\[\Delta w_{ij}=\begin{cases}A_{+}\exp(-\Delta t/\tau_{+})&\text{if }\Delta t>0\\ -A_{-}\exp(\Delta t/\tau_{-})&\text{if }\Delta t<0\end{cases}\]
where \(\Delta w_{ij}\) is the change in the weight between neuron \(i\) and neuron \(j\), \(\Delta t\) is the time difference between the pre- and post-synaptic spikes, \(A_{+}\) and \(A_{-}\) are the amplitudes of the potentiation and depression, respectively, and \(\tau_{+}\) and \(\tau_{-}\) are the time constants for the potentiation and depression, respectively. This rule states that the strength of the connection between the two neurons will be increased or decreased depending on the timing of their spikes relative to each other.
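The exponential window above translates directly into code. In the sketch below, the amplitudes and time constants are illustrative assumptions, and the net change for a synapse is accumulated over all pre/post spike pairs.

```python
import numpy as np

# Pair-based STDP window; A_+/-, tau_+/- are illustrative assumptions.
A_plus, A_minus = 0.01, 0.012
tau_plus, tau_minus = 20.0, 20.0       # ms

def stdp_dw(delta_t):
    """Weight change for one spike pair; delta_t = t_post - t_pre (ms)."""
    if delta_t > 0:                    # pre fires before post -> potentiation
        return A_plus * np.exp(-delta_t / tau_plus)
    if delta_t < 0:                    # post fires before pre -> depression
        return -A_minus * np.exp(delta_t / tau_minus)
    return 0.0

pre_spikes, post_spikes = [10.0, 50.0], [12.0, 45.0]
dw = sum(stdp_dw(tp - tq) for tq in pre_spikes for tp in post_spikes)
print(f"net weight change: {dw:+.4f}")
```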
## Processes that support learning in artificial neural networks
There are two primary approaches for weight optimization in artificial neural networks: error-driven global learning and brain-inspired local learning. In the first approach, the network weights are modified by driving a global error to its minimum value. This is achieved by delegating error to each weight and synchronizing modifications between the weights. In contrast, brain-inspired local learning algorithms aim to learn in a more biologically plausible manner, by modifying weights from dynamical equations using locally available information. Both optimization approaches have unique benefits and drawbacks. In the following sections we will discuss the most utilized form of error-driven global learning, backpropagation, followed by in-depth discussions of brain-inspired local algorithms. It is worth mentioning that these two approaches are not mutually exclusive, and will often be integrated in order to complement their respective strengths [58, 59, 60, 61].
**Backpropagation.** Backpropagation is a powerful error-driven global learning method which changes the weight of connections between neurons in a neural network to produce a desired target behavior [62]. This is accomplished through the use of a quantitative metric (an objective function) that describes the quality of a behavior given sensory information (e.g. visual input, written text, robotic joint positions). The backpropagation algorithm consists of two phases: the forward pass and the backward pass. In the forward pass, the input is propagated through the network, and the output is calculated. During the backward pass, the error between the predicted output and the "true" output is calculated, and the gradients of the loss function with respect to the weights of the network are calculated by propagating the error backwards through the network. These gradients are then used to update the weights of the network using an optimization algorithm such as stochastic gradient descent. This process is repeated for many iterations until the weights converge to a set of values that minimize the loss function.
Let us take a brief look at the mathematics of backpropagation. First, we define a desired loss function, which is a function of the network's outputs and the true values:
\[L(y,\hat{y})=\frac{1}{2}\sum_{i}(y_{i}-\hat{y}_{i})^{2}\]
where \(y\) is the true output and \(\hat{y}\) is the network's output. In this case we are minimizing the squared error, but we could just as well optimize any smooth and differentiable loss function. Next, we use the chain rule to calculate the gradient of the loss with respect to the weights of the network.

Figure 2: **There are strong parallels between artificial and brain-like learning algorithms.** _Left, top:_ graphical depiction of a rodent and a cluster of interconnected neurons. _Left, middle:_ a rodent participating in the Morris water maze task to test its learning capabilities. _Left, bottom:_ a graphical depiction of a biological pre- and post-synaptic pyramidal neuron. _Right, top:_ a rodent musculoskeletal physics model with artificial neural network policy and critic heads regulating learning and control (see ref. [58]). _Right, middle:_ a virtual maze environment used for benchmarking learning algorithms (see ref. [58]). _Right, bottom:_ an artificial pre- and post-synaptic neuron with forward propagation equations.

Let \(w^{l}_{ij}\) be the weight between neuron \(i\) in layer \(l\) and neuron \(j\) in layer \(l+1\), and let \(a^{l}_{i}\) be the activation of neuron \(i\) in layer \(l\). Then, the gradients of the loss with respect to the weights are given by:
\[\frac{\partial L}{\partial w^{l}_{ij}}=\frac{\partial L}{\partial a^{l+1}_{j}} \frac{\partial a^{l+1}_{j}}{\partial z^{l+1}_{j}}\frac{\partial z^{l+1}_{j}}{ \partial w^{l}_{ij}}\]
where \(z^{l+1}_{j}\) is the weighted sum of the inputs to neuron \(j\) in layer \(l+1\). We can then use these gradients to update the weights of the network using gradient descent:
\[w^{l}_{ij}=w^{l}_{ij}-\alpha\frac{\partial L}{\partial w^{l}_{ij}}\]
where \(\alpha\) is the learning rate. By repeatedly calculating the gradients and updating the weights, the network gradually learns to minimize the loss function and make more accurate predictions. In practice, gradient descent methods are often combined with approaches to incorporate momentum in the gradient estimate, which has been shown to significantly improve generalization [63].
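The equations above can be exercised end to end on a tiny network. The two-layer architecture, tanh hidden nonlinearity, and random data below are arbitrary assumptions; the point is only to show the forward pass, the chain-rule backward pass, and the gradient-descent update in one place.

```python
import numpy as np

# Manual backpropagation on a 4-3-2 network with squared-error loss.
rng = np.random.default_rng(0)
x, y = rng.normal(size=4), rng.normal(size=2)   # one training example
W1 = rng.normal(0.0, 0.5, (3, 4))               # layer-1 weights
W2 = rng.normal(0.0, 0.5, (2, 3))               # layer-2 weights
alpha = 0.1                                     # learning rate

for _ in range(200):
    # forward pass
    a1 = np.tanh(W1 @ x)
    y_hat = W2 @ a1                             # linear output layer
    # backward pass (chain rule)
    dL_dy = y_hat - y                           # gradient of 0.5 * sum (y - y_hat)^2
    dW2 = np.outer(dL_dy, a1)
    dz1 = (W2.T @ dL_dy) * (1.0 - a1 ** 2)      # tanh'(z) = 1 - tanh(z)^2
    dW1 = np.outer(dz1, x)
    # gradient-descent update: w <- w - alpha * dL/dw
    W1 -= alpha * dW1
    W2 -= alpha * dW2
print("final loss:", 0.5 * np.sum((y - W2 @ np.tanh(W1 @ x)) ** 2))
```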
The impressive accomplishments of backpropagation have led neuroscientists to investigate whether it can provide a better understanding of learning in the brain. While it remains debated as to whether backpropagation variants could occur in the brain [64, 65], it is clear that backpropagation in its current formulation is biologically implausible. Alternative theories suggest complex feedback circuits or the interaction of local activity and top-down signals (a "third-factor") could support a similar form of backprop-like learning [64].
Despite its impressive performance, there are still fundamental algorithmic challenges that follow from repeatedly applying backpropagation to network weights. One such challenge is a phenomenon known as catastrophic forgetting, where a neural network forgets previously learned information when training on new data [13]. This can occur when the network is fine-tuned on new data or when the network is trained on a sequence of tasks without retaining the knowledge learned from previous tasks. Catastrophic forgetting is a significant hurdle for developing neural networks that can continuously learn from diverse and changing environments. Another challenge is that backpropagation requires propagating information backwards through all the layers of the network, which can be computationally expensive and time-consuming, especially for very deep networks. This can limit the scalability of deep learning algorithms and make it difficult to train large models on limited computing resources. Nonetheless, backpropagation has remained the most widely used and successful algorithm for applications involving artificial neural networks.
### Evolutionary and genetic algorithms
Another class of global learning algorithms that has gained significant attention in recent years are evolutionary and genetic algorithms. These algorithms are inspired by the process of natural selection and, in the context of ANNs, aim to optimize the weights of a neural network by mimicking the evolutionary process.
In _genetic algorithms_ [66], a population of neural networks is initialized with random weights, and each network is evaluated on a specific task or problem. The networks that perform better on the task are then selected for reproduction, whereby they produce offspring with slight variations in their weights. This process is repeated over several generations, with the best-performing networks being used for reproduction, so that their behaviors become more prevalent across generations. _Evolutionary algorithms_ operate similarly to genetic algorithms but take a different approach, approximating a stochastic gradient [67, 68]. This is accomplished by perturbing the weights and combining the resulting objective-function scores to update the parameters. The result is a more global search of the weight space that can be more efficient at finding optimal solutions compared to local search methods like backpropagation [69].
One advantage of these algorithms is their ability to search a vast parameter space efficiently, making them suitable for problems with large numbers of parameters or complex search spaces. Additionally, they do not require a differentiable objective function, which can be useful in scenarios where the objective function is difficult to define or calculate (e.g. spiking neural networks). However, these algorithms also have some drawbacks. One major limitation is the high computational cost required to evaluate and evolve a large population of networks. Another challenge is the potential for the algorithm to become stuck in local optima or to converge too quickly, resulting in suboptimal solutions. Additionally, the use of random mutations can lead to instability and unpredictability in the learning process.
Regardless, evolutionary and genetic algorithms have shown promising results in various applications, particularly when optimizing non-differentiable and non-trivial parameter spaces. Ongoing research is focused on improving the efficiency and scalability of these algorithms, as well as discovering where and when it makes sense to use these approaches instead of gradient descent.
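The sketch below illustrates the gradient approximation used by evolution strategies: sample a population of weight perturbations, score each one, and move the parameters along the fitness-weighted average of the perturbations. The fitness function (a toy stand-in for a network's objective) and all hyperparameters are assumptions made for illustration.

```python
import numpy as np

# Evolution-strategies gradient estimate on a toy fitness landscape.
rng = np.random.default_rng(0)
target = rng.normal(size=10)                    # hidden optimum defining the "task"
theta = np.zeros(10)                            # parameters being evolved
sigma, alpha, pop = 0.1, 0.05, 50               # noise scale, step size, population size

def fitness(params):
    return -np.sum((params - target) ** 2)      # toy stand-in for a network objective

for gen in range(300):
    eps = rng.normal(size=(pop, theta.size))    # population of weight perturbations
    F = np.array([fitness(theta + sigma * e) for e in eps])
    F = (F - F.mean()) / (F.std() + 1e-8)       # normalize fitness scores
    theta += alpha / (pop * sigma) * eps.T @ F  # fitness-weighted combination of perturbations
print("final fitness:", round(float(fitness(theta)), 4))
```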
## Brain-inspired representations of learning in artificial neural networks
### Local learning algorithms
Unlike global learning algorithms such as backpropagation, which require information to be propagated through the entire network, local learning algorithms focus on updating synaptic weights based on local information from nearby or synaptically connected neurons. These approaches are often strongly inspired by the plasticity of biological synapses. As will be seen, by leveraging local learning algorithms, ANNs can learn more efficiently and adapt to changing input distributions, making them better suited for real-world applications. In this section, we will review recent advances in brain-inspired local learning algorithms and their potential for improving the performance and robustness of ANNs.
### Backpropagation-derived local learning
Backpropagation-derived local learning algorithms are a class of local learning algorithms that attempt to emulate
the mathematical properties of backpropagation. Unlike the traditional backpropagation algorithm, which involves propagating error signals back through the entire network, backpropagation-derived local learning algorithms update synaptic weights based on local error gradients computed using backpropagation. This approach is computationally efficient and allows for online learning, making it suitable for applications where training data is continually arriving.
One prominent example of backpropagation-derived local learning algorithms is the Feedback Alignment (FA) algorithm [70, 71], which replaces the weight-transport matrix used in backpropagation with a fixed random matrix, allowing the error signal to propagate through direct random connections and thus avoiding the need to backpropagate error signals through the forward weights. A brief mathematical description of feedback alignment is as follows: let \(w^{out}\) be the weight matrix connecting the last layer of the network to the output, and \(w^{in}\) be the weight matrix connecting the input to the first layer. In Feedback Alignment, the error signal is propagated from the output to the input using the fixed random matrix \(B\), rather than the transpose of \(w^{out}\). The weight updates are then computed using the product of the input and the error signal, \(\Delta w^{in}=-\eta xz\), where \(x\) is the input, \(\eta\) is the learning rate, and \(z\) is the error signal propagated backwards through the network, similar to traditional backpropagation.
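The sketch below contrasts the two passes on a toy two-layer network: the forward pass uses the trained weights, while the backward pass routes the output error through a fixed random matrix \(B\) instead of the transpose of the forward weights. The network sizes, tanh hidden layer, and single training example are assumptions for illustration.

```python
import numpy as np

# Feedback alignment: errors travel backwards through a fixed random matrix B.
rng = np.random.default_rng(0)
x, y = rng.normal(size=4), rng.normal(size=2)   # one training example
W_in = rng.normal(0.0, 0.5, (3, 4))             # input -> hidden weights
W_out = rng.normal(0.0, 0.5, (2, 3))            # hidden -> output weights
B = rng.normal(0.0, 0.5, (3, 2))                # fixed random feedback weights
eta = 0.1

for _ in range(300):
    h = np.tanh(W_in @ x)
    e = W_out @ h - y                           # output error
    z = (B @ e) * (1.0 - h ** 2)                # backward signal via B, not W_out.T
    W_out -= eta * np.outer(e, h)
    W_in -= eta * np.outer(z, x)                # Delta w_in = -eta * z x^T
print("final error:", np.sum((W_out @ np.tanh(W_in @ x) - y) ** 2))
```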
Direct Feedback Alignment [71] (DFA) simplifies the weight-transport chain compared with FA by directly connecting the output-layer error to each hidden layer. The Sign-Symmetry (SS) algorithm is similar to FA except that the feedback weights symmetrically share signs with the forward weights. While FA has exhibited impressive results on small datasets like MNIST and CIFAR, its performance on larger datasets such as ImageNet is often suboptimal [72]. On the other hand, recent studies have shown that the SS algorithm is capable of achieving performance comparable to backpropagation, even on large-scale datasets [73].
Eligibility propagation [59, 74] (e-prop) extends the idea of feedback alignment to spiking neural networks, combining the advantages of both traditional error backpropagation and biologically plausible learning rules, such as spike-timing-dependent plasticity (STDP). For each synapse, the e-prop algorithm computes and maintains an eligibility trace \(e_{ji}(t)=\frac{dz_{j}(t)}{dW_{ji}}\). Eligibility traces measure the total contribution of the synapse to the neuron's current output, taking into account all past inputs [3]. This can be computed and updated in a purely forward manner, without backward passes. The eligibility trace is then multiplied by an estimate of the gradient of the error over the neuron's output, \(L_{j}(t)=\frac{dE(t)}{dz_{j}(t)}\), to obtain the actual weight gradient \(\frac{dE(t)}{dW_{ji}}\). \(L_{j}(t)\) itself is computed from the error at the output neurons, either by using symmetric feedback weights or by using fixed feedback weights, as in feedback alignment. A possible drawback of e-prop is that it requires a real-time error signal \(L_{j}(t)\) at each point in time, since it only takes into account past events and is blind to future errors. In particular, it cannot learn from delayed error signals that extend beyond the time scales of individual neurons (including short-term adaptation) [59], in contrast with methods like REINFORCE and node-perturbation.
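The following is a deliberately simplified, rate-based caricature of the eligibility-trace mechanism: each synapse forward-accumulates a decaying trace of its recent contribution, and a weight change is produced only when that trace coincides with a per-neuron learning signal. Real e-prop operates on spiking recurrent units; the linear units, decay constant, and toy regression task here are assumptions for illustration only.

```python
import numpy as np

# Three-factor sketch: Delta W = eta * learning_signal * eligibility_trace.
rng = np.random.default_rng(0)
n_in, n_out = 6, 2
W = rng.normal(0.0, 0.1, (n_out, n_in))
W_star = rng.normal(0.0, 1.0, (n_out, n_in))  # fixed target mapping ("task")
trace = np.zeros_like(W)
eta, decay = 0.01, 0.9

for t in range(3000):
    x = rng.normal(size=n_in)
    y = W @ x
    trace = decay * trace + x[None, :]        # forward-computed trace (dz_j/dW_ji = x_i for a linear unit)
    L = W_star @ x - y                        # per-neuron learning signal (here: the current error)
    W += eta * L[:, None] * trace             # weight change gated by the learning signal
print("remaining parameter error:", round(float(np.sum((W - W_star) ** 2)), 3))
```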
The work of refs. [75, 76] demonstrates a normative theory for synaptic learning based on recent genetic findings on neuronal signaling architectures [77]. The authors propose that neurons communicate their contribution to the learning outcome to nearby neurons via cell-type-specific local neuromodulation, and that neuron-type diversity and neuron-type-specific local neuromodulation may be critical pieces of the biological credit-assignment puzzle. In this work, the authors instantiate a simplified computational model based on eligibility propagation to explore this theory and show that their model, which includes both dopamine-like temporal-difference signaling and neuropeptide-like local modulatory signaling, leads to improvements over previous methods such as e-prop and feedback alignment.
**Generalization properties.** Techniques in deep learning have made tremendous strides toward understanding the generalization of their learning algorithms. A particularly useful discovery was that flat minima tend to lead to better generalization [78]. What is meant by this is that, given a perturbation \(\epsilon\) in the parameter space (synaptic weight values), more significant performance degradation is observed around _narrower_ minima. Learning algorithms that find _flatter_ minima in parameter space ultimately lead to better generalization.
Recent work has explored the generalization properties exhibited by (brain-inspired) backpropagation-derived local learning rules [79]. Compared with backpropagation through time, backpropagation-derived local learning rules exhibit worse and more variable generalization, which does not improve with the step size because the gradient approximation is poorly aligned with the true gradient. While it is perhaps unsurprising that _local approximations_ of an optimization process will have worse generalization properties than their complete counterpart, this work opens the door toward asking new questions about the best approach to designing brain-inspired learning algorithms. It also raises the question of whether backpropagation-derived local learning rules are even worth exploring given that they are fundamentally going to exhibit _sub-par_ generalization.
In conclusion, while backpropagation-derived local learning rules present themselves as a promising approach to designing brain-inspired learning algorithms, they come with limitations that must be addressed. The poor generalization of these algorithms highlights the need for further research to improve their performance and to explore alternative brain-inspired learning rules.
**Meta-optimized plasticity rules.** Meta-optimized plasticity rules offer an effective balance between error-driven global learning and brain-inspired local learning. Meta-learning can be defined as the automation of the search for learning algorithms themselves: instead of relying on human engineering to describe a learning algorithm, a search process to find that algorithm is employed [80]. The idea of meta-learning naturally extends to brain-inspired learning algorithms, such that the brain-inspired mechanism of learning itself can be optimized, thereby allowing for the _discovery_ of more efficient learning without manual tuning of the rule. In the following section, we discuss various aspects of this research, starting with differentiably optimized synaptic plasticity rules.
### Differentiable plasticity
One instantiation of this principle in the literature is _differentiable plasticity_, a framework that focuses on optimizing synaptic plasticity rules in neural networks through gradient descent [81, 82]. Here, the plasticity rules are expressed in such a way that the parameters governing their dynamics are differentiable, allowing backpropagation to be used for meta-optimization of the plasticity-rule parameters (e.g., the \(\eta\) term in the simple Hebbian rule or the \(A_{+}\) term in the STDP rule). This allows the weight dynamics to precisely solve a task that requires the weights to be optimized during execution time, referred to as intra-lifetime learning.
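A minimal sketch of such a layer, loosely following the Hebbian formulation of refs. [81, 82], is given below: each connection has a fixed weight \(w\), a learnable plasticity coefficient \(\alpha\), and the layer shares a learnable Hebbian rate \(\eta\). The Hebbian trace evolves within a "lifetime," while \(w\), \(\alpha\), and \(\eta\) receive gradients from an outer meta-objective; the loop length and placeholder objective are assumptions for illustration.

```python
import torch

# Differentiable-plasticity layer: the plastic part is a Hebbian trace,
# and the trace dynamics themselves are differentiable.
class PlasticLayer(torch.nn.Module):
    def __init__(self, n):
        super().__init__()
        self.w = torch.nn.Parameter(0.1 * torch.randn(n, n))      # fixed (slow) weights
        self.alpha = torch.nn.Parameter(0.1 * torch.randn(n, n))  # per-synapse plasticity coefficients
        self.eta = torch.nn.Parameter(torch.tensor(0.1))          # meta-learned Hebbian rate

    def forward(self, x, hebb):
        y = torch.tanh(x @ (self.w + self.alpha * hebb))          # effective weight = fixed + plastic
        hebb = (1 - self.eta) * hebb + self.eta * x.t() @ y       # differentiable Hebbian trace update
        return y, hebb

layer, hebb = PlasticLayer(8), torch.zeros(8, 8)
x = torch.randn(1, 8)
for _ in range(5):                    # intra-lifetime loop: only the trace changes
    x, hebb = layer(x, hebb)
loss = x.pow(2).sum()                 # placeholder meta-objective
loss.backward()                       # gradients reach w, alpha, and eta through the trace
print(layer.eta.grad is not None)
```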
Differentiable plasticity rules are also capable of the differentiable optimization of neuromodulatory dynamics [60, 82]. This framework includes two main variants of neuromodulation: global neuromodulation, where the direction and magnitude of weight changes are controlled by a network-output-dependent global parameter, and retroactive neuromodulation, where the effect of past activity is modulated by a dopamine-like signal within a short time window. This is enabled by the use of eligibility traces, which keep track of which synapses contributed to recent activity, while the dopamine-like signal modulates the transformation of these traces into actual plastic changes.
Methods involving differentiable plasticity have seen improvements in a wide range of applications, from sequential associative tasks [83] and familiarity detection [84] to robotic noise adaptation [60]. This method has also been used to optimize short-term plasticity rules [84, 85], which exhibit improved performance in reinforcement and temporal supervised learning problems. While these methods show much promise, differentiable plasticity approaches require a tremendous amount of memory, as backpropagation is used to optimize multiple parameters _for each synapse_ through time. Practical advancements with these methods will likely require parameter sharing [86] or a more memory-efficient form of backpropagation [87].
### Plasticity with spiking neurons
Recent advances in backpropagating through the non-differentiable part of spiking neurons with surrogate gradients have allowed differentiable plasticity to be used to optimize plasticity rules in spiking neural networks [60]. In ref. [61], the capability of this optimization paradigm is demonstrated through the use of a differentiable spike-timing dependent plasticity rule to enable "learning to learn" on an online one-shot continual learning problem and on an online one-shot image class recognition problem. A similar method was used to optimize the third-factor signal using the gradient approximation of e-prop as the plasticity rule, introducing a meta-optimized form of e-prop [88]. Recurrent neural networks tuned by evolution can also be used for meta-optimized learning rules. Evolvable Neural Units [89] (ENUs) introduce a gating structure that controls how the input is processed and stored, and how dynamic parameters are updated. This work demonstrates the evolution of individual somatic and synaptic compartment models of neurons and shows that a network of ENUs can learn to solve a T-maze environment task, independently discovering spiking dynamics and reinforcement-type learning rules.
### Plasticity in RNNs and Transformers
Independent of research aiming at learning plasticity through update rules, Transformers have recently been shown to be good intra-lifetime learners [90, 91, 9]. The process of in-context learning works not through the update of synaptic weights but purely within the network activations. As in Transformers, this process can also happen in recurrent neural networks [92]. While in-context learning appears to be a different mechanism from synaptic plasticity, these processes have been demonstrated to exhibit a strong relationship. One exciting connection discussed in the literature is the realization that parameter-sharing of the meta-learner often leads to the _interpretation of activations as weights_ [93]. This demonstrates that, while these models may have fixed weights, they exhibit some of the same learning capabilities as models with plastic weights. Another connection is that self-attention in the Transformer involves outer and inner products that can be cast as learned weight updates [94] and can even implement gradient descent [95, 96].
### Evolutionary and genetic meta-optimization
Much like differentiable plasticity, evolutionary and genetic algorithms have been used to optimize the parameters of plasticity rules in a variety of applications [97], including adaptation to limb damage in robotic systems [98, 99]. Recent work has also enabled the optimization of both plasticity coefficients and plasticity rule _equations_ through the use of Cartesian genetic programming [100], presenting an automated approach for discovering biologically plausible plasticity rules based on the specific task being solved. In these methods, the genetic or evolutionary optimization process acts similarly to the differentiable process: it optimizes the plasticity parameters in an outer-loop process, while the plasticity rule optimizes the reward in an inner-loop process. These methods are appealing since they have a much lower memory footprint compared to differentiable methods, as they do not require backpropagating errors through time. However, while memory efficient, they often require a tremendous amount of data to reach performance comparable to gradient-based methods [101].
### Self-referential meta-learning
While synaptic plasticity has two levels of learning, the meta-learner and the discovered learning rule, self-referential meta-learning [102, 103] extends this hierarchy. In plasticity approaches, only a subset of the network parameters is updated (e.g., the synaptic weights), whereas the meta-learned update rule remains fixed after meta-optimization. Self-referential architectures enable a neural network to modify all of its parameters in a recursive fashion. Thus, the learner can also modify the meta-learner. This in principle allows arbitrary levels of learning, meta-learning, meta-meta-learning, etc. Some approaches meta-learn the parameter initialization of such a system [102, 104]. Finding this initialization still requires a hard-wired meta-learner. In other works, the network self-modifies in a way that eliminates even this meta-learner [103, 105]. Sometimes the learning rule to be discovered has structural search-space restrictions which simplify self-improvement, such that a gradient-based optimizer can discover itself [106] or an evolutionary algorithm can optimize itself [107]. Despite their differences, both synaptic plasticity and self-referential approaches aim to achieve self-improvement and adaptation in neural networks.
### Generalization of meta-optimized learning rules
The extent to which discovered learning rules generalize to a wide range of tasks is a significant open question; in particular, when should they replace manually derived general-purpose learning rules such as backpropagation? A particular observation that poses a challenge to these methods is that when the search space is large and few restrictions are put on the learning mechanism [92, 108, 109], generalization becomes more difficult. However, toward amending this, in variable-shared meta-learning [93], flexible learning rules were parameterized by parameter-shared recurrent neural networks that locally exchange information to implement learning algorithms that generalize across classification problems not seen during meta-optimization. Similar results have also been shown for the discovery of reinforcement learning algorithms [110].
## Applications of brain-inspired learning
### Neuromorphic Computing
Neuromorphic computing represents a paradigm shift in the design of computing systems, with the goal of creating hardware that mimics the structure and functionality of the biological brain [111, 42, 112]. This approach seeks to develop artificial neural networks that not only replicate the brain's learning capabilities but also its energy efficiency and inherent parallelism. Neuromorphic computing systems often incorporate specialized hardware, such as neuromorphic chips or memristive devices, to enable the efficient execution of brain-inspired learning algorithms [112]. These systems have the potential to drastically improve the performance of machine learning applications, particularly in edge computing and real-time processing scenarios.
A key aspect of neuromorphic computing lies in the development of specialized hardware architectures that facilitate the implementation of spiking neural networks, which more closely resemble the information processing mechanisms of biological neurons. Neuromorphic systems operate based on the principle of brain-inspired local learning, which allows them to achieve high energy efficiency, low-latency processing, and robustness against noise, which are critical for real-world applications [113]. The integration of brain-inspired learning techniques with neuromorphic hardware is vital for the successful application of this technology.
In recent years, advances in neuromorphic computing have led to the development of various platforms, such as Intel's Loihi [114], IBM's TrueNorth [115], and SpiNNaker [116], which offer specialized hardware architectures for implementing SNNs and brain-inspired learning algorithms. These platforms provide a foundation for further exploration of neuromorphic computing systems, enabling researchers to design, simulate, and evaluate novel neural network architectures and learning rules. As neuromorphic computing continues to progress, it is expected to play a pivotal role in the future of artificial intelligence, driving innovation and enabling the development of more efficient, versatile, and biologically plausible learning systems.
### Robotic learning
Brain-inspired learning in neural networks has the potential to overcome many of the current challenges present in the field of robotics by enabling robots to learn and adapt to their environment in a more flexible way [117, 118]. Traditional robotics systems rely on preprogrammed behaviors, which are limited in their ability to adapt to changing conditions. In contrast, as we have shown in this review, neural networks can be trained to adapt to new situations by adjusting their internal parameters based on the data they receive.

Figure 3: A feedforward neural network computes an output given an input by propagating the input information downstream. The precise value of the output is determined by the weights of the synaptic coefficients. To improve the output for a task given an input, the synaptic weights are modified. _Synaptic plasticity_ algorithms represent computational models that emulate the brain's ability to strengthen or weaken synapses (connections between neurons) based on their activity, thereby facilitating learning and memory formation. _Three-factor plasticity_ refers to a model of synaptic plasticity in which changes to the strength of neural connections are determined by three factors: pre-synaptic activity, post-synaptic activity, and a neuromodulatory signal, facilitating more nuanced and adaptive learning processes. The _feedback alignment_ algorithm is a learning technique in which artificial neural networks are trained using random, fixed feedback connections rather than symmetric weight matrices, demonstrating that successful learning can occur without precise backpropagation. _Backpropagation_ is a fundamental algorithm in machine learning and artificial intelligence, used to train neural networks by calculating the gradient of the loss function with respect to the weights in the network.
Because of this natural relationship, brain-inspired learning algorithms have a long history in robotics [117]. Toward this end, synaptic plasticity rules have been introduced for adapting robotic behavior to domain shifts such as motor gains and rough terrain [119, 120, 60, 121], as well as for obstacle avoidance [122, 123, 124] and articulated (arm) control [125, 126]. Brain-inspired learning rules have also been used to explore how learning occurs in the insect brain, using robotic systems as an embodied medium [127, 128, 129, 130].
Deep reinforcement learning (DRL) represents a significant success of brain-inspired learning algorithms, combining the strengths of neural networks with the theory of reinforcement learning in the brain to create autonomous agents capable of learning complex behaviors through interaction with their environment [131, 132, 133]. By utilizing a reward-driven learning process emulating the activity of dopamine neurons [134], as opposed to the minimization of, e.g., a classification or regression error, DRL algorithms guide robots toward learning optimal strategies to achieve their goals, even in highly dynamic and uncertain environments [135, 136]. This powerful approach has been demonstrated in a variety of robotic applications, including dexterous manipulation, robotic locomotion [137], and multi-agent coordination [138].
**Lifelong and online learning.** Lifelong and online learning are essential applications of brain-inspired learning in artificial intelligence, as they enable systems to adapt to changing environments and continuously acquire new skills and knowledge [14]. Traditional machine learning approaches, in contrast, are typically trained on a fixed dataset and lack the ability to adapt to new information or changing environments. The mature brain is an incredible medium for lifelong learning, as it is constantly learning while remaining relatively fixed in size across the span of a lifetime [139]. As this review has demonstrated, neural networks endowed with brain-inspired learning mechanisms, similar to the brain, can be trained to learn and adapt continuously, improving their performance over time.
The development of brain-inspired learning algorithms that enable artificial systems to exhibit this capability has the potential to significantly enhance their performance and capabilities, with wide-ranging implications for a variety of applications. Such systems are particularly useful in situations where data is scarce or expensive to collect, such as in robotics [140] or autonomous systems [141], as they allow a system to learn and adapt in real time rather than requiring large amounts of data to be collected and processed before learning can occur.
One of the primary objectives in the field of lifelong learning is to alleviate a major issue associated with the continuous application of backpropagation on ANNs, a phenomenon known as catastrophic forgetting [13]. Catastrophic forgetting refers to the tendency of an ANN to abruptly forget previously learned information upon learning new data. This happens because the weights in the network that were initially optimized for earlier tasks are drastically altered to accommodate new learning, erasing or overwriting the previous information; the backpropagation algorithm does not inherently factor in the need to preserve previously acquired information while facilitating new learning. Solving this problem has remained a significant hurdle in AI for decades. We posit that by employing brain-inspired learning algorithms, which emulate the dynamic learning mechanisms of the brain, we may be able to capitalize on the proficient problem-solving strategies inherent to biological organisms.
**Toward understanding the brain.** The worlds of artificial intelligence and neuroscience have been greatly benefiting from each other. Deep neural networks, specially tailored for certain tasks, show striking similarities to the human brain in how they handle spatial [142, 143, 144] and visual [145, 146, 147] information. This overlap hints at the potential of artificial neural networks (ANNs) as useful models in our efforts to better understand the brain's complex mechanics. A new movement referred to as _the neuroconnectionist research programme_ [148] embodies this combined approach, using ANNs as a computational language to form and test ideas about how the brain computes. This perspective brings together different research efforts, offering a common computational framework and tools to test specific theories about the brain.
While this review highlights a range of algorithms that imitate the brain's functions, we still have a substantial amount of work to do to fully grasp how learning actually happens in the brain. The use of backpropagation, and backpropagation-like local learning rules, to train large neural networks may provide a good starting point for modelling brain function. Much productive investigation has occurred to see what processes in the brain may operate similarly to backpropagation [64], leading to new perspectives and theories in neuroscience. Even though backpropagation in its current form might not occur in the brain, the idea that the brain might develop similar internal representations to ANNs despite such different mechanisms of learning is an exciting open question that may lead to a deeper understanding of the brain and of AI.
Explorations are now extending beyond static network dynamics to networks that unfold as a function of time, much like the brain. As we further develop algorithms in continual and lifelong learning, it may become clear that our models need to reflect the learning mechanisms observed in nature more closely. This shift in focus calls for the integration of local learning rules, those that mirror the brain's own methods, into ANNs.
We are convinced that adopting more biologically authentic learning rules within ANNs will not only yield the aforementioned benefits, but will also serve to point neuroscience researchers in the right direction. In other words, it is a strategy with a two-fold benefit: not only does it promise to invigorate
innovation in engineering, but it also brings us closer to unravelling the intricate processes at play within the brain. With more realistic models, we can probe deeper into the complexities of brain computation from the novel perspective of artificial intelligence.
## Conclusion
In this review, we investigated the integration of more biologically plausible learning mechanisms into ANNs. This integration presents itself as an important step for both neuroscience and artificial intelligence. It is particularly relevant amidst the tremendous progress that has been made in artificial intelligence with large language models and embedded systems, which are in critical need of more energy-efficient approaches to learning and execution. Additionally, while ANNs are making great strides in these applications, there are still major limitations in their ability to adapt like biological brains, which we see as a primary application of brain-inspired learning mechanisms.
As we strategize for future collaboration between neuroscience and AI toward more detailed brain-inspired learning algorithms, it's important to acknowledge that the past influences of neuroscience on AI have seldom been about a straightforward application of ready-made solutions to machines [149]. More often, neuroscience has stimulated AI researchers by posing intriguing algorithmic-level questions about aspects of animal learning and intelligence. It has provided preliminary guidance towards vital mechanisms that support learning. Our perspective is that by harnessing the insights drawn from neuroscience, we can significantly accelerate advancements in the learning mechanisms used in ANNs. Likewise, experiments using brain-like learning algorithms in AI can accelerate our understanding of neuroscience.
## Acknowledgements
We thank the OpenBioML collaborative workspace, through which several of the authors of this work were connected. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE2139757.
|
2302.05294 | MoreauGrad: Sparse and Robust Interpretation of Neural Networks via
Moreau Envelope | Explaining the predictions of deep neural nets has been a topic of great
interest in the computer vision literature. While several gradient-based
interpretation schemes have been proposed to reveal the influential variables
in a neural net's prediction, standard gradient-based interpretation frameworks
have been commonly observed to lack robustness to input perturbations and
flexibility for incorporating prior knowledge of sparsity and group-sparsity
structures. In this work, we propose MoreauGrad as an interpretation scheme
based on the classifier neural net's Moreau envelope. We demonstrate that
MoreauGrad results in a smooth and robust interpretation of a multi-layer
neural network and can be efficiently computed through first-order optimization
methods. Furthermore, we show that MoreauGrad can be naturally combined with
$L_1$-norm regularization techniques to output a sparse or group-sparse
explanation which are prior conditions applicable to a wide range of deep
learning applications. We empirically evaluate the proposed MoreauGrad scheme
on standard computer vision datasets, showing the qualitative and quantitative
success of the MoreauGrad approach in comparison to standard gradient-based
interpretation methods. | Jingwei Zhang, Farzan Farnia | 2023-01-08T11:28:28Z | http://arxiv.org/abs/2302.05294v1 | # MoreauGrad: Sparse and Robust Interpretation of Neural Networks via Moreau Envelope
###### Abstract
Explaining the predictions of deep neural nets has been a topic of great interest in the computer vision literature. While several gradient-based interpretation schemes have been proposed to reveal the influential variables in a neural net's prediction, standard gradient-based interpretation frameworks have been commonly observed to lack robustness to input perturbations and flexibility for incorporating prior knowledge of sparsity and group-sparsity structures. In this work, we propose MoreauGrad1 as an interpretation scheme based on the classifier neural net's Moreau envelope. We demonstrate that MoreauGrad results in a smooth and robust interpretation of a multi-layer neural network and can be efficiently computed through first-order optimization methods. Furthermore, we show that MoreauGrad can be naturally combined with \(L_{1}\)-norm regularization techniques to output a sparse or group-sparse explanation which are prior conditions applicable to a wide range of deep learning applications. We empirically evaluate the proposed MoreauGrad scheme on standard computer vision datasets, showing the qualitative and quantitative success of the MoreauGrad approach in comparison to standard gradient-based interpretation methods.
Footnote 1: The paper’s code is available at [https://github.com/buyeah1109/MoreauGrad](https://github.com/buyeah1109/MoreauGrad)
## 1 Introduction
Deep neural networks (DNNs) have achieved state-of-the-art performance in many computer vision problems including image classification [1], object detection [2], and medical image analysis [3]. While they manage to attain super-human scores on standard image and speech recognition tasks, a reliable application of deep learning models to real-world problems requires an interpretation of their predictions to help domain experts understand and investigate the basis of their predictions. Over the past few years, developing and analyzing interpretation schemes that reveal the influential features in a neural network's prediction have attracted great interest in the computer vision community.
A standard approach for interpreting neural nets' predictions is to analyze the gradient of their prediction score function at or around an input data point. Such gradient-based interpretation mechanisms result in a feature saliency map revealing the influential variables that locally affect the neural net's assigned prediction score. Three well-known examples of gradient-based interpretation schemes are the simple gradient [4], integrated gradients [5], and DeepLIFT [6] methods. While the mentioned methods have found many applications in explaining neural nets' predictions, they have been observed to lack robustness to input perturbations and to output a dense noisy saliency map in their application to computer vision datasets [7, 8]. Consequently, these gradient-based explanations can be considerably altered by minor random or adversarial input noise.
A widely-used approach to improve the robustness and sharpness of gradient-based interpretations is SmoothGrad [9] which applies Gaussian smoothing to the mentioned gradient-based interpretation methods. As shown by [9], SmoothGrad can significantly boost the visual quality of a neural net's gradient-based
saliency map. On the other hand, SmoothGrad typically leads to a dense interpretation vector and remains inflexible to incorporate prior knowledge of sparsity and group-sparsity structures. Since a sparse saliency map is an applicable assumption to several image classification problems where a relatively small group of input variables can completely determine the image label, a counterpart of SmoothGrad which can simultaneously achieve sparse and robust interpretation will be useful in computer vision applications.
In this paper, we propose a novel approach, which we call _MoreauGrad_, to achieve a provably smooth gradient-based interpretation with potential sparsity or group-sparsity properties. The proposed Moreau-Grad outputs the gradient of a classifier's Moreau envelope which is a useful optimization tool for enforcing smoothness in a target function. We leverage convex analysis to show that MoreauGrad behaves smoothly around an input sample and therefore provides an alternative optimization-based approach to SmoothGrad for achieving a smoothly-changing saliency map. As a result, we demonstrate that similar to SmoothGrad, MoreauGrad offers robustness to input perturbations, since a norm-bounded perturbation will only lead to a bounded change to the MoreauGrad interpretation.
Next, we show that MoreauGrad can be flexibly combined with \(L_{1}\)-norm-based regularization penalties to output sparse and group-sparse interpretations. Our proposed combinations, Sparse MoreauGrad and Group-Sparse MoreauGrad, take advantage of elastic-net [10] and group-norm [11] penalty terms to enforce sparse and group-sparse saliency maps, respectively. We show that these extensions of MoreauGrad preserve the smoothness and robustness properties of the original MoreauGrad scheme. Therefore, our discussion demonstrates the adaptable nature of MoreauGrad for incorporating prior knowledge of sparsity structures in the output interpretation.
Figure 1: Interpretation of Sparse MoreauGrad (ours) vs. standard gradient-based baselines on an ImageNet sample before and after adding a norm-bounded interpretation adversarial perturbation.
Finally, we present the empirical results of our numerical experiments applying MoreauGrad to standard image recognition datasets and neural net architectures. We compare the numerical performance of MoreauGrad with standard gradient-based interpretation baselines. Our numerical results indicate the satisfactory performance of vanilla and \(L_{1}\)-norm-based MoreauGrad in terms of visual quality and robustness. Figure 1 shows the robustness and sparsity of the Sparse MoreauGrad interpretation applied to an ImageNet sample in comparison to standard gradient-based saliency maps. As this and our other empirical findings suggest, MoreauGrad can outperform standard baselines in terms of the sparsity and robustness properties of the output interpretation. In the following, we summarize the main contributions of this paper:
* Proposing MoreauGrad as an interpretation scheme based on a classifier function's Moreau envelope
* Analyzing the smoothness and robustness properties of MoreauGrad by leveraging convex analysis
* Introducing \(L_{1}\)-regularized Sparse MoreauGrad to obtain an interpretation satisfying prior sparsity conditions
* Providing numerical results supporting MoreauGrad over standard image recognition datasets
## 2 Related Work
**Gradient-based Interpretation.** A large body of related works develop gradient-based interpretation methods. Simonyan et al. [4] propose to calculate the gradient of a classifier's output with respect to an input image. The simple gradient approach in [4] has been improved by several related works. Notably, the method of Integrated Gradients [5] is capable of keeping highly relevant pixels in the saliency map by aggregating gradients of image samples. SmoothGrad [9] removes noise in saliency maps by adding Gaussian-random noise to the input image. The CAM method [12] analyzes the information from the global average pooling layer for localization, and Grad-CAM++ [13] improves over Grad-CAM [14] and generates coarse heat-maps with improved multi-object localization. NormGrad [15] focuses on the weight-based gradient to analyze the contribution of each image region. DeepLIFT [6] uses the difference from a reference to propagate an attribution signal. However, the mentioned gradient-based methods do not obtain a sparse interpretation, and their proper combination with \(L_{1}\)-regularization to promote sparsity remains highly non-trivial and challenging. On the other hand, our proposed MoreauGrad can be smoothly equipped with \(L_{1}\)-regularization to output sparse interpretations and can further capture group-sparsity structures.
**Mask-based Interpretation.** Mask-based interpretation methods rely on adversarial perturbations to interpret neural nets. By applying a mask that perturbs the neural net input, a mask-based method measures the importance of the input pixels. This approach to explaining neural nets has been successfully applied in References [16, 17, 18, 19] and has been shown to benefit from dynamic perturbations [20]. More specifically, Dabkowski and Gal [19] introduce a real-time mask-based detection method; Fong and Vedaldi [17] develop a model-agnostic approach with interpretable perturbations; Wagner et al. [16] propose a method that could generate fine-grained visual interpretations. Moreover, Lim et al. [18] leverage local smoothness to enhance their robustness towards samples attacked by PGD [21]. However, [17] and [19] show that perturbation-based interpretation methods are still vulnerable to adversarial perturbations.
We note that the discussed methods depend on optimizing perturbation masks for interpretations, and due to the non-convex nature of neural net loss functions, their interpretation remains sensitive to input perturbations. In contrast, our proposed MoreauGrad can provably smooth the neural net score function, and can adapt to non-convex functions using norm regularization. Hence, MoreauGrad can improve both the sparsity and robustness of the interpretation.
**Robust Interpretation.** The robustness of interpretation methods has been a subject of great interest in the literature. Ghorbani et al. [7] introduce a gradient-based adversarial attack method to alter the neural nets' interpretation. Dombrowski et al. [22] demonstrate that interpretations could be manipulated, and they suggest improving the robustness via smoothing the neural net classifier. Heo et al. [8] propose a manipulation method that is capable of generalizing across datasets. Subramanya et al. [23] create adversarial patches fooling both the classifier and the interpretation.
To improve the robustness, Sparsified-SmoothGrad [24] combines a sparsification technique with Gaussian smoothing to achieve certifiable robustness. The related works [16, 17, 18, 19, 25] discuss the application of adversarial defense methods against classification-based attacks to interpret the prediction of neural net classifiers. We note that these papers' main focus is not on defense schemes against interpretation-based attacks. Specifically, [16] filter gradients internally during backpropagation, and [18] leverage local smoothness to integrate more samples. Unlike the mentioned papers, our work proposes a model-agnostic, optimization-based method which is capable of generating simultaneously sparse and robust interpretations.
## 3 Preliminaries
In this section, we review three standard interpretation methods as well as the notation and definitions in the paper.
### Notation and Definitions
In the paper, we use notation \(\mathbf{X}\in\mathbb{R}^{d}\) to denote the feature vector and \(Y\in\{1,\ldots,k\}\) to denote the label of a sample. In addition, \(f_{\mathbf{w}}:\mathbb{R}^{d}\rightarrow\mathbb{R}^{k}\) denotes a neural net classifier with its weights contained in vector \(\mathbf{w}\in\mathcal{W}\) where \(\mathcal{W}\) is the feasible set of the neural net's weights. Here \(f_{\mathbf{w}}\) maps the \(d\)-dimensional input \(\mathbf{x}\) to a \(k\)-dimensional prediction vector containing the likelihood of each of the \(k\) classes in the classification problem. For every class \(c\in\{1,\ldots,k\}\), we use the notation \(f_{\mathbf{w},c}:\mathbb{R}^{d}\rightarrow\mathbb{R}\) to denote the \(c\)-th entry of \(f_{\mathbf{w}}\)'s output which corresponds to class \(c\).
We use \(\|\mathbf{x}\|_{p}\) to denote the \(\ell_{p}\)-norm of input vector \(\mathbf{x}\). Furthermore, we use notation \(\|\mathbf{x}\|_{p,q}\) to denote the \(\ell_{p,q}\)-group-norm of \(\mathbf{x}\) defined in the following equation for given variable subsets \(S_{1},\ldots,S_{t}\subseteq\{1,\ldots,d\}\):
\[\|\mathbf{x}\|_{p,q}=\big{\|}\left[\|\mathbf{x}_{S_{1}}\|_{p},\ldots,\| \mathbf{x}_{S_{t}}\|_{p}\right]\big{\|}_{q} \tag{1}\]
In other words, \(\|\mathbf{x}\|_{p,q}\) is the \(\ell_{q}\)-norm of a vector containing the \(\ell_{p}\)-norms of the subvectors of \(\mathbf{x}\) characterized by index subsets \(S_{1},\ldots,S_{t}\).
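As a concrete illustration of this notation, the following minimal PyTorch sketch (illustrative, not from the paper; variable names are our own) computes \(\|\mathbf{x}\|_{p,q}\) for a given list of index subsets:

```python
import torch

def group_norm(x, groups, p=2, q=1):
    """l_{p,q} group norm of Eq. (1): the l_q norm of the vector whose
    entries are the l_p norms of the sub-vectors x_{S_1}, ..., x_{S_t}."""
    per_group = torch.stack([x[idx].norm(p=p) for idx in groups])
    return per_group.norm(p=q)

x = torch.tensor([3.0, 4.0, 0.0, 5.0])
groups = [torch.tensor([0, 1]), torch.tensor([2, 3])]
print(group_norm(x, groups))  # ||[5, 5]||_1 = 10
```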
### Gradient-based Saliency Maps
In our theoretical and numerical analysis, we consider the following widely-used gradient-based interpretation baselines which apply to a classifier neural net \(f_{\mathbf{w}}\) and predicted class \(c\) for input \(\mathbf{x}\):
1. **Simple Gradient**: The simple gradient interpretation returns the saliency map of a neural net score function's gradient with respect to input \(\mathbf{x}\): \[\mathrm{SG}\big{(}f_{\mathbf{w},c},\mathbf{x}\big{)}\,:=\,\nabla_{\mathbf{x}} f_{\mathbf{w},c}(\mathbf{x}).\] (2) In the applications of the simple gradient approach, \(c\) is commonly chosen as the neural net's predicted label with the maximum prediction score.
2. **Integrated Gradients:** The integrated gradients approach approximates the integral of the neural net's gradient function between a reference point \(\mathbf{x}^{0}\) and the input \(\mathbf{x}\). Using \(m\) intermediate points on the line segment connecting \(\mathbf{x}^{0}\) and \(\mathbf{x}\), the integrated gradient output will be \[\mathrm{IG}\big{(}f_{\mathbf{w},c},\mathbf{x}\big{)}\,:=\,\frac{\Delta\mathbf{ x}}{m}\sum_{i=1}^{m}\nabla_{\mathbf{x}}f_{\mathbf{w},c}\big{(}\mathbf{x}^{0}+ \frac{i}{m}\Delta\mathbf{x}\big{)}.\] (3) In the above \(\Delta\mathbf{x}:=\mathbf{x}-\mathbf{x}^{0}\) denotes the difference between the target and reference points \(\mathbf{x},\mathbf{x}^{0}\).
3. **SmoothGrad:** SmoothGrad considers the averaged simple gradient score over an additive random perturbation \(Z\) drawn according to an isotropic Gaussian distribution \(Z\sim\mathcal{N}(\mathbf{0},\sigma^{2}I_{d})\). In practice, the SmoothGrad interpretation is estimated over a number \(t\) of independently drawn noise vectors \(\mathbf{z}_{1},\ldots,\mathbf{z}_{t}\stackrel{{\text{i.i.d.}}}{{ \sim}}\mathcal{N}(\mathbf{0},\sigma^{2}I_{d})\) according to the zero-mean Gaussian distribution: \[\text{SmoothGrad}\big{(}f_{\mathbf{w},c},\mathbf{x}\big{)}\,:=\mathbb{E}\big{[} \nabla_{\mathbf{x}}f_{\mathbf{w},c}(\mathbf{x}+Z)\big{]}\;\approx\;\frac{1}{t} \sum_{i=1}^{t}\nabla_{\mathbf{x}}f_{\mathbf{w},c}(\mathbf{x}+\mathbf{z}_{i}).\] (4)
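For reference, a minimal PyTorch sketch of the simple gradient and SmoothGrad baselines is given below. This is an illustrative rendering of Eqs. (2) and (4), not the authors' implementation; the toy network `net` and the hyperparameter values are placeholders.

```python
import torch

def simple_grad(f, x, c):
    """Simple gradient saliency (Eq. (2)): gradient of the class-c score w.r.t. x."""
    x = x.clone().requires_grad_(True)
    f(x.unsqueeze(0))[0, c].backward()
    return x.grad.detach()

def smooth_grad(f, x, c, sigma=0.1, t=50):
    """SmoothGrad (Eq. (4)): average simple gradients over Gaussian input noise."""
    grads = [simple_grad(f, x + sigma * torch.randn_like(x), c) for _ in range(t)]
    return torch.stack(grads).mean(dim=0)

# toy usage: a small softplus network standing in for a trained classifier
net = torch.nn.Sequential(torch.nn.Linear(8, 16), torch.nn.Softplus(),
                          torch.nn.Linear(16, 3))
saliency = smooth_grad(net, torch.randn(8), c=0)
```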
## 4 MoreauGrad: An Optimization-based Interpretation Framework
As discussed earlier, smooth classifier functions with a Lipschitz gradient help to obtain a robust explanation of neural nets. Here, we propose an optimization-based smoothing approach built on Moreau-Yosida regularization. To introduce this approach, we first define a function's Moreau envelope.
**Definition 1**.: _Given regularization parameter \(\rho>0\), we define the Moreau envelope of a function \(g:\mathbb{R}^{d}\to\mathbb{R}\) as:_
\[g^{\rho}(\mathbf{x})\,:=\,\min_{\widetilde{\mathbf{x}}\in\mathbb{R}^{d}}\;g \big{(}\widetilde{\mathbf{x}}\big{)}+\frac{1}{2\rho}\big{\|}\widetilde{ \mathbf{x}}-\mathbf{x}\big{\|}_{2}^{2}. \tag{5}\]
In the above definition, \(\rho>0\) represents the Moreau-Yosida regularization coefficient. Applying the Moreau envelope, we propose the MoreauGrad interpretation as the gradient of the classifier's Moreau envelope at an input \(\mathbf{x}\).
**Definition 2**.: _Given regularization parameter \(\rho>0\), we define the MoreauGrad interpretation \(\mathrm{MG}_{\rho}:\mathbb{R}^{d}\to\mathbb{R}^{d}\) of a neural net \(f_{\mathbf{w}}\) predicting class \(c\) for input \(\mathbf{x}\) as_
\[\mathrm{MG}_{\rho}(f_{\mathbf{w},c},\mathbf{x})\,:=\,\nabla f_{\mathbf{w},c}^{ \rho}(\mathbf{x}).\]
To compute and analyze the MoreauGrad explanation, we first discuss the optimization-based smoothing enforced by the Moreau envelope. Note that the Moreau envelope is known as an optimization tool to turn non-smooth convex functions (e.g. the \(\ell_{1}\)-norm) into smooth functions. Here, we discuss an extension of this result to weakly-convex functions, which covers a class of non-convex functions.
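As a concrete illustration (a classical example from convex analysis, not taken from this paper), consider the one-dimensional function \(g(x)=|x|\). The minimization in Definition 1 shrinks the argument toward \(x\), and the resulting Moreau envelope is the Huber function

\[g^{\rho}(x)=\begin{cases}\frac{x^{2}}{2\rho}&\text{if }|x|\leq\rho,\\ |x|-\frac{\rho}{2}&\text{if }|x|>\rho,\end{cases}\]

whose derivative equals \(x/\rho\) for \(|x|\leq\rho\) and \(\mathrm{sign}(x)\) otherwise, and is therefore \(\frac{1}{\rho}\)-Lipschitz: the kink at the origin is smoothed out, while the function is unchanged up to a constant away from it.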
**Definition 3**.: _A function \(g:\mathbb{R}^{d}\to\mathbb{R}\) is called \(\lambda\)-weakly convex if \(\Phi(\mathbf{x}):=g(\mathbf{x})+\frac{\lambda}{2}\|\mathbf{x}\|_{2}^{2}\) is a convex function, i.e. for every \(\mathbf{x}_{1},\mathbf{x}_{2}\in\mathbb{R}^{d}\) and \(0\leq\alpha\leq 1\) we have:_
\[g\big{(}\alpha\mathbf{x}_{1}+(1-\alpha)\mathbf{x}_{2}\big{)}\;\leq\;\alpha g( \mathbf{x}_{1})+(1-\alpha)g(\mathbf{x}_{2})+\frac{\lambda\alpha(1-\alpha)}{2} \big{\|}\mathbf{x}_{1}-\mathbf{x}_{2}\big{\|}_{2}^{2}.\]
**Theorem 1**.: _Suppose that \(g:\mathbb{R}^{d}\to\mathbb{R}\) is a \(\lambda\)-weakly convex function. Assuming that \(0<\rho<\frac{1}{\lambda}\), the followings hold for the optimization problem of the Moreau envelope \(g^{\rho}\) and the optimal solution \(\widetilde{x}_{\rho}^{*}(\mathbf{x})\) solving the optimization problem:_
1. _The gradients of_ \(g^{\rho}\) _and_ \(g\) _are related as for every_ \(\mathbf{x}\)_:_ \[\nabla g^{\rho}(\mathbf{x})=\nabla g\big{(}\widetilde{x}_{\rho}^{*}(\mathbf{x })\big{)}.\]
2. _The difference_ \(\widetilde{x}_{\rho}^{*}(\mathbf{x})-\mathbf{x}\) _is aligned with_ \(g^{\rho}\)_'s gradient:_ \[\nabla g^{\rho}(\mathbf{x})=\frac{-1}{\rho}\big{(}\,\widetilde{x}_{\rho}^{*}( \mathbf{x})-\mathbf{x}\,\big{)}.\]
3. \(g^{\rho}\) _will be_ \(\max\{\frac{1}{\rho},\frac{\lambda}{1-\rho\lambda}\}\)_-smooth, i.e. for every_ \(\mathbf{x}_{1},\mathbf{x}_{2}\)_:_ \[\big{\|}\nabla g^{\rho}(\mathbf{x}_{1})-\nabla g^{\rho}(\mathbf{x}_{2})\big{\|} _{2}\,\leq\,\frac{1}{\min\big{\{}\rho,\frac{1}{\lambda}-\rho\big{\}}}\big{\|} \mathbf{x}_{1}-\mathbf{x}_{2}\big{\|}_{2}.\]
Proof.: This theorem is known for convex functions. In the Appendix, we provide another proof for the result.
**Corollary 1**.: _Assume that the prediction score function \(f_{\mathbf{w},c}:\mathbb{R}^{d}\to\mathbb{R}\) is \(\lambda\)-weakly convex. Then, the MoreauGrad interpretation \(\mathrm{MG}_{\rho}\) will remain robust under an \(\epsilon\)-\(\ell_{2}\)-norm bounded perturbation \(\|\mathbf{\delta}\|_{2}\leq\epsilon\) as_
\[\left\|\mathrm{MG}_{\rho}(\mathbf{x}+\mathbf{\delta})-\mathrm{MG}_{\rho}(\mathbf{ x})\right\|_{2}\leq\frac{\epsilon}{\min\bigl{\{}\rho,\frac{1}{\lambda}-\rho \bigr{\}}}.\]
The above results imply that by choosing a small enough coefficient \(\rho\) the Moreau envelope will be a differentiable smooth function. Moreover, the computation of the Moreau envelope will reduce to a convex optimization task that can be solved by standard or accelerated gradient descent with global convergence guarantees. Therefore, one can efficiently compute the MoreauGrad interpretation by solving the optimization problem via the gradient descent algorithm. Algorithm 1 applies gradient descent to compute the solution to the Moreau envelope optimization which according to Theorem 1 yields the MoreauGrad explanation.
As discussed above, MoreauGrad will be provably robust as long as the regularization coefficient dominates the weak-convexity degree of the prediction score. In the following proposition, we show that this condition can be enforced by applying Gaussian smoothing.
**Proposition 1**.: _Suppose that \(f_{\mathbf{w},c}\) is \(L\)-Lipschitz, that is, for every \(\mathbf{x}_{1},\mathbf{x}_{2}\): \(|f_{\mathbf{w},c}(\mathbf{x}_{1})-f_{\mathbf{w},c}(\mathbf{x}_{2})|\leq L\|\mathbf{x}_{2}-\mathbf{x}_{1}\|_{2}\), but could be potentially non-differentiable and non-smooth. Then, \(h_{\mathbf{w},c}(\mathbf{x}):=\mathbb{E}[f_{\mathbf{w},c}(\mathbf{x}+\mathbf{Z})]\) where \(\mathbf{Z}\sim\mathcal{N}(\mathbf{0},\sigma^{2}I_{d\times d})\) will be \(\frac{L\sqrt{d}}{\sigma}\)-weakly convex._
Proof.: We postpone the proof to the Appendix.
The above proposition suggests the regularized MoreauGrad which regularizes the neural net function to satisfy the weakly-convex condition through Gaussian smoothing.
## 5 Sparse and Group-Sparse MoreauGrad
To further extend the MoreauGrad approach to output sparsely-structured feature saliency maps, we further include an \(L_{1}\)-norm-based penalty term in the Moreau-Yosida regularization and define the following \(L_{1}\)-norm-based sparse and group-sparse Moreau envelope.
**Definition 4**.: _For a function \(g:\mathbb{R}^{d}\to\mathbb{R}\) and regularization coefficients \(\rho,\eta>0\), we define \(L_{1}\)-Moreau envelope \(g_{L_{1}}^{\rho,\eta}\):_
\[g_{L_{1}}^{\rho,\eta}(\mathbf{x})\,:=\min_{\widetilde{\mathbf{x}}\in\mathbb{R }^{d}}\,g(\widetilde{\mathbf{x}})+\frac{1}{2\rho}\bigl{\|}\widetilde{\mathbf{ x}}-\mathbf{x}\bigr{\|}_{2}^{2}+\eta\bigl{\|}\widetilde{\mathbf{x}}- \mathbf{x}\bigr{\|}_{1}.\]
_We also define \(L_{2,1}\)-Moreau envelope \(g_{L_{2,1}}^{\rho,\eta}\) as_
\[g_{L_{2,1}}^{\rho,\eta}(\mathbf{x})\,:=\,\min_{\widetilde{\mathbf{x}}\in \mathbb{R}^{d}}\,g(\widetilde{\mathbf{x}})+\frac{1}{2\rho}\bigl{\|}\widetilde{ \mathbf{x}}-\mathbf{x}\bigr{\|}_{2}^{2}+\eta\bigl{\|}\widetilde{\mathbf{x}}- \mathbf{x}\bigr{\|}_{2,1}.\]
_In the above, the group norm \(\|\cdot\|_{2,1}\) is defined as \(\|\mathbf{x}\|_{2,1}:=\sum_{i=1}^{t}\|\mathbf{x}_{S_{i}}\|_{2}\) for given subsets \(S_{1},\ldots,S_{t}\subseteq\{1,\ldots,d\}\)._
**Definition 5**.: _Given regularization coefficients \(\rho,\eta>0\), we define the Sparse MoreauGrad (\(\mathrm{S-MG}_{\rho,\eta}\)) and Group-Sparse MoreauGrad (\(\mathrm{GS-MG}_{\rho,\eta}\)) interpretations as_
\[\mathrm{S-MG}_{\rho,\eta}(f_{\mathbf{w},c},\mathbf{x})\,:= \frac{1}{\rho}\bigl{(}\,\widetilde{\mathbf{x}}_{L_{1}}^{*}(\mathbf{x})- \mathbf{x}\,\bigr{)},\] \[\mathrm{GS-MG}_{\rho,\eta}(f_{\mathbf{w},c},\mathbf{x})\,:= \frac{1}{\rho}\bigl{(}\,\widetilde{\mathbf{x}}_{L_{2,1}}^{*}(\mathbf{x})- \mathbf{x}\,\bigr{)},\]
_where \(\widetilde{\mathbf{x}}_{L_{1}}^{*}(\mathbf{x}),\,\widetilde{\mathbf{x}}_{L_{2,1}}^{*}(\mathbf{x})\) denote the optimal solutions to the optimization tasks of \(f_{\mathbf{w},c,L_{1}}^{\rho,\eta}(\mathbf{x}),\,f_{\mathbf{w},c,L_{2,1}}^{ \rho,\eta}(\mathbf{x})\), respectively._
In Theorem 2, we extend the shown results for the standard Moreau envelope to our proposed \(L_{1}\)-norm-based extensions of the Moreau envelope. Here, we use \(\text{ST}_{\alpha}\) and \(\text{GST}_{\alpha}\) to denote sparse and group-sparse soft-thresholding functions defined entry-wise and group-entry-wise as
\[\text{ST}_{\alpha}(\mathbf{x})_{i} :=\begin{cases}0&\text{ if }|x_{i}|\leq\alpha\\ x_{i}-\text{sign}(x_{i})\alpha&\text{ if }|x_{i}|>\alpha,\end{cases}\] \[\text{GST}_{\alpha}(\mathbf{x})_{S_{i}} :=\begin{cases}\mathbf{0}&\text{ if }\|\mathbf{x}_{S_{i}}\|_{2}\leq \alpha\\ \big{(}1-\frac{\alpha}{\|\mathbf{x}_{S_{i}}\|_{2}}\big{)}\mathbf{x}_{S_{i}}& \text{ if }\|\mathbf{x}_{S_{i}}\|_{2}>\alpha.\end{cases}\]
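Both operators have simple closed forms; a minimal PyTorch sketch (illustrative, with a hypothetical group-index representation) is:

```python
import torch

def soft_threshold(x, alpha):
    """ST_alpha: shrink every entry toward zero by alpha, zeroing small entries."""
    return torch.sign(x) * torch.clamp(x.abs() - alpha, min=0.0)

def group_soft_threshold(x, alpha, groups):
    """GST_alpha: shrink each group's l2 norm by alpha, zeroing small groups."""
    out = torch.zeros_like(x)
    for idx in groups:
        norm = x[idx].norm()
        if norm > alpha:
            out[idx] = (1.0 - alpha / norm) * x[idx]
    return out
```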
**Theorem 2**.: _Suppose that \(g:\mathbb{R}^{d}\to\mathbb{R}\) is a \(\lambda\)-weakly convex function. Then, assuming that \(0<\rho<\frac{1}{\lambda}\), Theorem 1's parts 1 and 3 will further hold for the sparse Moreau envelope \(g_{L_{1}}^{\rho,\eta}\) and group-sparse Moreau envelope \(g_{L_{2,1}}^{\rho,\eta}\) and their optimization problems' optimal solutions \(\widetilde{\mathbf{x}}_{\rho,\eta,L_{1}}^{*}(\mathbf{x})\) and \(\widetilde{\mathbf{x}}_{\rho,\eta,L_{2,1}}^{*}(\mathbf{x})\). To parallel Theorem 1's part 2 for the \(L_{1}\)-norm-based Moreau envelopes, the following identities hold_
\[\text{ST}_{\rho\eta}\big{(}\!-\!\rho\nabla g_{L_{1}}^{\rho,\eta}( \mathbf{x})\big{)} = \widetilde{\mathbf{x}}_{\rho,\eta,L_{1}}^{*}(\mathbf{x})- \mathbf{x},\] \[\text{GST}_{\rho\eta}\big{(}\!-\!\rho\nabla g_{L_{2,1}}^{\rho, \eta}(\mathbf{x})\big{)} = \widetilde{\mathbf{x}}_{\rho,\eta,L_{2,1}}^{*}(\mathbf{x})- \mathbf{x}.\]
Proof.: We defer the proof to the Appendix.
**Corollary 2**.: _Suppose that the prediction score function \(f_{\mathbf{w},c}\) is \(\lambda\)-weakly convex. Assuming that \(0<\rho<\frac{1}{\lambda}\), the Sparse MoreauGrad \(\text{S-MG}_{\rho,\eta}\) and Group-Sparse MoreauGrad \(\text{GS-MG}_{\rho,\eta}\) interpretations will be robust to every norm-bounded perturbation \(\|\boldsymbol{\delta}\|_{2}\leq\epsilon\) as:_
\[\big{\|}\text{S-MG}_{\rho,\eta}(\mathbf{x}+\boldsymbol{\delta})-\text{S-MG}_{\rho,\eta}(\mathbf{x})\big{\|}_{2} \leq \frac{\epsilon}{\min\big{\{}\rho,\frac{1}{\lambda}-\rho\big{\}}},\qquad\big{\|}\text{GS-MG}_{\rho,\eta}(\mathbf{x}+\boldsymbol{\delta})-\text{GS-MG}_{\rho,\eta}(\mathbf{x})\big{\|}_{2} \leq \frac{\epsilon}{\min\big{\{}\rho,\frac{1}{\lambda}-\rho\big{\}}}.\]
To compute the Sparse and Group-Sparse MoreauGrad, we propose applying the proximal gradient descent algorithm as described in Algorithm 1. Note that Algorithm 1 applies the soft-thresholding function as the proximal operator for the \(L_{1}\)-norm function present in Sparse MoreauGrad.
```
Input: data \(\mathbf{x}\), label \(c\), classifier \(f_{\mathbf{w}}\), regularization coeff. \(\rho\), stepsize \(\gamma\), sparsity coeff. \(\eta\), noise std. parameter \(\sigma\), number of updates \(T\)
Initialize \(\mathbf{x}^{(0)}=\mathbf{x}\)
for \(t=0,\ldots,T\) do
    if Regularized Mode then
        Draw noise vectors \(\mathbf{z}_{1},\ldots,\mathbf{z}_{m}\sim\mathcal{N}(\mathbf{0},\sigma^{2}I_{d\times d})\)
        Compute \(\mathbf{g}_{t}=\frac{1}{m}\sum_{i=1}^{m}\nabla f_{\mathbf{w},c}(\mathbf{x}^{(t)}+\mathbf{z}_{i})\)
    else
        Compute \(\mathbf{g}_{t}=\nabla f_{\mathbf{w},c}(\mathbf{x}^{(t)})\)
    end
    Update \(\mathbf{x}^{(t+1)}\leftarrow(1-\frac{\gamma}{\rho})\mathbf{x}^{(t)}-\gamma(\mathbf{g}_{t}-\frac{1}{\rho}\mathbf{x})\)
    if Sparse Mode then
        Update \(\mathbf{x}^{(t+1)}\leftarrow\text{SoftThreshold}_{\gamma\eta}\big{(}\mathbf{x}^{(t+1)}-\mathbf{x}\big{)}+\mathbf{x}\)
    end
end
Output \(\text{MG}(\mathbf{x})=\frac{1}{\rho}\big{(}\mathbf{x}^{(T)}-\mathbf{x}\big{)}\)
```
**Algorithm 1** MoreauGrad Interpretation
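A compact PyTorch rendering of Algorithm 1 is sketched below. It is our own reading of the pseudocode, not the paper's released implementation; `f` is assumed to be a batched classifier, `x` a plain input tensor, and all default hyperparameter values are illustrative only.

```python
import torch

def moreau_grad(f, x, c, rho=1.0, gamma=0.1, eta=0.0, sigma=0.0, m=8, T=100):
    """Sketch of Algorithm 1. eta > 0 switches on Sparse Mode and
    sigma > 0 switches on the Gaussian-smoothed (Regularized) Mode."""
    x_t = x.clone()
    for _ in range(T):
        x_in = x_t.detach().requires_grad_(True)
        if sigma > 0:  # Regularized Mode: average gradients under Gaussian noise
            noisy = x_in.unsqueeze(0) + sigma * torch.randn(m, *x.shape)
            f(noisy)[:, c].sum().backward()
            g = x_in.grad / m
        else:
            f(x_in.unsqueeze(0))[0, c].backward()
            g = x_in.grad
        # inner gradient-descent step on f(x~) + ||x~ - x||^2 / (2 rho)
        x_t = (1 - gamma / rho) * x_t.detach() - gamma * (g - x / rho)
        if eta > 0:  # Sparse Mode: soft-threshold the displacement x~ - x
            d = x_t - x
            x_t = torch.sign(d) * torch.clamp(d.abs() - gamma * eta, min=0.0) + x
    return (x_t - x) / rho
```

For instance, `moreau_grad(net, img, c=pred, rho=1.0)` would return a vanilla MoreauGrad map, and a positive `eta` a Sparse MoreauGrad map.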
## 6 Numerical Results
We conduct several numerical experiments to evaluate the performance of the proposed MoreauGrad. Our designed experiments focus on the smoothness, sparsity, and robustness properties of MoreauGrad interpretation maps as well as the feature maps of several standard baselines. In the following, we first describe the numerical setup in our experiments and then present the obtained numerical results on the qualitative and quantitative performance of interpretation methods.
### Experiment Setup
In our numerical evaluation, we use the following standard image datasets: CIFAR-10 [26] consisting of 60,000 labeled samples with 10 different labels (50,000 training samples and 10,000 test samples), and ImageNet-1K [27] including 1.4 million labeled samples with 1,000 labels (10,000 test samples and 1.34 million training samples). For CIFAR-10 experiments, we trained a standard ResNet-18 [28] neural network with the softplus activation. For ImageNet experiments, we used an EfficientNet-b0 network [29] pre-trained on the ImageNet training data. In our experiments, we compared the MoreauGrad schemes with the following baselines: 1) the simple gradient [4], 2) Integrated Gradients [5], 3) DeepLIFT [6], 4) SmoothGrad [9], 5) Sparsified SmoothGrad [24], 6) RelEx [18]. We note that for baseline experiments we adopted the official implementations and conducted the experiments with the hyperparameters suggested in the respective works.
Figure 3: Visualization of Sparse MoreauGrad with various coefficient \(\eta\)’s. \(\eta=0\) is Vanilla MoreauGrad.
Figure 2: Visualization of MoreauGrad with various coefficient \(\rho\)’s. \(\rho=0\) is Simple Gradient.
### Effect of Smoothness and Sparsity Parameters
We ran the numerical experiments for unregularized Vanilla MoreauGrad with multiple smoothness coefficient \(\rho\) values to show the effect of the Moreau envelope's regularization. Figure 2 visualizes the effect of different \(\rho\) on the Vanilla MoreauGrad saliency map. As can be seen in this figure, the saliency map qualitatively improves by increasing the value of \(\rho\) from 0 to 1. Please note that for \(\rho=0\), the MoreauGrad simplifies to the simple gradient interpretation. However, as shown in Theorem 1 the proper performance of Vanilla MoreauGrad requires choosing a properly bounded \(\rho\) value, which is consistent with our observation that when \(\rho\) becomes too large, the Moreau envelope will be computationally difficult to optimize and the quality of interpretation maps could deteriorate to some extent. As numerically verified in both CIFAR-10 and ImageNet experiments, we used the rule of thumb \(\rho=\frac{1}{\sqrt{\mathbb{E}[\|\mathbf{X}\|_{2}]}}\) measured over the empirical training data to set the value of \(\rho\), which is equal to 1 for the normalized samples in our experiments.
Regarding the sparsity hyperparameter \(\eta\) in Sparse and Group-Sparse MoreauGrad experiments, we ran several experimental tests to properly tune the hyperparameter. Note that a greater coefficient \(\eta\) enforces more strict sparsity or group-sparsity in the MoreauGrad interpretation, and the degree of sparsity could be simply adjusted by changing this coefficient \(\eta\). As shown in Figure 3, in our experiments with different \(\eta\) coefficients the interpretation map becomes sparser as we increase the \(L_{1}\)-norm penalty coefficient \(\eta\). Similarly, to achieve a group-sparse interpretation, we used \(L_{2,1}\)-regularization on groups of adjacent pixels as discussed in Definition 4. The effect of the group-sparsity coefficient was similar to the sparse case in our experiments, as fewer pixel groups took non-zero values and the output interpretations showed more structured interpretation maps when choosing a larger coefficient \(\eta\). The results with different group-sparsity hyperparameters are demonstrated in Figure 4.
### Qualitative Comparison of MoreauGrad vs. Gradient-based Baselines
In Figure 5, we illustrate the Sparse, and Group-Sparse MoreauGrad interpretation outputs as well as the saliency maps generated by the gradient-based baselines. The results demonstrate that MoreauGrad generates qualitatively sharp and, in the case of Sparse and Group-Sparse MoreauGrad, sparse interpretation maps. As shown in Figure 5, promoting sparsity in the MoreauGrad interpretation maps has improved the visual quality, and managed to erase the less relevant pixels like the background ones. Additionally, in the case of Group-Sparse MoreauGrad, the maps exhibit both sparsity and connectivity of selected pixels.
Figure 4: Visualization of Group-Sparse MoreauGrad maps with various coefficient \(\eta\)’s.
Figure 5: Qualitative comparison between Sparse, Group-Sparse MoreauGrad and the baselines.
### Robustness
We qualitatively and quantitatively evaluated the robustness of the MoreauGrad interpretation. To assess the empirical robustness of interpretation methods, we adopt an \(L_{2}\)-bounded interpretation attack method defined by [24]. Also, for quantifying the empirical robustness, we adopt three robustness metrics. The first metric is the Euclidean distance of the normalized interpretations before and after the attack:
\[D(I(\mathbf{x}),I(\mathbf{x}^{\prime}))=\big{\|}\frac{I(\mathbf{x})}{\|I( \mathbf{x})\|_{2}}-\frac{I(\mathbf{x}^{\prime})}{\|I(\mathbf{x}^{\prime})\|_ {2}}\big{\|}_{2} \tag{6}\]
Note that a larger distance between the normalized maps indicates a smaller similarity and a higher vulnerability of the interpretation method to adversarial attacks.
The second metric is the top-k intersection ratio. This metric is another standard robustness measure used in [7, 24]. This metric measures the ratio of pixels that remain salient after the interpretation attack. A robust interpretation is expected to preserve most of the salient pixels under an attack. The third metric is the structural similarity index measure (SSIM) [30]. A larger SSIM value indicates that the two input maps are more perceptively similar.
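A minimal sketch of the first two metrics is shown below (illustrative code, not the paper's implementation; for the third metric one could use, e.g., `skimage.metrics.structural_similarity`):

```python
import torch

def normalized_distance(i_x, i_xp):
    """Eq. (6): l2 distance between the l2-normalized saliency maps."""
    a = i_x.flatten() / i_x.flatten().norm()
    b = i_xp.flatten() / i_xp.flatten().norm()
    return (a - b).norm()

def topk_intersection(i_x, i_xp, k):
    """Fraction of the k most salient pixels that stay salient after the attack."""
    top_a = set(i_x.abs().flatten().topk(k).indices.tolist())
    top_b = set(i_xp.abs().flatten().topk(k).indices.tolist())
    return len(top_a & top_b) / k
```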
Using the above metrics, we compared the MoreauGrad schemes with the baseline methods. As qualitatively shown in Figure 7, using the same attack magnitude, the MoreauGrad interpretations are mostly similar before and after the norm-bounded attack. The qualitative robustness of MoreauGrad seems satisfactory compared to the baseline methods. Finally, Figure 6 presents a quantitative comparison of the robustness measures for the baselines and the proposed MoreauGrad on the CIFAR-10, Tiny-ImageNet, and ImageNet datasets. As shown by these measures, MoreauGrad outperforms the baselines in terms of the robustness metrics.

Figure 6: Quantitative robustness comparison between MoreauGrad and the baselines.

Figure 7: Visualization of robustness against interpretation attacks. The top and bottom rows show original and attacked maps.
## 7 Conclusion
In this work, we introduced MoreauGrad as an optimization-based interpretation method for deep neural networks. We demonstrated that MoreauGrad can be flexibly combined with \(L_{1}\)-regularization methods to output sparse and group-sparse interpretations. We further showed that the MoreauGrad output will enjoy robustness against input perturbations. While our analysis focuses on the sparsity and robustness of the MoreauGrad explanation, studying the consistency and transferability of MoreauGrad interpretations is an interesting future direction. Moreover, the application of MoreauGrad to convex and norm-regularized neural nets could be another topic for future study. Finally, our analysis of \(\ell_{1}\)-norm-based Moreau envelope could find independent applications to other deep learning problems.
|
2303.03280 | Graph Neural Network Autoencoders for Efficient Quantum Circuit
Optimisation | Reinforcement learning (RL) is a promising method for quantum circuit
optimisation. However, the state space that has to be explored by an RL agent
is extremely large when considering all the possibilities in which a quantum
circuit can be transformed through local rewrite operations. This state space
explosion slows down the learning of RL-based optimisation strategies. We
present for the first time how to use graph neural network (GNN) autoencoders
for the optimisation of quantum circuits. We construct directed acyclic graphs
from the quantum circuits, encode the graphs and use the encodings to represent
RL states. We illustrate our proof of concept implementation on
Bernstein-Vazirani circuits and, from preliminary results, we conclude that our
autoencoder approach: a) maintains the optimality of the original RL method; b)
reduces by 20 \% the size of the table that encodes the learned optimisation
strategy. Our method is the first realistic first step towards very large scale
RL quantum circuit optimisation. | Ioana Moflic, Vikas Garg, Alexandru Paler | 2023-03-06T16:51:30Z | http://arxiv.org/abs/2303.03280v1 | # Graph Neural Network Autoencoders for Efficient Quantum Circuit Optimisation
###### Abstract
Reinforcement learning (RL) is a promising method for quantum circuit optimisation. However, the state space that has to be explored by an RL agent is extremely large when considering all the possibilities in which a quantum circuit can be transformed through local rewrite operations. This state space explosion slows down the learning of RL-based optimisation strategies. We present for the first time how to use graph neural network (GNN) autoencoders for the optimisation of quantum circuits. We construct directed acyclic graphs from the quantum circuits, encode the graphs and use the encodings to represent RL states. We illustrate our proof-of-concept implementation on Bernstein-Vazirani circuits and, from preliminary results, we conclude that our autoencoder approach: a) maintains the optimality of the original RL method; b) reduces by 20 % the size of the table that encodes the learned optimisation strategy. Our method is a realistic first step towards very large scale RL quantum circuit optimisation.
## I Introduction
Scalable optimisation methods for quantum circuits are an open problem. The NISQ generation of quantum computers, even when operating on thousands of qubits, will not be fully error-corrected, such that structural properties of the compiled circuits (e.g. depth, number of gates) play a major role in estimating the failure rate of the entire computation. In particular, deeper circuits have a higher failure rate. Without full quantum error-correction, one should aggressively compile the circuits to reduce their depth, and then use error mitigation methods (e.g. [1; 2]).
Consequently, compilation scalability is necessary for achieving the fault-tolerant execution of the first large scale quantum computation. The technical road maps project that quantum computers will operate thousands of qubits within the next few years, but there is a gap between the speed of the current generation of quantum circuit compilers and the speed required for circuits operating on thousands of qubits.
### Motivation
Machine learning techniques have started being applied to quantum circuit compilation and optimisation (e.g. [3; 4; 5]). The general approach is to invest large amounts of computational power into the training of models that can then be used for fast and efficient quantum circuit compilation.
Machine learning techniques are increasingly and successfully applied to classical circuit design automation [6]. Reinforcement Learning (RL) for the compilation of quantum gate sequences is presented in [7]. At very large scale, RL has been successfully applied for classical chip design [8]. Small scale compilation of quantum circuits is shown in [7; 9] and the potential of RL for quantum circuit compiler optimisation has been illustrated by [10]. Large scale applications of RL with respect to quantum circuits have not been demonstrated to date, and our work is a first step towards scalable quantum circuit optimisation.
This paper is organised as follows: Sections I.2-I.4 introduce the background necessary for presenting the method, which we detail in Section II. Therein, we focus on the data structure necessary for training the autoencoder that is afterwards embedded into RL. Finally, we present preliminary results collected by evaluating our implementation on benchmarking circuits.
### Quantum Circuit Optimisation
Quantum circuit optimisation using template-based rewrite rules [11] is widely used in quantum circuit software (e.g. Google Cirq [12], IBM Qiskit [13]). An input circuit is gradually transformed by applying quantum gate identities (Fig. 2) until a given optimisation criterion is met. The gate set and the size of the input circuits influence the performance of the procedure. The number of permitted transformations blows up the size of the optimisation search space. Consequently, although this kind of optimisation performs well, it is challenging to improve its scaling.

Figure 1: The learning loop of the RL agent with embedded autoencoder for the representation of the environment observation. At each step of the RL algorithm, the agent chooses an action \(A_{T}\) to apply and is given the reward \(R_{T+1}\) and the encoding \(AE(S_{T+1})\) of the transformed circuit.
### Reinforcement Learning
Quantum circuit optimisation can be framed as a RL problem. Given the quantum circuit and a range of templates to be applied, an agent would learn an optimal policy in a trial and error approach. The circuit represents a fully observable environment and applicable templates at a given time step are the actions. The agent is selecting actions from an action space, which is formed of circuit rewrite rules (templates) as illustrated in Fig. 2. Each action transforms a quantum circuit into a functionally equivalent, but structurally different quantum circuit. The structure of the circuit at a given time step is expressed by an observation of the environment.
The state space in RL is formed by all the states encountered by the agent during training and all the actions that allowed the agent to transition between those states form the action space. In canonical RL, the mapping between states and actions is stored in a table, also called Q-Table, whereas in deep RL the mapping is learnt by a machine learning model.
The Q-Table is effectively the encoding of the optimisation algorithm that the agent learned during training. The size of the Q-Table increases faster at the beginning of learning, when the agent is _exploring_ the environment. The Q-Table's size is increasing slower towards the end of the learning: the agent will _exploit_ the knowledge it accumulated.
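A minimal tabular Q-learning step in this setting might look as follows. This is an illustrative sketch, not the paper's code: `encode`, `apply_template`, and `reward` are hypothetical stand-ins for the circuit-identification function, the rewrite-rule application, and the depth-based reward.

```python
import random

def q_step(q_table, circuit, templates, encode, apply_template, reward,
           lr=0.1, discount=0.9, eps=0.1):
    """One Q-Learning update; encode() maps a circuit to its state key, so a
    lossy encoding lets similar circuits share the same Q-Table row."""
    s = encode(circuit)
    if s not in q_table:  # the Q-Table grows whenever a new state is met
        q_table[s] = [0.0] * len(templates)
    if random.random() < eps:                      # explore
        a = random.randrange(len(templates))
    else:                                          # exploit
        a = max(range(len(templates)), key=lambda i: q_table[s][i])
    new_circuit = apply_template(circuit, templates[a])
    s2 = encode(new_circuit)
    future = max(q_table[s2]) if s2 in q_table else 0.0
    q_table[s][a] += lr * (reward(circuit, new_circuit)
                           + discount * future - q_table[s][a])
    return new_circuit
```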
### Graph Neural Networks
Graph neural networks (GNN) [14; 15] are a type of neural network capable of performing machine learning tasks on graph-structured data. GNNs rely on message passing between the nodes of the graph, where the messages, in the form of vectors, are updated using neural networks. At each message passing step, nodes aggregate the information received from neighbouring nodes. This type of neural network is commonly used for link prediction [16], community detection [17], graph classification [18], etc. Graph autoencoders are autoencoders which use GNNs for their encoder and decoder components.
Variational autoencoders for directed acyclic graphs (D-VAE) [19] are effective in finding an encoding for DAGs into the latent space of the autoencoder. Grammar Variational Autoencoders [20], as an example of D-VAEs, encode and decode molecules into and from a continuous space, and by searching in that specific space, valid, optimised molecule forms can be found. D-VAEs are also useful for learning distributions of approximate, lossy circuit representations. In the context of RL, the goal is to find a lossy state representation of the RL environment that is precise enough that the RL agent can differentiate between the consequences of its actions and, at the same time, sufficiently imprecise to be meaningful for more than one RL state.
### Contributions
This work is the first to present GNN autoencoders [19] for speeding up the RL optimisation of quantum circuits. We achieve a more efficient (compact) RL optimisation by compressing the Q-Table using a GNN autoencoder [21].
We use autoencoders to describe quantum circuits through a probabilistic encoding. Instead of building a deterministic function which outputs a unique encoding for each circuit, we use the encoder to describe a probability distribution for each snapshot of the RL environment's state.
Embedding autoencoders into the RL procedure is a recent method [22; 23] which has not been explored, to the best of our knowledge, for quantum circuit optimisation. Our approach is promising in the context of RL-optimisation of quantum circuits: a) dimensionality reduction of the Q-table [24]; b) improving convergence time by finding an efficient encoding while maintaining a good performance after decoding.
Figure 2: a) Unoptimized and optimized Bernstein-Vazirani circuit; b) Two Hadamard gates cancelling; c) Two CNOT gates cancelling; d) Parallelizing CNOTs sharing the same control qubit; e) reversing the direction of a CNOT using Hadamard gates.
We have implemented our method (Section III) and obtained empirical evidence about its scalability and efficiency. We demonstrate scalability by training our RL agent on Bernstein-Vazirani circuits operating on 2-5 qubits.
## II Methods
We present the methods we used to implement the workflow illustrated in Fig. 1. Without loss of generality, we restrict the quantum circuits we are optimising to ICM+H circuits. ICM circuits [25] are related to measurement-based graph-state quantum circuits [26]. ICM circuits are computationally universal and consist of single qubit initialisation, CNOT gates, single qubit measurements. In order to implement the correct computation, ICM circuits will also rely on classical feedback. ICM+H circuits are ICM circuits which include single qubit Hadamard gates, too.
Herein we use a limited number of templates, circuit rewrite rules, as illustrated in Fig. 2b-e). For benchmarking purposes (see the Results section), we will use Bernstein-Vazirani (BV) circuits (Fig. 2a). The BV circuits have practically the ICM+H form. BV circuits have a known optimum depth of three, which can be achieved if the templates from Fig. 2 are used in an optimal order.
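For concreteness, a textbook BV circuit can be built in Cirq as below (an illustrative sketch, not the paper's code; the sequential CNOT chain is the unoptimized form whose depth the templates of Fig. 2 reduce to three):

```python
import cirq

def bernstein_vazirani(secret):
    """BV circuit for a secret bit string, with the ancilla prepared in |->."""
    n = len(secret)
    q = cirq.LineQubit.range(n + 1)
    ops = [cirq.X(q[n]), cirq.H.on_each(*q)]
    ops += [cirq.CNOT(q[i], q[n]) for i, bit in enumerate(secret) if bit]
    ops += [cirq.H.on_each(*q[:n])]
    return cirq.Circuit(ops)

print(bernstein_vazirani([1, 0, 1]))
```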
In the following, we discuss how our RL framework is operating, how to obtain lossy circuit representations with an autoencoder, and the method to include the autoencoder into the training of RL.
### Reinforcement Learning with a Lossy Representation
Our RL method is based on Q-Learning, which is a model-free algorithm [27]. The RL environment is explored by an agent, which chooses an action (circuit template) to apply at each step. There is a reward associated with each action, and the value of the reward reflects the environment's response to the action. We use relatively small learning environments (compared to computer games where reinforcement learning has been successfully applied), but there still exists a combinatorial explosion in the number of ways the templates can be applied.
Each of the agent's actions transforms the structure of the circuit. Assuming that \(g_{i}\) is the circuit before applying the template \(t\), and that \(g_{o}\) is the circuit afterwards, we can define the states \(f(g_{i})\) and \(f(g_{o})\). The RL agent will encode in the QTable the transition \(f(g_{i})\xrightarrow{t}f(g_{o})\).

The \(f\) function is used for identifying circuits. For example, \(f_{s}\) might be the character string representation of the circuit's QASM gate list (e.g. for two CNOTs in the circuit, "cx 0 1, cx 1 0"). The character string representation can identify each encountered circuit uniquely, and this can be a disadvantage because of the very large number of encountered states and the size of the resulting QTable. States are added to the table if they did not exist beforehand. For example, assuming that \(f(g_{i})\) did exist, and that \(f(g_{o})\) is novel, then the number of encountered states in the table increases by one - and this can be the case after each application of a template.

We limit the growth of the QTable size by using lossy versions of \(f\). A lossy representation may seem inefficient, but it has the advantage of abstracting circuits into classes, where a class denotes a set of circuits with similar structure. This is a powerful property because a specific optimisation template is very likely to be beneficial for similar circuits. For example, assuming that \(g_{i}\) and \(g_{o}\) have the same meaning as in the example above, but that \(f(g_{i})=f(g_{o})\), then the number of encountered states would not increase. This is only to say that a particular template \(t\) leaves, from the perspective of \(f\), the circuits unchanged: \(g_{i}\) and \(g_{o}\) belong to the same circuit class.
### Formalising DAGs for the Encoder
Quantum circuits can be represented as directed acyclic graphs (DAGs). The goal is to treat each quantum circuit as a data point and to encode the point into a latent distribution of the autoencoder (see next section). To this end, we use graph neural networks (GNN) [14] which are fed with formally correct DAG representations of quantum circuits. Fig. 3 illustrates the three additional node types (trgt_op, ctrl_op, helper) we use in order to have DAGs which are correct from the perspective of quantum circuit representations.
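As a starting point for such representations, Qiskit already converts circuits to DAGs with one node per gate (an illustrative snippet, assuming a recent Qiskit version); the extra trgt_op/ctrl_op/helper nodes of Fig. 3 are then introduced on top of this representation.

```python
from qiskit import QuantumCircuit
from qiskit.converters import circuit_to_dag

qc = QuantumCircuit(2)
qc.cx(0, 1)
qc.cx(1, 0)

dag = circuit_to_dag(qc)
for node in dag.topological_op_nodes():  # one node per CNOT here
    print(node.name, node.qargs)
```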
### Training the Autoencoder
DAGs have a dependency structure, and one can analyse them as a single computation formed by the topological sorting of the nodes. Our D-VAE employs an asynchronous message passing scheme to injectively encode the DAG's computation [19]. For encoding, we use the information about the DAG's node types (input, output, control, target, hadamard, helper), and the edges existing between the nodes.
For each node of the quantum circuit's DAG, we use a Gated Recurrent Unit (GRU) [28] to compute a corresponding hidden state. The latter is a function of the hidden states collected from the node's neighbours. The encoding process guarantees two properties [19]: 1) two isomorphic DAGs have the same encoding; 2) the encoding does not differentiate between two circuits \(g_{1}\) and \(g_{2}\) as long as they represent the same computation. The D-VAE finds a single vector encoding for the two similar graphs. We are using this property for representing similar RL states.
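A heavily simplified sketch of this asynchronous message passing is shown below. It is our own simplification of D-VAE: predecessor states are aggregated by a plain sum rather than D-VAE's gated sum, and the last node in topological order is read out as the latent code.

```python
import torch
import torch.nn as nn

class DagGRUEncoder(nn.Module):
    """Encode a DAG by visiting nodes in topological order; each node's hidden
    state is a GRU update of its type embedding given its predecessors' states."""
    def __init__(self, num_node_types=6, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(num_node_types, hidden)
        self.cell = nn.GRUCell(hidden, hidden)
        self.to_latent = nn.Linear(hidden, 2 * hidden)  # mean and log-variance

    def forward(self, node_types, topo_order, predecessors):
        h = {}
        for v in topo_order:
            msg = sum((h[u] for u in predecessors[v]),
                      torch.zeros(1, self.cell.hidden_size))
            h[v] = self.cell(self.embed(node_types[v]).unsqueeze(0), msg)
        mu, logvar = self.to_latent(h[topo_order[-1]]).chunk(2, dim=-1)
        return mu, logvar

# toy DAG with three nodes 0 -> 1 -> 2 and six node types as in the text
enc = DagGRUEncoder()
mu, logvar = enc({0: torch.tensor(0), 1: torch.tensor(2), 2: torch.tensor(1)},
                 topo_order=[0, 1, 2], predecessors={0: [], 1: [0], 2: [1]})
```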
The decoder uses the same asynchronous message passing scheme as the encoder, but in reverse. The decoder reconstructs a generated DAG node by node, after sampling a node type distribution and an edge probability distribution from the autoencoder's latent space.
The D-VAE is trained by backpropagating through the GRU's parameters the partial derivatives of a loss function \(\mathcal{L}\). In the equation below, \(\alpha\) and \(\gamma\) are scaling factors, \(g\) is the original DAG that is encoded, \(g^{\prime}\) is the decoded DAG, \(\mathcal{R}(g,g^{\prime})\) is the reconstruction error function which uses binary cross-entropy between the node type and edge probability distributions of \(g\) and \(g^{\prime}\), and \(\mathcal{E}(g,g^{\prime})\) is the edge edit distance between \(g\) and \(g^{\prime}\):
\[\mathcal{L}(g,g^{\prime})=\alpha\,\mathcal{R}(g,g^{\prime})+\gamma\,\mathcal{E}(g,g^{\prime})\]
## III Results
We implemented our method using Google Cirq [12] (to apply the template rewrite rules), IBM Qiskit [13] (to convert a quantum circuit to a DAG), OpenAI Gym[29] (the RL engine) and a custom variant of the variational autoencoder presented in [19]. The autoencoder is used within the RL workflow to compute the representation of the encountered RL states.
We compare the effectiveness and speed of our method with a vanilla RL Q-Learning implementation, and use the following benchmarking procedure. We train an RL agent, called \(R_{s}\), with a non-lossy circuit representation (e.g. character string). After the training we collect all the states from the QTable and use these for training a GNN autoencoder. Let \(l_{s}\) be the number of states in this QTable. We repeat the training of an RL agent, called \(R_{a}\), from scratch, but this time using the autoencoder, and obtain \(l_{a}\) states in the QTable. We consider that the autoencoder resulted in a more _effective_ QTable representation if \(l_{a}<l_{s}\).
We used Bernstein-Vazirani circuits for benchmarking purposes. These circuits are advantageous because they have a known global optimum: when all the CNOTs are parallelised and the total depth is three. The goal of the RL method is to learn the strategy that generates the minimum depth circuit after parallelising the CNOT gates (Fig. 2a). This type of CNOT gate parallelism is compatible with surface code error-corrected quantum circuits [30] implemented by braiding or lattice surgery. In order to build an intuition for our results, Fig. 4 illustrates the application of our method for the optimisation of a simple circuit.
Figure 4: Example of how a 2-qubit Bernstein Vazirani circuit might be optimized. \(C_{i}\) are the intermediate states of the circuit obtained by applying the templates from Fig.2. The transitions annotated on the arrows, indicate if a template was applied directly (e.g. H, for cancelling two Hadamards), or in reverse (e.g. H*, for inserting two Hadamards next to each other). A template is applied on a set of qubits (e.g. 1,2) or on all qubits (all). Depending on the choice of the function \(f\), this sequence will be encoded into different QTables – two possibilities are illustrated in Fig.5.
Figure 5: Two different RL state transitions obtained by using different state encodings for the circuit transformations from Fig.4. a) using an exact character string representation there are four states; b) a lossy encoding, such as the one from the autoencoder, might determine that the representations of C2 and C3 are the same, and the that there are two possibilities from starting from C1 into any of those two states.
Figure 3: Every quantum circuit is a DAG, but not every DAG is a valid quantum circuit. Issues arise when sequences of CNOT gates have to be represented as DAGs: there have to be either different graph nodes for control and target, or annotated edges. We choose the first option. a) A DAG where two CNOTs (cx) are applied on the same qubits (q[0] and q[1]). b) Introducing two distinct node types (trgt_op and ctrl_op) in order to differentiate between the CNOT gate orientations. In order to maintain the property that each quantum circuit gate has an equal number of input and output qubits, the DAG nodes have to have an equal number of predecessor and successor nodes – we introduce a _fake wire_ (q[2]) connecting the control and target nodes of the same CNOT gate. c) An additional node type is used (helper) in order to make clear which wire is the _fake_ one when a target and a control node operate on the same pair of wires.
The compression of the QTable might influence the effectiveness of the circuit optimisation. For this reason, we compare the quality of the optimised circuits after using both \(R_{s}\) and \(R_{a}\). We conclude that embedding the autoencoder into the RL procedure is effective, because, for our benchmarking circuits, \(R_{a}\), as well as \(R_{s}\), reach the global optimum. Fig. 6 presents an example of how the achieved circuit depth evolves during the training of an RL agent when using the novel encoded representation.
We are evaluating the compression factor achieved when using the autoencoder. Table I contains preliminary results and shows that the mean decrease of the Q-Table size is around 20% when using the autoencoder.
We conclude, based on our proof-of-concept implementation and the preliminary data, that our approach achieves a practical compression while not sacrificing the circuit optimisation: in Fig. 6, both agents reach the same optimum.
## IV Conclusion
We presented a method for scaling the RL optimisation of quantum circuits. We use a graph neural network autoencoder to obtain compressed representations of the circuits encountered during the training of the RL agent. Preliminary results show that the autoencoder compresses the number of RL states by approximately 20%. The compression does not affect the performance of the RL agent: it reaches the same optimal quantum circuits. Future work will focus on improving the compression and the optimisation performance, and on applying this method to very large scale circuits.
## Acknowledgements
Ioana Moflic and Alexandru Paler were supported with funding from the Defense Advanced Research Projects Agency [under the Quantum Benchmarking (QB) program, award no. HR00112230007 and HR001121S0026 contracts]. The views, opinions and/or findings expressed are those of the authors and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. Government.
|
2302.09484 | Gradient-based Wang-Landau Algorithm: A Novel Sampler for Output
Distribution of Neural Networks over the Input Space | The output distribution of a neural network (NN) over the entire input space
captures the complete input-output mapping relationship, offering insights
toward a more comprehensive NN understanding. Exhaustive enumeration or
traditional Monte Carlo methods for the entire input space can exhibit
impractical sampling time, especially for high-dimensional inputs. To make such
difficult sampling computationally feasible, in this paper, we propose a novel
Gradient-based Wang-Landau (GWL) sampler. We first draw the connection between
the output distribution of a NN and the density of states (DOS) of a physical
system. Then, we renovate the classic sampler for the DOS problem, the
Wang-Landau algorithm, by replacing its random proposals with gradient-based
Monte Carlo proposals. This way, our GWL sampler investigates the
under-explored subsets of the input space much more efficiently. Extensive
experiments have verified the accuracy of the output distribution generated by
GWL and also showcased several interesting findings - for example, in a binary
image classification task, both CNN and ResNet mapped the majority of human
unrecognizable images to very negative logit values. | Weitang Liu, Ying-Wai Li, Yi-Zhuang You, Jingbo Shang | 2023-02-19T05:42:30Z | http://arxiv.org/abs/2302.09484v2 | # Gradient-based Wang-Landau Algorithm: A Novel Sampler for Output Distribution of Neural Networks over the Input Space
###### Abstract
The output distribution of a neural network (NN) over the _entire input space_ captures the complete input-output mapping relationship, offering insights toward a more comprehensive NN understanding. Exhaustive enumeration or traditional Monte Carlo methods for the entire input space can exhibit impractical sampling time, especially for high-dimensional inputs. To make such difficult sampling computationally feasible, in this paper, we propose a novel Gradient-based Wang-Landau (GWL) sampler. We first draw the connection between the output distribution of a NN and the density of states (DOS) of a physical system. Then, we renovate the classic sampler for the DOS problem, the Wang-Landau algorithm, by replacing its random proposals with gradient-based Monte Carlo proposals. This way, our GWL sampler investigates the under-sampled subsets of the input space much more efficiently. Extensive experiments have verified the accuracy of the output distribution generated by GWL and also showcased several interesting findings -- for example, in a binary image classification task, both CNN and ResNet mapped the majority of human unrecognizable images to very negative logit values.
## 1 Introduction
The input-output mapping relationship of a trained neural network (NN) is the key to understanding it. Existing works measure the accuracy of a NN based on such mapping relations over (pre-defined) _subsets_ of the input space, such as in-distribution subsets (Dosovitskiy et al., 2021; Tolstikhin et al., 2021; Steiner et al., 2021; Chen et al., 2021; Zhuang et al., 2022; He et al., 2015), out-of-distribution (OOD) subsets (Liu et al., 2020; Hendrycks and Gimpel, 2016; Hendrycks et al., 2019; Hsu et al., 2020; Lee et al., 2017, 2018), and adversarial subsets (Szegedy et al., 2013; Rozsa et al., 2016; Miyato et al., 2018; Kurakin et al., 2016).
Given the recent trend of applying NNs to open-world, non-IID applications (Cao et al., 2022; Sun and Li, 2022), we argue that it is crucial to obtain the complete _output distribution_ of a trained NN over the _entire input space_. This output distribution can offer a complete picture about the number of inputs mapped to certain output values. Note that the entire input space here includes all kinds of inputs mentioned above and even _human unrecognizable_ inputs (see Figure 2(a)). As a pilot study, we focus on binary classification -- given a trained binary NN classifier, we aim to sample the entire input space to obtain the output distribution, i.e., a histogram that counts the number of input samples mapped to certain logit values, as shown in Fig. 2(b). The sampling procedure would also offer more fine-grained information as side products, such as representative input samples corresponding to a certain range of output values.
A straightforward solution is exhaustive enumeration or traditional Monte Carlo methods (Chen et al., 2014; Welling and Teh, 2011; Li et al., 2016; Xu et al., 2018). However, the sampling time would become impractical, or the sampler could get stuck in a subset of input space, especially for high-dimensional inputs. To overcome these issues, in this paper, we propose a novel sampler called Gradient-based Wang-Landau (GWL) sampling as follows.
Figure 1: The energy density of states (DOS) of a physical system _vs._ the output distribution of a deep neural network.
We first connect the output distribution of a NN to the _density of states_ (DOS) of a physical system through an analogy between the system energy and neural network output, as shown in Figure 1. From the physics point of view, the input \(\mathbf{x}\) to the neural network can be viewed as the configuration \(\mathbf{x}\) of the system; the neural network output (e.g., logit values in binary classifier) \(y(\mathbf{x})\) corresponds to the energy function \(E(\mathbf{x})\); the output distribution of a NN is then analogous to the DOS of a physical system, which is the number of configurations corresponding to the same energy value. The log scale of the DOS is the microcanonical entropy associated with the energy, \(S(E(\mathbf{x}))\).
Our new sampler GWL is a novel renovation of the classic sampler for the DOS problem, Wang-Landau algorithm (Wang & Landau, 2001), where we replace its random proposals with gradient-based Monte Carlo proposals. Given the overwhelming number of human unrecognizable inputs in the entire input space, if one adopts the traditional Monte Carlo proposal in the Wang-Landau algorithm, i.e., by changing pixel values at random, the sampling process is likely to get stuck in this human unrecognizable subset. Thus, we propose to apply a gradient-based proposal following Gibbs-with-Gradients (Grathwohl et al., 2021), which proves to be efficient to propose in-distribution inputs for a trained NN model. This way, our GWL sampler investigates the under-sampled subsets of the input space much more efficiently. The accuracy of GWL has been empirically verified on a small toy dataset -- the output distribution generated by GWL aligns perfectly with the result of exhaustive enumeration.
More importantly, by analyzing the output distribution generated by GWL, we showcase several interesting findings of CNN and ResNet in a binary classification task based on real-world pictures. First, our experiments show that in both CNN and ResNet, the dominant output values are very negative and the vast majority of them correspond to human-unrecognizable input images. This supplies direct evidence to the well-known overconfidence issue in NNs (Nguyen et al., 2015). Second, when we focus on the output values where the in-distribution inputs correspond to, human-unrecognizable inputs still dominate significantly. This result presents significant challenges to the out-of-distribution (OOD) detection problems. Third, we observe a clear background darkness pattern of the representative samples of CNN and ResNet when the output logit value increases, and speculate these models simply utilize such "backdoors" to predict the labels of the digits without truly understanding the semantics of the images.
In summary, we demonstrate that sampling the entire input space to obtain the output distribution of a trained NN is computationally feasible, and it can provide new and interesting insights for future systematic investigation. Our contributions are summarized as follows.
* We tackle the challenging yet important problem to uncover the output distribution of a NN over the entire input space. Such output distribution offers a novel perspective to understand NNs.
* We connect this output distribution to the DOS in physics and successfully renovate the Wang-Landau algorithm using a gradient-based proposal, which is a critical component to sample the entire output space as much as possible, and to improve efficiency.
* We conduct extensive experiments on toy and real-world datasets to confirm the accuracy of our proposed sampler.
* GWL sampler allows for detailed investigation of the input-output mapping of NNs, facilitating further studies systematically.
## 2 Problem definition
In the traditional setting, binary neural classifiers model the class distribution through the logit \(z\). A neural classifier parameterized by \(\theta\) learns \(p_{\theta}(z|\mathbf{x})=\delta(z-y_{\theta}(\mathbf{x}))\) through a function \(y_{\theta}:\mathbf{x}\to z\in\mathbb{R}\), where \(\mathbf{x}\in\Omega\), \(\Omega\subseteq\{0,...,N\}^{D}\) for images, and \(\delta\) is the Dirac delta function. Following the Gibbs-With-Gradients setting, \(\Omega\) is discrete.
The above model does not define the distribution of the data \(\mathbf{x}\). This work aims to obtain the output value distribution of
Figure 2: Input types and the example output distribution for binary classification between digits 0 and 1. The entire input space covers all possible gray-scale images of the same shape. \(y\) is the output (logit) with respect to input \(\mathbf{x}\).
binary classifiers in the entire input space: \(\Omega=\{0,...,N\}^{D}\). Here we assume that the input follows a uniform distribution \(\mu(\mathbf{x})\) over the domain \(\Omega\) of \(\mathbf{x}\). We define the joint distribution
\[p_{\theta}(z,\mathbf{x})=p_{\theta}(z|\mathbf{x})\mu(\mathbf{x}).\]
Our goal is to obtain the logit (output) distribution \(p_{\theta}(z)\), which can be obtained by marginalizing the joint distribution over the input space \(\Omega\):
\[p_{\theta}(z)=\sum_{\Omega}p_{\theta}(z|\mathbf{x})\mu(\mathbf{x}).\]
To sample from the distribution \(p_{\theta}(z)\), we can first sample \(\mathbf{x}_{i}\sim\text{Uniform}(\Omega)\), then condition on the sampled \(\mathbf{x}_{i}\) to obtain \(z_{i}\sim p_{\theta}(z|\mathbf{x}_{i})\). While a uniform sampler in principle can solve this problem, it can take an impractically long time to converge.
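For concreteness, this naive two-step sampler can be sketched as follows (our illustration; `y_theta`, the image shape, and the number of grey levels are assumptions), which is exactly the procedure whose convergence is impractical in high dimensions:

```python
# Minimal sketch of the naive uniform sampler for p(z); y_theta is any
# trained binary classifier mapping an image to a scalar logit.
import numpy as np

def uniform_logit_histogram(y_theta, shape=(28, 28), n_levels=256,
                            n_samples=100000, bins=100):
    logits = np.empty(n_samples)
    for i in range(n_samples):
        x = np.random.randint(0, n_levels, size=shape)  # x_i ~ Uniform(Omega)
        logits[i] = float(y_theta(x))                   # z_i ~ p(z | x_i)
    return np.histogram(logits, bins=bins)              # estimate of p(z)
```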
## 3 Method
In this section, we discuss the connection between our problem and the density of states (DOS), introduce both the Wang-Landau algorithm and the Gibbs-With-Gradients proposal method as background, and present our new sampler, the Gradient-Wang-Landau (GWL) algorithm.
### Connection to Density of States in Physics
In statistical physics, given the energy function \(E:\mathbf{x}\rightarrow\mathcal{E}\in\mathbb{R}\), the DOS \(\rho(\mathcal{E})\) is defined as
\[\rho(\mathcal{E})=\sum_{\mathbf{x}\in\Omega}\delta(\mathcal{E}-E(\mathbf{x})),\]
where \(\delta\) is the Dirac delta function and \(\Omega\) is the domain of \(\mathbf{x}\) where \(\mathbf{x}\) is valid. The DOS can be viewed as a probability distribution in the energy space; its log-probability defines the entropy \(S\):
\[S(\mathcal{E})=\ln(\rho(\mathcal{E})).\]
The Boltzmann constant is taken to be \(1\) in our setting. The DOS is meaningful because many physical quantities depend on the energy or its integrals, but not on the specific input \(\mathbf{x}\).
We associate the neural network output distribution to DOS in physics by making an analogy between the system energy \(\mathcal{E}=E(\mathbf{x})\) and NN output \(z=y(\mathbf{x})\). This connection is based on the observation that the energy function in physics maps an input configuration to a scalar-valued energy; similarly, a binary neural classifier maps an image to a logit. Both the logit and energy are treated as the direct output of the mapping. Other quantities, such as the loss, are derived from the output. The desired output distribution can be obtained similarly as sampling the DOS in physics, which is the count of the configurations given an energy value. The output distribution and DOS are both defined in the entire input space.
### Traditional Samplers Are Not Directly Applicable
Traditional Monte Carlo (MC) samplers (Chen et al., 2014; Welling and Teh, 2011; Li et al., 2016; Xu et al., 2018), in principle, could be applied to sample the output distribution, but they would not be efficient for our study. This is because these algorithms bias the sampler to the more probable domain based on importance sampling. Consequently, a major drawback is that the sampler easily gets "stuck" in some localized distributions, as it is hard for the sampler to overcome the barriers to visit all the possible configurations (or input images in the NN case). This limitation is particularly severe when sampling from multi-modal distributions. Our problem setting, however, not only requires the sampler to sample from a multi-modal distribution; more importantly, the target distribution \(S\) is _unknown_ upfront and the generated samples have to cover the whole output space. Using traditional MC samplers would, in the best case scenario, take an unreasonable time to converge. In the more critical but likely scenario, there is a high risk of obtaining samples that do not truly represent the underlying distribution.
### Wang-Landau algorithm and Gibbs-With-Gradient
**Wang-Landau (WL) algorithm** was originally designed to determine the DOS \(\rho(\mathcal{E})\) of a physical system (Wang and Landau, 2001) when the DOS is not known _a priori_ and must be determined on-the-fly. It is therefore a suitable tool for estimating the true distribution of our NN output, as it is also unknown before the sampling. WL uses a histogram to store the instantaneous estimation \(\tilde{S}\). WL improves the sampling efficiency by using the inverted distribution as the sampling weight \(w(\mathbf{x})\):
\[w(\mathbf{x})\propto\exp(-\tilde{S}(E(\mathbf{x}))).\]
The instantaneous entropy \(\tilde{S}\) is updated iteratively until convergence. At the end of the simulation, when the estimation of the entropy approaches the true value \(S(\mathcal{E})\), the sampler would sample the entire output space uniformly.
Previous work on the sampling of a complex physics system has shown that, with the same number of MC steps, WL was able to successfully produce the correct distribution \(S\) where traditional Metropolis MC sampling fails (Li et al., 2012). This is because WL can overcome energy barriers by accumulating the counts of visits and using their inverse as sampling biases, a mechanism that traditional MC samplers are missing.
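For reference, the core WL loop with a random proposal can be sketched as follows (a standard formulation of the algorithm; `energy`, `propose`, `bin_of`, and the flatness tolerance are placeholders, not the authors' code):

```python
import numpy as np

def wang_landau(energy, propose, bin_of, x0, n_bins, n_stages=20,
                flat_tol=0.2, check_every=10000):
    S = np.zeros(n_bins)              # running estimate of ln(DOS)
    ln_f = 1.0                        # modification factor, halved per stage
    x, b = x0, bin_of(energy(x0))
    for _ in range(n_stages):
        H = np.zeros(n_bins)
        steps = 0
        while True:
            x_new = propose(x)        # random neighbour proposal
            b_new = bin_of(energy(x_new))
            # accept with prob min(1, exp(S[b] - S[b_new])): favours rare bins
            if np.log(np.random.rand()) < S[b] - S[b_new]:
                x, b = x_new, b_new
            S[b] += ln_f
            H[b] += 1
            steps += 1
            if steps % check_every == 0 and H.min() > (1 - flat_tol) * H.mean():
                break                 # histogram flat enough; next stage
        ln_f *= 0.5                   # halve ln f, keep S, reset H
    return S
```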
The **Gibbs-With-Gradients (GWG)** method is used for energy-based models (EBM) by sampling
\[\log p(\mathbf{x})=f(\mathbf{x})-\log Z,\]
where \(f(\mathbf{x})\) is the unnormalized log-probability, \(Z\) is the partition function, and \(\mathbf{x}\) is discrete. A typical Gibbs sampler
iterates every dimension \(x_{i}\) of \(\mathbf{x}\), computes the conditional probability \(p(x_{i}|x_{1},...x_{i-1},x_{i+1},...,x_{D})\), and samples according to this conditional probability.
When the training data \(\mathbf{x}\) are natural images and the EBM learns \(\mathbf{x}\) decently well, the traditional Gibbs sampler wastes much of the computation. For example, most pixel-by-pixel iterations over \(x_{i}\) in MNIST dataset will be on the black background. GWG proposes a smart proposal that picks the pixel \(x_{i}\) that is more likely to change, such as the pixels around the edge between the bright and dark region of the digits.
### Wang-Landau with Gradient Proposal
Directly applying the WL algorithm with random proposals is insufficient to sample the output space efficiently, because a trained neural model learns a preferred mapping through the loss function. For example, a binary classifier maps the training inputs to either sufficiently positive or sufficiently negative logit values, which ideally should correspond to the extremely rare but semantically meaningful inputs. After the sampler explores and generates the peak centered at 0, to which most random samples correspond (Fig. 2(b)), it is almost impossible for a sampler with a random proposal to propose an input with meaningful structure (or even an in-distribution input) so that the other possible output values are explored. Of course, whether those output values correspond to in-distribution inputs is only confirmable after sampling. In summary, it is extremely difficult for the random proposal in the WL algorithm to explore all the possible output values.
We therefore propose to use the Wang-Landau algorithm framework but replace the MC proposal with the one in Gibbs-With-Gradients (GWG) sampler. GWG has a gradient proposal that takes advantage of the model's learned weights to propose inputs. In order to sample the distribution of the output prediction through GWG, we define log-probability \(f(\mathbf{x})\) as:
\[f(\mathbf{x})=S(y(\mathbf{x})),\]
where \(S\) is the count in log scale for the bin corresponding to \(y(\mathbf{x})\). The \(f(\cdot)\) that is fixed in the original GWG now changes during our sampling process given the input \(\mathbf{x}\), since the expression for \(S\) is unknown and we can only estimate the output distribution using the WL algorithm. GWG requires the gradient of \(f\), but since \(S\) is approximated using discrete bins, we apply a first-order differentiable interpolation for taking the derivative of the discrete histogram of entropy.
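One way to realize this interpolation is a piecewise-linear read-out of the entropy histogram, sketched below in PyTorch (ours; the authors' exact interpolation scheme may differ):

```python
import torch

def interp_entropy(S_bins, z, z_min, bin_width):
    # S_bins: per-bin entropy estimates (1-D tensor, treated as constants);
    # z: logit tensor with requires_grad, so autograd yields dS/dz.
    t = (z - z_min) / bin_width
    i = torch.clamp(t.detach().floor().long(), 0, S_bins.numel() - 2)
    frac = t - i.to(t.dtype)          # position of z inside its bin
    # linear blend of the two neighbouring bins; gradient wrt z is
    # (S[i+1] - S[i]) / bin_width, a first-order approximation of dS/dz
    return (1.0 - frac) * S_bins[i] + frac * S_bins[i + 1]
```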
Similar to the original WL algorithm, we first initialize two histograms with all of their bins set to 0. One of these histograms is for estimating the entropy \(S\), and the other histogram, \(H\), is a counter of how many times the sampler has visited a specific bin. \(H\) is also used for checking whether all the bins have been visited roughly equally, i.e., a flatness check. We first preset the number of iterations that the sampling will perform, as well as a modification factor \(f_{m}\) that is used to update the estimation of the entropy \(S\) iteratively. At each MC step, we interpolate \(S\) to get a differentiable approximation and take the derivative of the negation of \(S\) with respect to the output \(z\), and then with respect to the inputs \(\mathbf{x}\) using the chain rule. GWG uses this gradient to propose the next input, which is likely to have a _lower_ entropy and be accepted by the sampler. The newly proposed input sample is then accepted or rejected according to the acceptance probability:
\[A(\mathbf{x}\rightarrow\mathbf{x}^{\prime})=\min(1,e^{S_{\mathbf{x}}-S_{ \mathbf{x}^{\prime}}}\frac{q(\mathbf{x}|\mathbf{x}^{\prime})}{q(\mathbf{x}^{ \prime}|\mathbf{x})})\]
where \(q\) is the proposal distribution. When a proposal is accepted, the entropy \(S\) of the corresponding output value is updated using the modification factor \(f_{m}\); otherwise, the \(S\) of the "old" output value is updated. This sampling procedure repeats until the histogram \(H\) passes the flatness check. The sampler then enters the next iteration with the counters in \(H\) reset to 0, \(\ln f_{m}\) reduced by half, but the \(S\) histogram kept for further accumulation. This sampling procedure drives the sampler to visit rare samples whose logit values correspond to lower entropy, while providing an estimation of the entropy \(S\) at the end. The proposed algorithm is provided in Alg. 1 in the Appendix.
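The accept/reject bookkeeping above can be sketched as follows (our illustration; `gwg_propose` stands in for the Gibbs-With-Gradients proposal together with its proposal ratio \(q(\mathbf{x}|\mathbf{x}^{\prime})/q(\mathbf{x}^{\prime}|\mathbf{x})\), and `bin_of` maps a logit to its histogram bin -- these helper names are hypothetical):

```python
import math
import random

def gwl_step(model, gwg_propose, S, H, x, ln_f, bin_of):
    # Propose x' via the gradient of f(x) = S(y(x)) through the model;
    # q_ratio is q(x | x') / q(x' | x) from the proposal distribution.
    x_new, q_ratio = gwg_propose(model, S, x)
    b, b_new = bin_of(model(x)), bin_of(model(x_new))
    if random.random() < min(1.0, math.exp(S[b] - S[b_new]) * q_ratio):
        x, b = x_new, b_new           # proposal accepted
    S[b] += ln_f                      # update entropy of the visited bin
    H[b] += 1                         # visit counter for the flatness check
    return x
```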
## 4 Related Works and Discussions
**Performance Characterization** has long been explored, even before the era of deep learning (Haralick, 1992; Klette et al., 2000; Thacker et al., 2008). The input-output relationship has been explored for simple functions (Hammitt and Bartlett, 1995) and mathematical morphological operators (Gao et al., 2002; Kanungo and Haralick, 1990). Compared to existing performance characterization approaches (Ramesh et al., 1997; Bowyer and Phillips, 1998; Aghdasi, 1994; Ramesh and Haralick, 1992; 1994), our work focuses on the output distribution (Greiffenhagen et al., 2001) of a neural network over the entire input space (i.e., not task specific), following the blackbox approach (Courtney et al., 1997; Cho et al., 1997) where the system transfer function from input to output is unknown. Our setting can be viewed as the most general forward uncertainty quantification case (Lee and Chen, 2009), where the model performance is characterized when the inputs are perturbed (Roberts et al., 2021). To the best of our knowledge, we demonstrate for the first time that the challenging task of sampling the entire input space for modern neural networks is feasible and efficient, by drawing the connection between neural networks and physics models. Our proposed method can offer samples to be further integrated with the performance characterization methods mentioned above.
**Density Estimation and Energy Landscape Mapping** Previous works in density estimation focus on data density (Tabak and Turner, 2013; Liu et al., 2021), where class samples are given and the goal is to estimate the density of samples. Here we are not interested in the density of the given dataset, but the density of all the valid samples in the pixel space for a trained model. (Hill et al., 2019; Barbu and Zhu, 2020) have done the pioneering work in sampling the energy landscape for energy-based models. Their methods specifically focus on the local minimum and barriers of the energy landscape. We can relax the requirement and generalize the mapping on the "output" space where either sufficiently positive or sufficiently negative output (logit) values are meaningful in binary classifiers and other models.
**Open-world Model Evaluation** Though many neural models have achieved the SOTA performance, most of them are only on in-distribution test sets (Dosovitskiy et al., 2021; Tolstikhin et al., 2021; Steiner et al., 2021; Chen et al., 2021; Zhuang et al., 2022; He et al., 2015; Simonyan and Zisserman, 2014; Szegedy et al., 2015; Huang et al., 2017; Zagoruyko and Komodakis, 2016). Open-world settings where the test set distribution differs from the in-distribution training set create special challenges for the model. While the models have to detect the OOD samples from in-distribution samples (Liu et al., 2020; Hendrycks and Gimpel, 2016; Hendrycks et al., 2019; Hsu et al., 2020; Lee et al., 2017, 2018; Liang et al., 2018; Mohseni et al., 2020; Ren et al., 2019), we also expect sometimes the model could generalize what it learns to OOD datasets (Cao et al., 2022; Sun and Li, 2022). It has been discovered that models have over-confident predictions for some OOD samples that obviously do not align with human judgments (Nguyen et al., 2015). The OOD generalization becomes more challenging because of this discovery, because the models may not be as reliable as we thought they were. Adversarial test sets (Szegedy et al., 2013; Rozsa et al., 2016; Miyato et al., 2018; Kurakin et al., 2016; Xie et al., 2019; Madry et al., 2017) also present special challenges as models decisions are different from those of humans. Having a full view of input-output relation with all the above different kinds of test sets under consideration is important.
**Samplers** MCMC samplers (Chen et al., 2014; Welling and Teh, 2011; Li et al., 2016; Xu et al., 2018) have been developed to scale to big datasets and to sample efficiently with gradients. Recently, Gibbs-With-Gradients (GWG) (Grathwohl et al., 2021) was proposed to pick the promising pixel(s) as the proposal. To further improve sampling efficiency, CS-GLD (Deng et al., 2020) drives the sampler to explore the under-explored energy using a similar idea as the Wang-Landau algorithm (Wang and Landau, 2001). The important difference between our problem setting and the previous ones solved by other MCMC samplers is that the function or model distribution to be sampled from is unknown. The Wang-Landau algorithm utilizes the previous approximation of the distribution to drive the sampler to explore the under-explored energy regions. This algorithm can be made more efficient through parallelization (Vogel et al., 2013; Cunha-Netto et al., 2008), assumptions about continuity in the output space (Junghans et al., 2014; Li and Eisenbach, 2017), and extension to multi-dimensional outputs (Zhou et al., 2006). While the previous samplers can be applied to high-dimensional inputs, the energy functions in physics are relatively simple and symmetric. Modern neural networks, however, are complex and their performance is hard to characterize (Roberts et al., 2021). We assume no knowledge of the output properties of the model and thus apply the Wang-Landau algorithm to sample the entropy as a function of energy, but with the gradient proposal of GWG to make the sampler more efficient. Similar to GWG, our sampler can propose inputs corresponding to the under-explored regions of outputs. Further efficiency improvements could come from proposing a patch of pixel changes at once.
## 5 Experiments
In this section, we apply our proposed Gradient Wang-Landau sampler to inspect a few neural network models and present the discovered output histogram together with representative samples. The dataset and model training details are introduced in Sec. 5.1. We first empirically confirm our sampler performance through a toy example in Sec. 5.2. We then discuss results for modern binary classifiers in Sec. 5.3 and Sec. 5.4. Hyperparameters of the samplers tested are in Appendix C.
### Datasets, Models, and Other Experiment Settings
**Datasets** As aforementioned, we focus on binary classification. Therefore, we derive two datasets from the MNIST datasets by only including samples with labels \(\{0,1\}\). The training and test splits are the same as those in the original MNIST dataset.
* **Toy** is a simple dataset with \(5\times 5\) binary input images we construct. It is designed to make feasible the brute-force enumeration over the entire input space (only \(2^{5\times 5}\) different samples). We center crop the MNIST samples from \(\{0,1\}\) classes and resize them to \(5\times 5\) images. We compute the average of the pixel values and use the average as the threshold to binarize the images -- the pixel value lower than this threshold becomes \(0\); otherwise, it becomes \(1\). The duplicates are not removed for accuracy after resizing since PyTorch does not find duplicate row indices.
* **MNIST-0/1** is an MNIST dataset whose samples only have the 0,1 labels. To align with the GWG setting, the inputs are discrete and not Z-normalized. Therefore, in this dataset, the input \(\mathbf{x}\) is \(28\times 28\) dimensional with discrete pixel values from \(\{0,...,255\}\).
**Neural Network Models for Evaluation** Since the focus of this paper is not to compare different neural architectures, given the relatively small datasets we have, we train two types of models, a simple CNN and **ResNet-18** (He et al., 2015). Each pixel of the inputs is first transformed to the one-hot encoding and passed to a 3-by-3 convolution layer with 3 channel output. The **CNN** model contains 2 convolution layers with 3-by-3 filter size. The output channels are 32 and 128. The final features are average-pooled and passed to a fully-connected layer for the binary classification.
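For concreteness, a Keras sketch of this simple CNN is given below (our reconstruction from the description; the padding, the ReLU activations, and the one-hot depth of 256 grey levels are assumptions):

```python
import tensorflow as tf

def build_cnn(h=28, w=28, n_levels=256):
    inp = tf.keras.Input(shape=(h, w, n_levels))              # one-hot pixels
    x = tf.keras.layers.Conv2D(3, 3, padding="same")(inp)     # embed to 3 channels
    x = tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    x = tf.keras.layers.Conv2D(128, 3, padding="same", activation="relu")(x)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)           # average-pool features
    logit = tf.keras.layers.Dense(1)(x)                       # single binary logit z
    return tf.keras.Model(inp, logit)
```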
Please keep in mind that our goal in this experiment section is to showcase that our proposed sampler can uncover some novel interesting empirical insights for neural network models. Models with different architectures, weights due to different initialization, optimization, and/or datasets will lead to different results. Therefore, our results and discussions are all _model-specific_. Specifically, we train a simple CNN model to classify the \(5\times 5\) binary images in the Toy dataset (**CNN-Toy**). The test accuracy of this CNN-Toy model reaches \(99.7\%\), which is almost perfect. We train a simple CNN model to classify the \(28\times 28\) grey-scale images in the MNIST-0/1 dataset (**CNN-MNIST-0/1**). The test accuracy of CNN-MNIST-0/1 model is \(97.8\%\). We train a ResNet-18 model to classify the \(28\times 28\) grey-scale images in the MNIST-0/1 dataset (**ResNet-18-MNIST-0/1**). The test accuracy of ResNet-18-MNIST-0/1 model is \(100\%\).
**Sampling Methods for Comparison** We compare several different sampling methods (including our proposed method) to obtain the output histogram over the entire input space.
* **Enumeration** generates the histogram by enumerating all possible inputs (every combination of pixel values). This is slow but the most accurate method.
* **In-dist Test Samples** generates the histogram of the inputs based on the fixed test set. This is commonly used in machine learning evaluation. It is based on a very small and potentially biased subset of the entire input space.
* Wang-Landau algorithm (**WL**) generates the histogram using the Wang-Landau algorithm with a random proposal. Specifically, we randomly pick one pixel at a time and change it to any valid (discrete) value, as in this implementation 1. Footnote 1: [https://www.physics.rutgers.edu/~haule/681/src_MC/python_codes/wangLand.py](https://www.physics.rutgers.edu/~haule/681/src_MC/python_codes/wangLand.py)
* Gradient Wang-Landau (**GWL**) generates the histogram by our proposed sampler of Wang-Landau algorithm with gradient proposal.
### Results of CNN-Toy
Given the CNN-Toy model, we apply Enumeration, GWL, and In-dist Test Samples to obtain the output entropy histograms, as shown in Fig. 3. Note that our GWL method samples the relative entropy of different energy values, as duplicate \(\mathbf{x}\) may be proposed. After normalization with the maximum entropy, the GWL histogram almost exactly matches the Enumeration histogram, which is the ground truth. This confirms the accuracy of our GWL sampler, and we can apply it further to more complicated models with confidence.
Remarkably, this histogram is quite different from the expectation we presented in Fig. 2(b) -- this histogram is not even centered at \(0\), nor does it have the expected subdominant peaks on both the positive and negative sides. Instead, the dominant peak is so wide that it covers almost the entire spectrum of possible output values. From a coarse-grained overview, most of the samples are mapped around logit \(-5\), with a decay from \(-5\) to both sides in the CNN-Toy model. This shows the CNN-Toy model is biased towards predicting negative logit values for more samples.
In Fig. 3, we also present the representative samples obtained by GWL given different logit values in the CNN-Toy model. Our conjectured analysis of the representative samples is in Appendix B. From this example, one can see that the output histogram over the entire input space can offer a comprehensive understanding of neural network models, helping researchers better understand critical questions such as the distribution of the outputs, where the model maps the samples to, and what the representative samples with high likelihood are.
### Results of CNN-MNIST-0/1
**Entropy Histogram from GWL** The application of GWL to the CNN-Toy model is encouraging. Now we apply GWL to CNN-MNIST-0/1, which is trained on a real-world dataset. The results from the \(5^{\text{th}}\) iteration are shown in Fig. 4. As our GWL reveals, the output histogram of CNN-MNIST-0/1, similar to CNN-Toy's histogram, does not have the subdominant peaks. It is also different from the presumed case in Fig. 2(b). Compared with the output histogram of the CNN-Toy model (i.e., Fig. 3), for the CNN-
Figure 3: Output histograms of CNN-Toy obtained by different sampling methods. The in-distribution samples are only a very small portion of the output histogram. We also present the representative samples obtained by GWL given different logit values.
MNIST-0/1 case, the peak is on the negative boundary and the histogram is skewed towards the negative logit values. \(S\) monotonically decreases as the logit values go from negative to positive. While the in-distribution samples have logit values between \(-20\) and \(12\) as we expect, these samples are exponentially less often found than the majority samples, whose logit values are around \(-55\) (i.e., \(e^{2000}\) at logit value -20 versus \(e^{5500}\) at logit value 18 -- thousands in log scale). From a fine-grained view, the CNN-MNIST-0/1 model tends to map the human-unrecognizable samples to the very negative logit values. While previous work (Nguyen et al., 2015) showed the existence of overconfident prediction samples, our result gives a rough but quantitative characterization of this CNN, which can serve as a baseline for further improvements.
**GWL is much more efficient than WL** We first confirm the correctness of our WL sampler on a \(16\times 16\) Ising model and then apply it to this CNN model. WL takes a much longer time to converge, and we are not able to obtain converged results. Neither WL nor GWL can have more than 1 worker writing to the same set of DOS bins, or else an incorrect DOS will result (Yin and Landau, 2012). For comparison, we inspect the intermediate \(S\) results of the GWL and WL samplers, as shown in Fig. 5. As one can see from Fig. 5(a), GWL is already able to explore the logit values efficiently, from the most dominant output value around \(-55\) to the positive logit values, in the first iteration. Within only two iterations (Fig. 5(b)), GWL can discover the output histogram covering the value range from \(-55\) to \(18\). On the other hand, as presented in Fig. 5(c), the original WL can only explore the output range from around \(-55\) to \(-53\) in 60,000,000 steps (around 10 days without much substantial progress). WL converges significantly more slowly and does not finish in a reasonable time. This result indicates that GWL converges much faster than the original WL and is able to explore a much wider range of output values.
**Manual inspection on more representative samples** As shown in Fig. 4, for the CNN-MNIST-0/1 model, GWL can effectively sample input images with logit values ranging from -55 to 18. We further group these logit values per 5 units of logit value in \(S\). For every group, we sample 200 representative input images. To make sure they are not correlated, we sample every 50000 pixel changes. For demonstration purposes, we randomly pick 10-out-of-200 samples from every group in Fig. 7(a) in the Appendix. We manually inspect the sufficiently positive group (e.g., the last column in Fig. 7(a)) and the sufficiently negative groups (e.g., the first five columns in Fig. 7(a)), and there are no human-recognizable samples of digits. We also observe an interesting pattern: as the logit value increases, more and more representative samples have black backgrounds. This result suggests that the CNN-MNIST-0/1 model may heavily rely on the background to classify the images (Xiao et al., 2020). We conjecture that this is because the samples in the most dominant peak are closer to class 0 samples than to class 1 samples, and this is supported by experimental results (see Appendix D). More rigorous experiments are required to reach a definite conclusion; we leave this as future work. In summary, although CNN-MNIST-0/1 holds a very high in-distribution test accuracy, it is far from a robust model because it does not truly understand the semantic structure of the digits.
**Discussion** Fig. 4 presents challenges to OOD detection methods, which may be more model-dependent than previously thought. If the model cannot map most of the human-unrecognizable samples to outputs with high uncertainty, likelihood-based OOD detection methods (Liu et al., 2020; Hendrycks and Gimpel, 2016) cannot perform well for samples in the entire input space. Fig. 7(a) shows that the inputs with in-distribution output values (output logits of the red plot) of the CNN model may not uniquely correspond to in-distribution samples. More rigorous experiments are required to reach a definite conclusion; we leave this as future work.
### Results of ResNet-18-MNIST-0/1
**Entropy Histogram from GWL** When applying our GWL sampler to the ResNet-18-MNIST-0/1 model, for the \(0^{\text{th}}\) iteration (Fig. 6(a)), we observe that the sampler discovers a wide range of negative logit values, from around -220 to around -33, much wider than the CNN's. This range of negative logits, however, does not correspond to human-recognizable inputs, and no obvious pattern is observed, in contrast to the CNN-MNIST-0/1 results. This means the ResNet-18-MNIST-0/1 model makes more confident predictions for some samples than the CNN-MNIST-0/1 model does. Moreover, we observe a cliff around the logit value of -33, and thus we specifically sample the region from -30 to 20, where the in-distribution logits fall, and generate the representative samples in this region. Fig. 6(b) shows the entropy histogram after the \(1^{\text{st}}\) iteration. Some output regions of the in-distribution samples take a longer time to discover. This calls for a more efficient sampler in the future.
Figure 4: Output histograms of CNN-MNIST-0/1 obtained by different sampling methods. The blue scale is for GWL and the black scale is for in-distribution test samples. We also present the representative samples obtained by GWL given different logit values (more in Fig. 7(a) in Appendix).
**Manual inspection on more representative samples** Interestingly, pixel patterns similar (if not identical) to those of the CNN-MNIST-0/1 model appear, as shown in Fig. 6(b) and Fig. 7(b). The representative samples, however, have broader noisy boundaries compared to those from the CNN-MNIST-0/1 model. The same phenomenon also occurs: the double peaks of the test set samples do not align with the output distribution of the entire input space.
Because ResNet-18 is more complex than the CNN and takes a longer time to converge, we do not draw conclusions about the entropy differences for ResNet-18-MNIST-0/1. Compared with the CNN-MNIST-0/1 model, ResNet-18-MNIST-0/1 exhibits more interesting phenomena for further exploration.
## 6 Conclusion
We aim to get a full picture of the input-output relationship of a model through all inputs valid in the pixel space. We propose to obtain a histogram that estimates the entropy in the output space to better understand the input-output distribution. When the inputs are high-dimensional, enumeration or uniform sampling is either impossible or takes too long to converge. We connect the density of states in physics to this histogram of output entropy. We propose a new, efficient sampler, Wang-Landau sampling with gradient proposals, to achieve this goal. We confirm empirically that this can be achieved and uncover some new aspects of neural networks. We observe several limitations. First, though we combine two samplers that each have a theoretical guarantee of convergence and confirm the performance of the combined sampler through empirical results, we do not provide a proof of convergence for the combination. Second, because of the nature of our problem, the sampler still takes a considerable amount of time to converge, especially for more complicated network architectures such as ResNet. We avoid drawing conclusions on the distributions but provide some observations for ResNet. The sampler for ResNet is still converging, which calls for further development of faster samplers for these more complicated networks. Third, even though the ratio of recognizable samples can be derived from our sampler, our CNN model maps an enormous number of samples to the desired output region of the in-distribution inputs, and we do not observe even one human-recognizable sample out of the hundreds of representative samples. Future automatic methods could alleviate the need for human labels.
For future work, it is necessary to develop new and more efficient samplers that have theoretical guarantees to acquire this input-output relationship in order to sample with more pixels, such as the ImageNet (Deng et al., 2009). Most importantly, we can then develop new insights into network architectures developed in the last decade for _open-world_ applications using these efficient samplers.
Figure 5: Intermediate output histogram \(S\) per iteration. (a) GWL gradually explores the logit values in the first iteration. (b) GWL discovers the output histogram well within 2 iterations. (c) The original WL explores the output distribution much slower.
Figure 6: Output histograms of ResNet-18-MNIST-0/1 obtained by different sampling methods. There may be a sharp local minimum in the output landscape causing a cliff around the logit value of -30. The blue scale is for GWL and the black scale is for in-distribution test samples. We also present the representative samples obtained by GWL given different logit values. (more in Fig. 7(b) in the Appendix) |
2305.04107 | DMF-TONN: Direct Mesh-free Topology Optimization using Neural Networks | We propose a direct mesh-free method for performing topology optimization by
integrating a density field approximation neural network with a displacement
field approximation neural network. We show that this direct integration
approach can give comparable results to conventional topology optimization
techniques, with an added advantage of enabling seamless integration with
post-processing software, and a potential of topology optimization with
objectives where meshing and Finite Element Analysis (FEA) may be expensive or
not suitable. Our approach (DMF-TONN) takes in as inputs the boundary
conditions and domain coordinates and finds the optimum density field for
minimizing the loss function of compliance and volume fraction constraint
violation. The mesh-free nature is enabled by a physics-informed displacement
field approximation neural network to solve the linear elasticity partial
differential equation and replace the FEA conventionally used for calculating
the compliance. We show that using a suitable Fourier Features neural network
architecture and hyperparameters, the density field approximation neural
network can learn the weights to represent the optimal density field for the
given domain and boundary conditions, by directly backpropagating the loss
gradient through the displacement field approximation neural network, and
unlike prior work there is no requirement of a sensitivity filter, optimality
criterion method, or a separate training of density network in each topology
optimization iteration. | Aditya Joglekar, Hongrui Chen, Levent Burak Kara | 2023-05-06T18:04:51Z | http://arxiv.org/abs/2305.04107v2 | # DMF-TONN: Direct Mesh-free Topology Optimization using Neural Networks
###### Abstract
We propose a direct mesh-free method for performing topology optimization by integrating a density field approximation neural network with a displacement field approximation neural network. We show that this direct integration approach can give comparable results to conventional topology optimization techniques, with an added advantage of enabling seamless integration with post-processing software, and a potential of topology optimization with objectives where meshing and Finite Element Analysis (FEA) may be expensive or not suitable. Our approach (DMF-TONN) takes in as inputs the boundary conditions and domain coordinates and finds the optimum density field for minimizing the loss function of compliance and volume fraction constraint violation. The mesh-free nature is enabled by a physics-informed displacement field approximation neural network to solve the linear elasticity partial differential equation and replace the FEA conventionally used for calculating the compliance. We show that using a suitable Fourier Features neural network architecture and hyperparameters, the density field approximation neural network can learn the weights to represent the optimal density field for the given domain and boundary conditions, by directly backpropagating the loss gradient through the displacement field approximation neural network, and unlike prior work there is no requirement of a sensitivity filter, optimality criterion method, or a separate training of density network in each topology optimization iteration.
keywords: Topology Optimization, Physics-Informed Neural Network, Implicit Neural Representations, Mesh-free
## 1 Introduction
Topology optimization approaches like SIMP (Solid Isotropic Material with Penalisation) ([1; 2]) find the optimum structure for a given set of boundary conditions by meshing the design domain and using an iterative process where
each iteration involves an FEA calculation for computing the objectives such as compliance. Therefore, removing these iterations completely or creating a new class of solvers with a reparameterization of the design variables in this optimization problem is highly desirable. Advances in neural networks, both in learning from large amounts of data and in learning implicit representations of complex signals show great promise to bring about this transformation, and hence many new approaches trying to utilize neural networks for topology optimization have been recently developed.
Data-driven approaches perform instant optimal topology generation during inference time. However, they require a large training database generation, a long training time and face generalization issues. Online training approaches use a neural network to represent the density field of designs for an alternative parameterization. They do not face any generalization issues. However, meshing and FEA is still required.
One of the first online training topology optimization approaches, TOuNN, was proposed by Chandrasekhar and Suresh [3]. The neural network takes in as inputs the domain coordinates and outputs the density value at each of these coordinates. The loss function consists of the compliance and volume fraction constraint violation. This loss gradient is backpropagated and used for updating the weights of the neural network such that it learns the optimal density distribution for minimizing the loss. The compliance for the density field is calculated as in the traditional SIMP method using FEA. For removing the meshing requirement of FEA and creating a new class of solvers for various partial differential equations (PDEs) in computational mechanics, there have been recent advances and promising results in using physics-informed neural networks (PINNs). Samaniego et al. [4] propose an energy approach for solving the linear elasticity PDEs. The displacement field is parameterized by a neural network which takes as input the domain coordinates and outputs the displacements in each direction at each of these coordinates. The loss function consists of the potential energy, which, when minimized, gives the static equilibrium displacements.
Though the computational time of the neural network PDE approximation frameworks is worse than that of current state-of-the-art FEA solvers, there are several potential advantages of this approach, including the mesh-free nature and the easier modelling of non-linear PDEs. Incorporating these neural network PDE approximation frameworks in online training topology optimization enables mesh-free topology optimization and a new class of solvers for this complex inverse design problem.
Zehnder et al. [5] were the first to propose such a mesh-free framework for topology optimization with compliance as an objective, where in addition to the density field, the displacement field is also parameterized using a neural network. However, they conclude that connecting the two neural networks directly leads to bad local minima. Hence, they propose using the optimality criterion method and sensitivity filtering for calculating target densities. As such, the density neural network needs to be trained for estimating these target densities in every topology optimization iteration.
In this work, we show that using directly connected displacement field estimation and density field estimation neural networks is indeed an effective approach for mesh-free topology optimization. In particular, we argue that using just one gradient descent step of the density network in each topology optimization iteration, without any sensitivity or density filtering, leads to results comparable to conventional topology optimization. Moreover, after the initial run of the displacement network, we significantly reduce the number of displacement network training iterations in each topology optimization iteration. We show that transfer learning applies here and that, in this high-dimensional and non-convex optimization problem setting, approximate losses and gradients can work well.
We devise DMF-TONN as a method not for replacing SIMP, but for adding to and improving the current class of mesh-free solvers for topology optimization, using the advancements in neural networks. We use Fourier Features and a fully connected layer as the architecture of both our neural networks. We verify the effectiveness of our approach with case studies with different boundary conditions and volume fractions. The implementation of this work is available at: [https://github.com/Adityajloglekar/DMF-TONN](https://github.com/Adityajloglekar/DMF-TONN)
Figure 1: Our proposed framework. Each topology optimization iteration consists of: 1) Training the displacement network with current density field, randomly sampled domain coordinates and boundary conditions to obtain static equilibrium displacements. 2) Randomly sampling domain coordinates and performing a forward pass through the density network to obtain current topology output, and displacement network to obtain current compliance, which are passed to the density network loss function 3) Backpropagating density network loss and performing a gradient descent step on density network weights.
## 2 Literature Review
_Topology optimization_: Bendsoe and Kikuchi [6] introduced the homogenization approach for topology optimization. The SIMP method ([1; 2]) considers the relative material density in each element of the Finite Element (FE) mesh as design variables, allowing for a simpler interpretation and optimised designs with more clearly defined features. Other common approaches to topology optimization include the level-set method ([7; 8]) and evolutionary algorithms. Improving the optimization results and speed of these approaches using neural networks has seen a lot of development recently and Woldseth et al. [9] provide an extensive overview on this topic.
_Neural Networks for solving PDEs_: Driven by a neural network's ability to approximate functions, there have been several recent works proposing novel solvers for PDEs. Raissi et al. [10] propose PINNs, neural networks that are trained to solve supervised learning tasks while respecting the laws of physics described by general nonlinear partial differential equations. Samaniego et al. [4] propose an approach using neural networks which does not require labelled data points, and just uses domain coordinates and boundary conditions as input to solve computational mechanics PDEs. Nguyen-Thanh et al. [11] develop a deep energy method for finite deformation hyperelasticity. Sitzmann et al. [12] leverage periodic activation functions for implicit neural representations and demonstrate that these networks are ideally suited for representing complex natural signals and their derivatives and solving PDEs. Tancik et al. [13] show that passing input points through a simple Fourier feature mapping enables a multilayer perceptron to learn high-frequency functions in low-dimensional problem domains. We utilize this concept of Fourier Feature mapping for finding good approximations of the displacement field and density field in the low-dimensional coordinate domain.
_Neural networks for topology optimization_: Several data-driven methods for topology optimization using neural networks [14; 15; 16; 17; 18; 19; 20] have been proposed. In this review we focus on the online training topology optimization methods, i.e. those methods which do not use any prior data, rather train a neural network in a self-supervised manner for learning the optimal density distribution and topology. Chandrasekhar and Suresh [3] explore an online approach where the density field is parameterized using a neural network. Fourier projection based neural network for length scale control ([21]) and application for multi-material topology optimization ([22]) has also been explored. Deng and To [23] propose topology optimization with Deep Representation Learning, with a similar concept of re-parametrization, and demonstrate the effectiveness of their method on compliance minimization and stress-constrained problems. Hoyer et al. [24] use CNNs for density parameterization and directly enforce the constraints in each iteration, reducing the loss function to compliance only. Chen et al. [25] propose a neural network based approach to topology optimization that aims to reduce the use of support structures in additive manufacturing. Chen et al. [26] demonstrate that by using a prior initial field on the unoptimized domain, the efficiency of neural network based topology optimization can
be improved. He et al. [27] and Jeong et al. [28] approximate displacement fields using PINNs, but a continuous density field is not learned and the frameworks are not mesh-free. Lu et al. [29] demonstrate the effectiveness of hard constraints over soft constraints for solving PDEs in various topology optimization problems. Zehnder et al. [5] effectively leverage neural representations in the context of mesh-free topology optimization and use multilayer perceptrons to parameterize both the density and displacement fields. Mai et al. [30] develop a similar approach for optimum design of truss structures. We show that unlike in Zehnder et al. [5], sensitivity filtering, optimality criterion method and separate training of density network in each topology optimization epoch is not necessary for mesh-free topology optimization using neural networks.
## 3 Proposed Method
We parameterize the displacement field as well as the density field using neural networks and integrate them as shown in Figure 1.
```
1: Initialize neural networks: \(Den_{W_{den}}\), \(Disp_{W_{disp}}\)
2: Initialize Adam optimizers: \(Opt_{den}\), \(Opt_{disp}\)
3: Initialize domain \(\rho_{init}\)
4: for \(n_{initdisp}\) iterations do
5:   Sample domain coordinates \(X_{disp}\)
6:   \(u_{temp}\gets Disp_{W_{disp}}(X_{disp})\)
7:   \(W_{disp}\gets Opt_{disp}.step(W_{disp},\frac{\partial L_{disp}(u_{temp},\rho_{init})}{\partial W_{disp}})\)
8: end for
9: for \(n_{opt}\) iterations do
10:   for \(n_{disp}\) iterations do
11:     Sample domain coordinates \(X_{disp}\)
12:     \(\rho_{temp}\gets Den_{W_{den}}(X_{disp})\)
13:     \(u_{temp}\gets Disp_{W_{disp}}(X_{disp})\)
14:     \(W_{disp}\gets Opt_{disp}.step(W_{disp},\frac{\partial L_{disp}(u_{temp},\rho_{temp})}{\partial W_{disp}})\)
15:   end for
16:   Sample domain coordinates \(X_{den}\)
17:   \(\rho\gets Den_{W_{den}}(X_{den})\)
18:   \(u\gets Disp_{W_{disp}}(X_{den})\)
19:   \(c\gets L_{disp}(u,\rho)+EW\)
20:   \(W_{den}\gets Opt_{den}.step(W_{den},\frac{\partial L_{den}(\rho,c)}{\partial W_{den}})\)
21: end for
```
**Algorithm 1** DMF-TONN
### Density Neural Network
The density neural network \(\textit{Den}(\textbf{X}_{den})\) can be represented as follows:
\[\textit{Den}(\textbf{X}_{den})=\sigma(\sin(\textbf{X}_{den}\textbf{K}_{den}+ \textbf{b})\textbf{W}_{den}) \tag{1}\]
The input is a batch of randomly sampled domain coordinates \(\mathbf{X}_{den(\text{batchsize}\times 3)}\). We use the domain center as the origin for the coordinates, and the coordinates are normalized, with the longest dimension coordinates ranging from -0.5 to 0.5. We use the concept proposed in Tancik et al. [13] and a neural network architecture similar to the one used in Chandrasekhar and Suresh [21]. The first layer weights (kernel \(\mathbf{K}_{den(3\times\text{kernelsize})}\)) are fixed, and create Fourier features after passing through the sine activation. We add a bias term \(\mathbf{b}\) consisting of ones before applying the sine activation to break the symmetry about the origin. The kernel is created using a grid with the same number of dimensions as the domain, and then reshaping the grid coordinates to the matrix \(\mathbf{K}_{den(3\times\text{kernelsize})}\). The grid size in each dimension dictates how well topological features can be represented, and the grid's range of values controls the frequency of the output topology, with higher ranges of values giving a topology with more intricate features. Note that this grid is not a mesh structure, and consists solely of coordinates. We find that making the kernel trainable can slightly improve compliance. However, we keep it fixed for all the experiments in this paper, as the slight increase in performance may not be worth the large number of additional trainable weights. The next layer weights (\(\mathbf{W}_{den(\text{kernelsize}\times 1)}\)) are trainable, and the output is passed through a sigmoid activation (\(\sigma\)). This ensures the output values are between 0 and 1, representing the density at each of the coordinates in the input batch. We use Adam (Kingma and Ba [31]) as the optimizer, with a learning rate of \(2.0\times 10^{-3}\) for all the experiments.
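A minimal TensorFlow sketch of this density network is shown below (ours; shapes and initialization scales are assumptions, and the Fourier kernel is passed in precomputed from the coordinate grid):

```python
import tensorflow as tf

class DensityNet(tf.keras.Model):
    def __init__(self, kernel):
        super().__init__()
        # kernel: (3, kernel_size) Fourier-feature kernel, kept fixed
        self.K = tf.constant(kernel, tf.float32)
        self.b = tf.ones((1, kernel.shape[1]))          # breaks symmetry at origin
        self.W = tf.Variable(tf.random.normal((kernel.shape[1], 1)) * 0.01)

    def call(self, X):                                  # X: (batch, 3) coordinates
        feats = tf.sin(tf.matmul(X, self.K) + self.b)   # Fourier features
        return tf.sigmoid(tf.matmul(feats, self.W))     # densities in (0, 1)
```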
### Displacement Neural Network
We use a neural network similar to the density neural network for approximating the displacement field. The physics-informed components shown in Samaniego et al. [4] are then added on the displacement output by this neural network \(\mathit{Disp}(\mathbf{X}_{disp})\). This can be represented as follows:
\[\mathit{Disp}(\mathbf{X}_{disp})=\sin(\mathbf{X}_{disp}\mathbf{K}_{disp}+ \mathbf{b})\mathbf{W}_{disp} \tag{2}\]
We use randomly sampled domain coordinates \(\mathbf{X}_{disp}\) in each displacement network iteration. The frequency determined by \(\mathbf{K}_{disp}\) should be greater than or equal to the frequency determined by \(\mathbf{K}_{den}\). This is because if the displacement network is unable to capture and pass fine changes in displacement to the density network while the density network is attempting to create very fine features, incorrect features are created and disconnections are observed in the final topology. For all our experiments, we use the same frequencies and grid sizes for \(\mathbf{K}_{disp}\) and \(\mathbf{K}_{den}\) and find this setting works well. Multiplying the Fourier features with \(\mathbf{W}_{disp(\text{kernelsize}\times 3)}\) gives the displacements in each direction.
#### 3.2.1 Displacement Constraints
Boundary conditions on displacements, such as fixed sides, are implemented as hard constraints. The output of \(\mathit{Disp}(\mathbf{X}_{disp})\) is multiplied by a differentiable function that is 0 at the fixed boundary and 1 elsewhere. We use the exponential function for this. For example, for a cuboidal domain whose side with all \(x\) coordinates equal to zero is fixed (zero displacement in all three directions), such as in Figure 2a, the hard constraint function takes the form \(2(\frac{1}{1+\exp(-m(c_{x}+0.5))}-0.5)\), where \(c_{x}\) denotes the \(x\) coordinates in the domain, and \(m\) is a constant that dictates the slope of this function. We empirically find that \(m=20\) works well and use it for all our experiments. For multiple fixed sides, the displacements output by the neural network are multiplied by the functions for each fixed side.
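As a small illustration, the hard constraint for the single fixed side of Figure 2a might be implemented as below; `disp_net` and the function names are placeholders of ours.

```python
import tensorflow as tf

def fixed_side_mask(coords, m=20.0):
    """Differentiable multiplier: 0 on the fixed side (c_x = -0.5), ~1 elsewhere."""
    c_x = coords[:, :1]
    return 2.0 * (1.0 / (1.0 + tf.exp(-m * (c_x + 0.5))) - 0.5)

def constrained_disp(disp_net, coords):
    # For multiple fixed sides, multiply by one such mask per fixed side.
    return disp_net(coords) * fixed_side_mask(coords)
```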
#### 3.2.2 Minimum Potential Energy Loss Function
The principle of minimum potential energy is used for approximating the displacement field, as proposed in Samaniego et al. [4]. The neural network learns the weights that output the displacements that minimize the potential energy, and thus learns to output static equilibrium displacements. With Monte-Carlo sampling, the loss function of the displacement neural network, \(L_{disp}=\) Potential Energy, for 3D problems, is defined as follows:
\[L_{disp}=ISE-EW \tag{3}\]
\[ISE=\frac{V}{N}\sum_{i}^{N}(\mu\epsilon_{i}:\epsilon_{i}+\frac{\lambda(trace( \epsilon_{i}))^{2}}{2}) \tag{4}\]
\[EW=\frac{A}{N_{b}}\sum_{i}^{N_{b}}Tu_{i} \tag{5}\]
where,
\(ISE=\) Internal Strain Energy
\(EW=\) External Work
\(V=\) domain volume
\(N=\) number of sample points in domain
\(\mu=\frac{E}{2(1+\nu)}\), \(\lambda=\frac{E\nu}{((1+\nu)(1-2\nu))}\)
\(E=\) Young's Modulus
\(\nu=\) Poisson's ratio
\(\epsilon_{i}=\) strain matrix at \(i^{th}\) point
\(A=\) area on which traction is applied
\(N_{b}=\) number of sample points on boundary
\(T=\) traction
\(u_{i}=\) displacement at \(i^{th}\) point
The symbol ':' indicates element-wise multiplication, followed by a summation.
The strains are calculated using automatic differentiation in TensorFlow (Abadi et al. [32]). We use Adam as the optimizer, with a learning rate of \(5.0\times 10^{-6}\) for all the experiments.
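To make the loss concrete, a minimal sketch of Eqs. (3)-(5) in TensorFlow (the framework named above for automatic differentiation) is given below; the helper signature, the name `disp_net`, and the interpretation of \(Tu_{i}\) as a dot product between the traction vector and the displacement are our own assumptions.

```python
import tensorflow as tf

E0, NU = 1000.0, 0.3   # Young's modulus and Poisson's ratio used in the paper

def energies(disp_net, x, rho, V, x_b, traction, A):
    """Monte-Carlo estimates of Eqs. (4) and (5).
    x: (N, 3) domain samples, rho: (N, 1) densities at x,
    x_b: (N_b, 3) samples on the loaded boundary, traction: (3,) vector."""
    E = E0 * rho ** 3                                    # SIMP interpolation
    mu = tf.squeeze(E / (2.0 * (1.0 + NU)), -1)
    lam = tf.squeeze(E * NU / ((1.0 + NU) * (1.0 - 2.0 * NU)), -1)
    with tf.GradientTape() as tape:                      # strains via autodiff
        tape.watch(x)
        u = disp_net(x)                                  # (N, 3) displacements
    grad_u = tape.batch_jacobian(u, x)                   # (N, 3, 3)
    eps = 0.5 * (grad_u + tf.transpose(grad_u, [0, 2, 1]))        # strain tensor
    e_density = (mu * tf.reduce_sum(eps * eps, axis=[1, 2])       # mu * (eps : eps)
                 + 0.5 * lam * tf.linalg.trace(eps) ** 2)
    ise = V * tf.reduce_mean(e_density)                  # Eq. (4): (V/N) * sum
    ew = A * tf.reduce_mean(tf.reduce_sum(traction * disp_net(x_b), axis=-1))  # Eq. (5)
    return ise, ew                                       # L_disp = ise - ew, Eq. (3)
```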
### Integration of Density and Displacement Neural Networks
A topology optimization epoch starts by training the displacement network with randomly sampled coordinates, the corresponding current topology (found
by a forward pass through the density network) and the boundary conditions. The conventional SIMP method interpolation (\(E=E_{material}(\rho^{3})\), where \(\rho\) is the density) is used for obtaining the Young's modulus \(E\) at each of the randomly sampled domain points in each displacement network iteration. Then, with randomly sampled coordinates, a forward pass is performed through the density network and the displacement network to get the current topology and current compliance (Internal Strain Energy) respectively, which are passed to the density network loss function. The density network loss function is defined as follows:
\[L_{den}=\frac{c}{c_{0}}+\alpha(\frac{v}{V^{*}}-1)^{2} \tag{6}\]
where,
\(c=\) compliance
\(c_{0}=\) initial compliance
\(v=\) volume fraction
\(V^{*}=\) target volume fraction
\(\alpha=\) penalty constant
The compliance (\(c\)) is a function of the densities (\(\rho\)) and displacements (\(u\)), where the displacements are also dependent on the densities. As shown in Zehnder et al. [5], the total gradient of the compliance with respect to the densities is given by \(\frac{dC}{d\rho}=\frac{\partial C}{\partial\rho}+\frac{\partial C}{\partial u }\frac{du}{d\rho}=-\frac{\partial C}{\partial\rho}\), which needs to be incorporated while connecting the two neural networks and backpropagating the loss to the density network weights.
As shown in Algorithm 1, during each topology optimization iteration, the density network weights are updated once with a gradient descent step in Adam. Before starting the topology optimization, the displacement network is run with the initial domain as the topology for \(n_{dispinit}=1000\) iterations to converge to static equilibrium displacements. Then, in each topology optimization iteration, utilizing the concept of transfer learning (the topology does not change drastically between iterations), we run the displacement network for only \(n_{disp}=20\) iterations. We determined the values of \(n_{dispinit}\) and \(n_{disp}\) empirically to give the best results for the cases presented in the paper. Though we have to increase the number of topology optimization iterations, this 50-fold reduction in displacement network iterations significantly reduces the computational time without compromising the compliance of the results.
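Putting the pieces together, the loop of Algorithm 1 could be sketched as follows. It reuses `density`, `W_den`, and `energies` from the sketches above, assumes a displacement network `disp_net` with variables `disp_vars`, samplers `sample_domain`/`sample_boundary`, and constants `V`, `T`, `A`, and uses an assumed penalty constant; the minus sign on the compliance term realizes the total gradient \(\frac{dC}{d\rho}=-\frac{\partial C}{\partial\rho}\) discussed above.

```python
import tensorflow as tf

opt_disp = tf.keras.optimizers.Adam(5.0e-6)
opt_den = tf.keras.optimizers.Adam(2.0e-3)
ALPHA, V_STAR = 100.0, 0.3   # penalty constant (assumed value) and target volume fraction

def disp_step(rho_fn):
    """One gradient step on the displacement weights (minimum potential energy)."""
    x, x_b = sample_domain(), sample_boundary()
    with tf.GradientTape() as t:
        ise, ew = energies(disp_net, x, rho_fn(x), V, x_b, T, A)
        loss = ise - ew                                   # Eq. (3)
    opt_disp.apply_gradients(zip(t.gradient(loss, disp_vars), disp_vars))

for _ in range(1000):                                     # n_dispinit warm-up on rho = 0.5
    disp_step(lambda x: tf.fill([tf.shape(x)[0], 1], 0.5))

c0 = None
for _ in range(700):                                      # n_opt topology optimization epochs
    for _ in range(20):                                   # n_disp transfer-learned inner steps
        disp_step(lambda x: tf.stop_gradient(density(x)))
    x = sample_domain()
    with tf.GradientTape() as t:
        rho = density(x)
        c, _ = energies(disp_net, x, rho, V, sample_boundary(), T, A)  # c = ISE
        c0 = tf.stop_gradient(c) if c0 is None else c0
        vol = tf.reduce_mean(rho)                         # current volume fraction
        # Eq. (6) with the compliance term negated so autodiff yields the
        # total gradient dC/drho = -dC/drho|_u (Zehnder et al.)
        loss_den = -c / c0 + ALPHA * (vol / V_STAR - 1.0) ** 2
    opt_den.apply_gradients(zip(t.gradient(loss_den, [W_den]), [W_den]))
```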
We run all of our experiments on a machine with 12th Gen Intel(R) Core(TM) i7-12700 2.10 GHz processor, 16 GB of RAM and Nvidia GeForce RTX 3060 GPU.
## 4 Results
We first compare our results with a conventional SIMP topology optimization (Andreassen et al. [33]) and NTopo (Zehnder et al. [5]) for a 2D cantilever beam problem. Then, for a 3D cantilever problem, we present the initial convergence history of the displacement network, and the convergence history of
the compliance. Subsequently, we perform a case study for the 3D cantilever beam problem over varying volume fractions and load locations, comparing our results with a conventional 3D SIMP topology optimizer (Liu and Tovar [34]), and with a method using the Fourier Features neural network for density representation and FEA for compliance calculation (similar to Chandrasekhar and Suresh [21]). Then, we present an example showcasing the trade-offs in DMF-TONN and SIMP in terms of the compliance and computational time. Lastly, we validate our approach on some additional examples.
The output of our network represents an implicit function of the spatial densities. We use the marching cubes algorithm for generating the renders of the results. It should be noted that with DMF-TONN, one can sample infinitely many points within the domain, as a continuous and differentiable function has been learned by the density network. In this paper, we use twice the number of samples of the FEA grid size in each direction for the DMF-TONN figures for each example. On all solutions obtained by our approach, we run FEA for calculating the final compliance and use this to compare against the compliance obtained by SIMP (using the same FE solver) for consistency. In all the figures, 'c' refers to compliance and 'vf' refers to volume fraction. We ensure the degrees of freedom available are always more for SIMP than for DMF-TONN. Moreover, in SIMP, we use a density filter with a radius 1.5 times the mesh element size in each of the presented examples, ensuring thin features and lower compliances are possible, and there is no compromise in the SIMP results.
\begin{table}
\begin{tabular}{c|c|c|c} \hline \hline Metric & SIMP & NTopo & DMF-TONN \\ \hline Convergence Compliance & \(4.94\times 10^{-4}\) & \(4.23\times 10^{-4}\) & \(4.76\times 10^{-4}\) \\ Volume Fraction Achieved & 0.30 & 0.30 & 0.30 \\ Binary Structure Compliance & \(4.55\times 10^{-4}\) & \(3.79\times 10^{-4}\) & \(4.01\times 10^{-4}\) \\ Time & 124 s & 1461 s & 275 s \\ \hline \hline \end{tabular}
\end{table}
Table 2: Comparison for 2D cantilever beam problem with target volume fraction = 0.3
\begin{table}
\begin{tabular}{c|c|c|c} \hline \hline Metric & SIMP & NTopo & DMF-TONN \\ \hline Convergence Compliance & \(2.86\times 10^{-4}\) & \(2.54\times 10^{-4}\) & \(2.73\times 10^{-4}\) \\ Volume Fraction Achieved & 0.50 & 0.50 & 0.50 \\ Binary Structure Compliance & \(2.66\times 10^{-4}\) & \(2.40\times 10^{-4}\) & \(2.52\times 10^{-4}\) \\ Time & 69 s & 1465 s & 277 s \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison for 2D cantilever beam problem with target volume fraction = 0.5
Figure 3: Case Study of 3D Cantilever Beam Problem
Figure 2: Convergence history of DMF-TONN and result comparison with SIMP for 3D cantilever beam problem with target volume fraction = 0.3
### Comparison of DMF-TONN, SIMP and NTopo for 2D Cantilever Beam example
We compare the compliances and computational times for a 2D cantilever beam example in Tables 1 and 2. We run the SIMP code (Andreassen et al. [33]) with the default convergence criterion, and run NTopo for 200 iterations as shown in Zehnder et al. [5] for a similar 2D cantilever beam example. We run our method for \(n_{opt}=2000\) topology optimization iterations (determined empirically for 2D problems), with \(n_{disp}=20\) iterations of the displacement network in each of these topology optimization iterations. We also compare the binary (0 and 1 density values) structure compliance (ensuring the volume fraction remains the same after thresholding), which provides the actual compliance if the optimized structures were to be used in practice. We observe that though our method is slower than SIMP, it results in a mesh-free optimization with a better compliance than SIMP and a faster computational time than NTopo.
### Analysis of a 3D cantilever beam example
In Figure 2, we present the convergence history of DMF-TONN and a comparison with 3D SIMP topology optimization for an example with the boundary conditions shown in Figure 2a. We use a \(40\times 20\times 8\) grid for the SIMP FEA, which gives 6400 design variables for the topology optimization, and \(6400\times 3=19200\) degrees of freedom (DOF) for the FEA. We use a model with fewer DOF, with the number of trainable weights in both our density and displacement networks being 4096 each. We use an initial topology consisting of uniform densities of 0.5 (\(\rho_{init(40\times 20\times 8)}=\mathbf{0.5}\)) as input for the initial training of the displacement network (PINN), and Figure 2d shows the convergence history. Figures 2e and 2f show the displacement field in the \(y\) direction at the cross-section of the domain where the force is applied, and the maximum displacement value at different iterations of the initial run of the displacement network. Figure 2c shows the FEA displacement field at this cross-section. We see that by the \(1000^{th}\) iteration, the displacement network has learned a very good approximation of the FEA displacement field.
In each of the \(n_{opt}=700\) topology optimization iterations (determined empirically for 3D problems), we use only \(n_{disp}=20\) displacement network iterations and show that this is adequate for achieving results (Figure 2h) similar to SIMP (Figure 2b). Figure 2g shows the convergence history for the topology optimization, with the FEA compliance plotted for consistency with SIMP. Our approach takes 588 seconds compared to 227 seconds for SIMP, but achieves mesh-free topology optimization and a better compliance than SIMP for this example.
### Case Study of 3D Cantilever Beam problem
In Figure 3 we compare our fully mesh-free approach with a Finite Element Analysis based Neural Network Topology Optimization (FENN TO) approach (i.e., using a Fourier features neural network for representing the density field and FEA for compliance calculation) and with the conventional SIMP approach. We vary the load location and target volume fraction for the 3D cantilever beam problem. All approaches are run for 700 iterations. We show the plots of the compliance of all three approaches for all these boundary conditions in Figure 3b, where the x-axis contains the discrete boundary conditions and the y-axis represents the compliance for each of these boundary conditions. Our fully mesh-free approach achieves compliance values similar to the existing SIMP and FENN TO approaches for all the boundary conditions in this case study. The total computational times for running all the examples in this case study are 236 minutes, 96 minutes, and 124 minutes for DMF-TONN, SIMP, and FENN TO, respectively.

\begin{table}
\begin{tabular}{l|l|l|l} \hline \hline Metric & DMF-TONN & SIMP (\(60\times 20\times 8\) grid) & SIMP (\(90\times 30\times 12\) grid) \\ \hline Displacement Calculation Degrees of Freedom & 3456 & 28800 & 97200 \\ No. of Optimization Design Variables & 3456 & 9600 & 32400 \\ Convergence Compliance & \(2.40\times 10^{-3}\) & \(2.65\times 10^{-3}\) & \(2.38\times 10^{-3}\) \\ Volume Fraction Achieved & 0.30 & 0.30 & 0.30 \\ Computational Time & 517 s & 99 s & 807 s \\ \hline \hline \end{tabular}
\end{table}
Table 3: Long cantilever beam with bottom load with target volume fraction \(=0.3\)

Figure 4: Long cantilever beam with center load with target volume fraction \(=0.3\)
### Trade-off Analysis of DMF-TONN and SIMP
In Table 3, we present the results for a cantilever beam loaded at the bottom of its right end, with a ratio of 3 between the side lengths in the \(x\) and \(y\) directions (the orientation of the axes is the same as in Figure 2a). For the Top3D (SIMP) method, we use its stated convergence criteria of a maximum of 200 iterations and a tolerance of 0.01 on the change in topology. Though the degrees of freedom are fewer for DMF-TONN, it still achieves a better compliance than SIMP with a grid of \(60\times 20\times 8\). However, the computational time is much higher for DMF-TONN. We then increase the grid size of SIMP to \(90\times 30\times 12\) to achieve a better compliance than DMF-TONN. However, as seen in Table 3, due to the 1.5-fold increase in grid size in each direction, the computational time of the SIMP method increases steeply, and the computational time of DMF-TONN is lower than that of the fine-mesh SIMP. This showcases one of the advantages of the mesh-free nature of DMF-TONN, presenting interesting opportunities for trade-offs to be explored in future research.
### Additional Examples
In Figure 4, the load acts at the center of the right side of a long cantilever beam. The compliance values obtained by DMF-TONN for these boundary conditions are better than those obtained by Top3D (SIMP) (\(60\times 20\times 8\) grid). In Figure 5, the boundary conditions include two loads twisting a beam fixed on one side. The ideal topology should be a hollowed-out beam, and DMF-TONN correctly outputs a similar topology. For this example, we observe that the compliance of the SIMP (\(60\times 15\times 15\) grid) result is better than that of DMF-TONN. In Figure 6 we present a case in which a passive (non-design) region is present in the upper right quarter of the domain. This condition is enforced using an additional constraint-violation objective for the density network, where the loss is penalized if the density network outputs density values close to 1 in the passive region. For this L-Bracket example, the compliance of the result obtained by DMF-TONN is better than that obtained by SIMP (\(30\times 30\times 10\) grid). In Figure 7, both the left and right sides are fixed and the load acts at the center of the beam. The compliances of the results obtained by SIMP (\(60\times 20\times 8\) grid) and DMF-TONN are similar for this example.
The computational time for DMF-TONN for each of the presented examples is less than 600 seconds. We use a Young's modulus \(E=1000\,N/mm^{2}\), a Poisson's ratio of 0.3, a force of \(0.1\,N\), and we normalize the domain with the longest side \(=1\,mm\) for all examples. We use 6000 randomly sampled domain points as the batch in each iteration. We use a kernel grid size of 16 in all three directions and set the upper and lower bounds of the kernel values to 35 and -35, respectively. We find this hyper-parameter setting works well for all problems, except when the domain has a markedly skewed side-length ratio, such as in the long beam and bridge examples. In those cases, one has to adjust the kernel size in the different directions accordingly. For these examples, we use 24 kernel grid points in the longest direction, 12 kernel grid points in the other directions, and upper and lower bounds of the kernel values of 45 and -45, respectively, for the results presented. For each of the presented examples, the degrees of freedom are always more for SIMP than for DMF-TONN.

Figure 5: Long cantilever beam with two loads with target volume fraction = 0.5

Figure 6: L-Bracket with target volume fraction = 0.2

Figure 7: Bridge with target volume fraction = 0.3
## 5 Conclusion
We show that using directly connected displacement-field and density-field estimation neural networks is indeed an effective approach for mesh-free topology optimization. We verify through various examples that DMF-TONN, which uses just one gradient descent step of the density network in each topology optimization epoch, without any sensitivity filtering or density filtering, leads to results comparable to conventional topology optimization. We significantly reduce the computational time compared to prior related works and also explore the trade-offs between DMF-TONN and SIMP for a cantilever beam example, showcasing the advantage of the mesh-free nature of DMF-TONN.
There are several limitations observed currently with DMF-TONN. The first concerns the kernels used in the density and displacement neural networks. The kernel grid has to be scaled according to the domain size if there exist sides with a length ratio greater than or equal to 3. Moreover, we observed that for low target volume fractions (less than 0.2), the kernel is not able to capture the required features and the optimization does not converge.
The SIMP method is certainly still one of the best methods for performing topology optimization, and we emphasize that the goal of this work is not to beat SIMP, but rather to devise new techniques and a class of mesh-free solvers using the advancements in neural networks, for this complex inverse design problem of topology optimization. We show that DMF-TONN works well for various 3D problems and is a stepping stone towards this goal. Future work involves improving the robustness of the kernel, extending the approach to complex problems, and experimenting with and analyzing the effect and advantages of different domain coordinate sampling methods.
## 6 Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. |
# PolyGNN: Polyhedron-based Graph Neural Network for 3D Building Reconstruction from Point Clouds

Zhaiyu Chen, Yilei Shi, Liangliang Nan, Zhitong Xiong, Xiao Xiang Zhu
###### Abstract
We present PolyGNN, a polyhedron-based graph neural network for 3D building reconstruction from point clouds. PolyGNN learns to assemble primitives obtained by polyhedral decomposition via graph node classification, achieving a watertight, compact, and weakly semantic reconstruction. To effectively represent arbitrary-shaped polyhedra in the neural network, we propose three different sampling strategies to select representative points as polyhedron-wise queries, enabling efficient occupancy inference. Furthermore, we incorporate the inter-polyhedron adjacency to enhance the classification of the graph nodes. We also observe that existing city-building models are abstractions of the underlying instances. To address this abstraction gap and provide a fair evaluation of the proposed method, we develop our method on a large-scale synthetic dataset covering 500k+ buildings with well-defined ground truths of polyhedral class labels. We further conduct a transferability analysis across cities and on real-world point clouds. Both qualitative and quantitative results demonstrate the effectiveness of our method, particularly its efficiency for large-scale reconstructions. The source code and data of our work are available at [https://github.com/chenzhaiyu/polygnn](https://github.com/chenzhaiyu/polygnn).
## 1 Introduction
Three-dimensional (3D) building models constitute an important infrastructure in shaping digital twin cities, facilitating a broad range of applications including urban planning, energy demand estimation, and environmental analysis (Biljecki et al., 2015; Opoku et al., 2021). Therefore, efficient reconstruction of high-quality 3D building models is crucial for understanding an urban environment and has been a long-standing challenge.
Most reconstruction methods are dedicated to detailed surfaces represented by dense triangles (Kazhdan and Hoppe, 2013; Erler et al., 2020; Stucker et al., 2022), irrespective of the ubiquitous piecewise planarity in the built environment. Instead, a compact polygonal representation with sparse parameters can adequately capture the geometry of urban buildings. To reconstruct compact polygonal building models, three categories of methods are commonly employed in practice. Model-based reconstruction methods (Zhou and Neumann, 2010; Li et al., 2016) represent buildings by utilizing a library of pre-defined templates. However, the limited variety of available templates constrains the expressiveness of these methods. Geometric simplification methods (Bouzas et al., 2020; Li and Nan, 2021) aim to obtain compact surfaces by simplifying dense triangle ones. These techniques, however, necessitate an input model that is precise in both its geometry and topology to ensure a faithful approximation. Primitive assembly methods (Nan and Wonka, 2017; Huang et al., 2022) produce polygonal surface models by pursuing an optimal assembly of a collection of geometric primitives. However, these methods often entail the use of handcrafted features and thus possess limited representational capacity.
Despite the successes in various other applications, learning-based solutions for reconstructing compact building models have been largely unexplored. Points2Poly (Chen et al., 2022) is a pioneering effort with the primitive assembly strategy: it learns building occupancy based on an implicit representation, followed by a Markov random field to promote compactness. By design, its occupancy learning is agnostic of the primitive-induced hypotheses, resulting in a lack of efficiency that hinders its application at scale.
In this paper, we introduce PolyGNN, a polyhedron-based graph neural network, for reconstructing building models from point clouds. PolyGNN utilizes the decomposition of a building's ambient space into a set of polyhedra as strong priors. It learns to assemble the polyhedra to achieve a watertight, compact, and weakly semantic reconstruction formulated as end-to-end graph node classification. The neural network can be efficiently optimized, enabling building model reconstruction at scale.
Our key idea lies in coupling occupancy estimation with the polyhedral decomposition by primitive assembly. Instead of learning a continuous function with traditional deep implicit fields, we opt for learning a piecewise planar occupancy function from the polyhedra. There, one challenge involves consistently representing the heterogeneous geometry of arbitrary-shaped polyhedra. To this end, we propose sampling a set of representative points inside the polyhedron as queries. Conditioned on the latent building shape, these polyhedron-wise queries then collectively describe the
building occupancy. We propose three sampling strategies, namely volume sampling, boundary sampling, and skeleton sampling, and assess their respective performances.
Moreover, we observe that existing 3D city models are abstracted from real-world buildings, and they typically lack geometric details. Thus, using existing mesh models as ground truths is inherently inadequate due to systematic "errors". To facilitate a supervised learning setup for PolyGNN, we resort to creating a large-scale synthetic dataset comprised of simulated airborne LiDAR point clouds and building models. The synthetic dataset enables reliable one-to-one mapping between the two sources, effectively addressing the potential abstraction gap. Subsequently, we evaluate the transferability of our method across cities and on real-world point clouds.
The main contributions of this paper are summarized as follows:
* We introduce PolyGNN, a polyhedron-based graph neural network for reconstructing compact polygonal building models from point clouds. PolyGNN achieves end-to-end optimization through graph node classification.
* We propose three sampling strategies for representing arbitrary-shaped polyhedra in the neural network and assess their respective performances.
* We introduce a large-scale synthetic LiDAR dataset for developing learning-based urban building reconstruction methods, which consists of over 500k buildings with polyhedral class labels.
## 2 Related work
In this section, we discuss two categories of methods used for polygonal building model reconstruction: model-based reconstruction and primitive assembly. Following this, we introduce the line of research in neural implicit representation, from which we draw inspiration for our work.
### Model-based reconstruction
Model-based reconstruction methods represent a building by utilizing a library of common building components, in the form of pre-defined templates.
Although satellite data offers prospects for global building models, they often lack the necessary level of detail (LoD) and quality, and thus are primarily limited to 3D models at LoD1 (Zhu et al., 2022; Sun et al., 2022). The Manhattan-world assumption restricts the orientation of building surfaces in the three dominant directions and represents buildings with axis-aligned polycubes (Ikehata et al., 2015; Li et al., 2016, 2016). Another common practice is restricting the output surface to specific disk topologies. For example, the 2.5D view-dependent representation (Zhou and Neumann, 2010) can generate building roofs with vertical walls connecting them from LiDAR measurements. Xiong et al. (2014, 2015) exploit roof topology graphs for reconstructing LoD2 buildings from predefined building primitives, which is extended by Mwangangi (2019) for UAV images. Similarly, Li et al. (2016) present a workflow to reconstruct building mass models from UAV measurements. Kelly et al. (2017) formulate a global optimization to produce structured urban reconstruction from street-level imagery, GIS footprint, and coarse 3D mesh. The model-based approaches simplify the reconstruction with uniformity assumptions and are thus efficient to implement. However, they only apply to specific domains as a limited variety of the models constrains the expressiveness of these methods. Our reconstruction method, instead, does not rely on a model library, thus remaining generic.
### Primitive assembly
Primitive assembly methods can produce compact polygonal surface models by pursuing an optimal assembly of a set of geometric primitives.
Connectivity-based approaches (Chen and Chen, 2008; Van Kreveld et al., 2011; Schindler et al., 2011) address the assembly by extracting proper geometric primitives from an adjacency graph built on planar shapes. While efficient in analyzing the graph, these methods are sensitive to the quality of the graph: linkage errors contaminating the connectivity can compromise the reconstruction. A hybrid strategy proposed by Lafarge and Alliez (2013) and Holzmann et al. (2018) represents high-confidence areas by polygons and more complex regions by dense triangles. Arikan et al. (2013) present an interactive optimization-based snapping solution, which requires labor-intensive human involvement in handling complex structures.
Slicing-based approaches are more robust to imperfect data with the hypothesis-and-selection strategy. In practice, planar primitives can be detected from RANSAC (Schnabel et al., 2007), region growing (Rabbani et al., 2006), and via neural networks (Li et al., 2019; Le et al., 2021). With the primitives, Chauve et al. (2010); Mura et al. (2016); Nan and Wonka (2017); Bauchet and Lafarge (2020) partition the 3D space into polyhedral cells by extending the primitives to supporting planes, transforming the reconstruction into a labeling problem where the polyhedral cells are labeled as either inside or outside the shape or equivalently by labeling other primitives. Li and Wu (2021) extend PolyFit (Nan and Wonka, 2017) to leverage the inter-relation of the primitives for procedural modeling. Huang et al. (2022) further extend PolyFit by introducing a new energy term to encourage roof preferences and two additional hard constraints to ensure correct topology and enhance detail recovery. Fang and Lafarge (2020) propose a hybrid approach for reconstructing 3D objects by successively connecting and slicing planes identified from 3D data. A combined approach of rule-based and hypothesis-based strategies is proposed by Xie et al. (2021). Our method inherits primitive assembly while achieving selection by a graph neural network with the polyhedra from convex decomposition.
### Implicit neural representation
Recent advances in deep implicit fields have revealed their potential for 3D surface reconstruction (Park et al.,
2019; Peng et al., 2020; Erler et al., 2020; Yao et al., 2021; Williams et al., 2022), and also specifically for buildings (Stucker et al., 2022). The crux of these methods is to learn a continuous function to map the input, such as a point cloud, to a scalar field. The surface of an object can then be extracted using iso-surfacing techniques like Marching Cubes (Lorensen and Cline, 1987). To learn a more regularized field, Rella et al. (2022) and Yang et al. (2023) both propose to learn the displacements from queries towards the surface and model shapes as vector fields. However, these methods still require iso-surfacing to extract the final surfaces. Though iso-surfacing is effective in extracting smooth surfaces, it struggles to preserve sharp features and introduces discretization errors. Consequently, deep implicit fields alone are not inherently suitable for reconstructing compact polygonal models. By incorporating constructive solid geometry, Chen et al. (2020) and Deng et al. (2020) both introduce implicit fields for reconstructing convexes obtained via binary space partitioning. The inputs to these neural networks are images and voxels, while we focus on point clouds.
Points2Poly (Chen et al., 2022) is a pioneering learning-based effort for polygonal building reconstruction. The key enabler is a learned implicit representation that indicates the occupancy of a building, followed by a Markov random field for a favorable geometric complexity. By its design, Points2Poly is composed of two separate parts, hence cannot be optimized end-to-end. The occupancy learning is agnostic of the hypothesis. This limits its exploitation of deep features, and in turn, limits its efficiency. The prohibitive complexity hinders its application at scale. In contrast, our method directly learns to classify the polyhedra with an end-to-end neural architecture, underpinning great efficiency.
## 3 Methodology
### Overview
We formulate building reconstruction as a graph node classification problem. As shown in Figure 1, we first decompose the ambient space of a building into a cell complex of candidate polyhedra following binary space partitioning. PolyGNN represents the cell complex as a graph structure and classifies the polyhedral nodes into two classes: _interior_ and _exterior_. Finally, the building surface model can be extracted as the boundary between the two classes of polyhedra.
The above procedure can be formulated as follows. Given an unordered point set \(\mathcal{X}=\{x_{1},x_{2},...,x_{q}\}\) with \(x_{i}\in\mathbb{R}^{3}\) as input, we first decompose the ambient space into an undirected graph embedding \(\mathcal{G}=(\mathcal{V},\mathcal{E}\mid\mathcal{X})\), where \(\mathcal{V}=\{v_{1},v_{2},...,v_{m}\}\) and \(\mathcal{E}\subseteq\mathcal{V}\times\mathcal{V}\) represent non-overlapping convex polyhedra and their edges, respectively. \(\mathcal{G}\) serves as a volumetric embedding, from which we seek an appropriate subset of \(\mathcal{V}\) to align with the occupancy of the underlying building instance. The surface reconstruction is therefore transformed into an assignment problem which we address with a graph neural network \(\hat{f}\):
\[\hat{f}(\mathcal{V}\mid\mathcal{X},\mathcal{E})\approx f(\mathcal{V}\mid\mathcal{X},\mathcal{E})=Y, \tag{1}\]
where \(Y=\{y_{1},y_{2},...,y_{m}\}\) with \(y_{i}\in\{0,1\}\). Figure 2 illustrates the architecture of PolyGNN for solving the graph node classification problem, which consists of two stages:
* **Polyhedral graph encoding**. A graph structure is constructed from polyhedral decomposition, with polyhedra being graph nodes. Node features are formed by conditioning polyhedron-wise queries on a multi-scale shape latent.
* **Graph node classification**. With the encoded node features and inter-polyhedron adjacency, graph nodes are classified for building occupancy estimation.
PolyGNN is end-to-end optimizable for polyhedra classification. In the following, we elaborate on the two main components of the network.
### Polyhedral graph encoding
#### 3.2.1 Polyhedral graph construction
We adopt the adaptive binary space partitioning approach introduced by Chen et al. (2022). We first identify a set of planar primitives from the input point cloud that comprises the building and partition the ambient 3D space to generate a linear cell complex of non-overlapping polyhedra that complies with the primitives. As illustrated in Figure 3, vertical primitives and primitives with larger areas are given higher priority. The tessellation is spatially adaptive therefore being efficient and respective to the building's geometry. The partitioning also involves the construction of a binary tree, which records the hierarchical information of polyhedra and their inter-polyhedron adjacency.
Figure 1: Reconstruction by polyhedra classification. Candidate polyhedra (a) are generated by polyhedral decomposition and are classified by PolyGNN into _interior_ ones and _exterior_ ones (b). The surface (c) is extracted in between pairs of polyhedra of different classes. (d) (e) (f) are illustrations of 2D cross sections of (a) (b) (c), respectively.
#### 3.2.2 Point cloud encoding
By adaptive binary space partitioning, the polyhedral embedding \(\mathcal{G}\) is produced in the Euclidean space, from which we seek an appropriate subset of \(\mathcal{V}\) to align with the occupancy of the underlying building instance. In addition, we obtain the shape latent \(\mathbf{z}\) embedded in the feature space, via a graph neural network.
We transform \(\mathcal{X}\) into a neural feature representation, a process which, in principle, can be achieved through any point cloud encoder. We experimented and chose a lightweight yet efficient one. Specifically, multi-level features are encoded with layers of dynamic edge convolutions (Wang et al., 2019):
\[g^{l}_{x_{i}}=\sum_{x_{j}\in\mathcal{N}(x_{i})}\Phi\left(\left[h^{l}_{x_{i}},h^ {l}_{x_{j}},e_{x_{i},x_{j}}\right]\right), \tag{2}\]
where \(g^{l}_{x_{i}}\) is the output feature of point \(x_{i}\) at layer \(l\), and \(\mathcal{N}\left(x_{i}\right)\) is the set of neighboring points of \(x_{i}\). \(h^{l}_{x_{i}}\) and \(h^{l}_{x_{j}}\) are the input feature representations of points \(x_{i}\) and \(x_{j}\) at layer \(l\), respectively. \(e_{x_{i},x_{j}}\) is the edge feature between \(x_{i}\) and \(x_{j}\), and \(\Phi\) is a multi-layer perceptron (MLP) that maps the concatenated input to a new feature space. These multi-level features are further concatenated and aggregated by an MLP, followed by a max pooling operator to form a global latent feature denoted as \(\mathbf{z}\):
\[\mathbf{z}=\max_{i=1}^{n}\Phi\left(\left[g^{1}_{x_{i}},g^{2}_{x_{i}},\ldots,g^{L}_{x_{i}}\right]\right), \tag{3}\]
where \(L\) represents the maximum level, i.e., the number of edge convolution layers.
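For illustration, a minimal PyTorch sketch of the encoder in Equations 2 and 3 is given below (the framework choice, the neighbor count, and the difference edge feature \(h^{l}_{x_{j}}-h^{l}_{x_{i}}\) are our assumptions; \(L=3\) layers and a 256-dimensional latent follow the implementation details).

```python
import torch

def knn(h, k):
    d = torch.cdist(h, h)                               # (n, n) pairwise distances
    return d.topk(k + 1, largest=False).indices[:, 1:]  # drop the point itself

def edge_conv(h, mlp, k=16):
    idx = knn(h, k)                                     # dynamic graph in feature space
    h_j = h[idx]                                        # (n, k, c) neighbor features
    h_i = h.unsqueeze(1).expand_as(h_j)
    e = h_j - h_i                                       # edge features e_{x_i, x_j}
    return mlp(torch.cat([h_i, h_j, e], dim=-1)).sum(dim=1)  # Eq. (2), sum aggregation

class PointEncoder(torch.nn.Module):
    def __init__(self, dims=(3, 64, 128, 256), z_dim=256):
        super().__init__()
        self.mlps = torch.nn.ModuleList(
            torch.nn.Sequential(torch.nn.Linear(3 * c_in, c_out), torch.nn.LeakyReLU())
            for c_in, c_out in zip(dims[:-1], dims[1:]))
        self.head = torch.nn.Linear(sum(dims[1:]), z_dim)

    def forward(self, x):                               # x: (n, 3) one building's points
        feats, h = [], x
        for mlp in self.mlps:                           # L = 3 edge convolution layers
            h = edge_conv(h, mlp)
            feats.append(h)
        g = self.head(torch.cat(feats, dim=-1))         # aggregate multi-level features
        return g.max(dim=0).values                      # max pooling -> z, Eq. (3)
```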
#### 3.2.3 Query sampling
To encode an arbitrary-shaped polyhedron, one challenge lies in consistently describing the heterogeneous polyhedral geometry. To address this, we propose sampling representative points from inside the polyhedron and coercing the geometry into fixed-length queries of size \(k\). It is clear that the more representative the sampled points are, the more information they convey about the polyhedron. We evaluate three sampling strategies, namely volume sampling, boundary sampling, and skeleton sampling, as illustrated in Figure 4. The volume variant randomly takes points inside the volume of a polyhedron, carrying relatively the least amount of geometric information about the polyhedron. The boundary variant samples points on the boundary of a polyhedron with area-induced probability, as outlined in Algorithm 3.1. This variant can better depict polyhedral occupancy with boundary information. Furthermore, as described in Algorithm 3.2, the skeleton variant picks samples from both vertices and principal axes, arguably offering the most efficient description of a polyhedron among the three variants. Vertices, because of their prominence in describing sharp geometry, are prioritized over points along the axes when a low \(k\) value is given.

Figure 3: Illustration of adaptive binary space partitioning. During partitioning, a binary tree is dynamically constructed to analyze inter-polyhedron adjacency. \(t\) denotes iteration.

Figure 2: Architecture of PolyGNN. Given an input point cloud, a graph structure is constructed from polyhedral decomposition, with polyhedra being graph nodes. Node features are formed by conditioning polyhedron-wise queries on a multi-scale shape latent. With the encoded node features and inter-polyhedron adjacency, graph nodes are classified for building occupancy estimation.
```
Input: Vertices \(\mathcal{V}\), facets \(\mathcal{F}\), and #samples \(k\)
Output: Representative points \(\mathcal{S}\)
1: \(\mathcal{S}\), \(\mathcal{T}\), \(a\leftarrow\) init \(\emptyset\)
2: for \(i\gets 1\) to \(|\mathcal{F}|\) do
3:    \(\mathcal{T}_{i}\leftarrow\) triangulate(\(\mathcal{V}_{\mathcal{F}_{i}}\))
4:    \(a_{i}\leftarrow\) area(\(\mathcal{T}_{i}\))
5: end for
6: \(\mathcal{T}^{\prime}\leftarrow\) sample\({}^{a}_{k}\)(\(\mathcal{T}\))   // draw \(k\) triangles with probability proportional to area
7: for \(j\gets 1\) to \(k\) do
8:    \(S_{j}\leftarrow\) sample(\(\mathcal{T}^{\prime}_{j}\))   // uniform point inside triangle \(\mathcal{T}^{\prime}_{j}\)
9: end for
10: return \(\mathcal{S}\)
```
**Algorithm 3.1** Boundary sampling(\(\mathcal{V}\), \(\mathcal{F}\), \(k\))
The representative points obtained by all three sampling strategies reduce the complexity of an arbitrary-shaped polyhedron to a fixed-size feature vector \(\mathcal{S}=\{s_{1},s_{2},...,s_{k}\}\) that can be consumed by the neural network while preserving geometric information to different extents. These representative points then serve as queries against \(\mathbf{z}\), resulting in the formation of polyhedron-wise features. Intuitively, these queries are used to jointly describe the occupancy of the underlying polyhedron. A performance comparison of the three sampling strategies is provided in Section 5.
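For concreteness, a runnable Python version of Algorithm 3.1 could look as follows, assuming each facet is given as vertex indices ordered around a convex face so that a triangle fan is valid; the barycentric flip is the standard trick for uniform sampling inside a triangle.

```python
import numpy as np

def boundary_sampling(vertices, facets, k, rng=np.random.default_rng()):
    """vertices: (n, 3) array; facets: lists of vertex indices per face."""
    tris = [vertices[[f[0], f[i], f[i + 1]]]           # fan triangulation
            for f in facets for i in range(1, len(f) - 1)]
    tris = np.stack(tris)                              # (t, 3, 3)
    cross = np.cross(tris[:, 1] - tris[:, 0], tris[:, 2] - tris[:, 0])
    areas = 0.5 * np.linalg.norm(cross, axis=1)
    chosen = rng.choice(len(tris), size=k, p=areas / areas.sum())
    u, v = rng.random((2, k))                          # barycentric coordinates
    flip = u + v > 1.0                                 # reflect into the triangle
    u[flip], v[flip] = 1.0 - u[flip], 1.0 - v[flip]
    t = tris[chosen]
    return t[:, 0] + u[:, None] * (t[:, 1] - t[:, 0]) + v[:, None] * (t[:, 2] - t[:, 0])
```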
#### 3.2.4 Forming polyhedron-wise features
Figure 4: Sampling representative points from a polyhedron. (a) - (c) visualize different strategies to sample the polyhedron highlighted in (d). **Volume**: Points are sampled randomly from inside the volume. **Boundary**: Points are sampled from the boundary as described in Algorithm 3.1. **Skeleton**: Points are sampled along the polyhedral skeleton as described in Algorithm 3.2. Representative points are colored by their parent polyhedra.

Figure 5: Fusion of shape latent and polyhedral queries to form shape-conditioned queries and subsequently polyhedron-wise features.

Inspired by the recent advance in 3D shape representation learning (Park et al., 2019; Yao et al., 2021), we form a shape-conditioned implicit representation \(\mathbf{z}_{v}\) of the polyhedron by concatenating the coordinates of queries \(S_{v}\) with the latent feature \(\mathbf{z}\):
\[\mathbf{z}_{v}=\Phi\left(S_{v}\mid\mathbf{z}\right)=\Phi\left(\left[s_{1},s_{2},...,s_{k},\mathbf{z}\right]\right), \tag{4}\]
where \(\Phi\) represents an MLP. Figure 5 illustrates the formation of polyhedron-wise features. This formulation allows modeling multiple building instances with a single neural network. Intuitively, \(\mathbf{z}_{v}\) is a discrete occupancy function that, given a polyhedron, outputs the likelihood of the polyhedron being occupied by the building. Such a representation can be interpreted as a spatial classifier for which the decision boundary is the surface of the building. Note that instead of approximating a continuous implicit function by exhaustive enumeration, our discretized formulation takes geometric priors of individual polyhedra into account, which drastically reduces computational complexity and solution ambiguity, while being end-to-end optimizable. Figure 6 demonstrates such a distinction using a 2D example.
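A minimal PyTorch sketch of this fusion (Eq. 4) is given below; \(k=16\) and the 256-dimensional latent follow the implementation details, while the layer sizes and the flattening of the \(k\) queries are our assumptions.

```python
import torch

class PolyhedronEncoder(torch.nn.Module):
    def __init__(self, k=16, z_dim=256, out_dim=256):
        super().__init__()
        self.mlp = torch.nn.Sequential(
            torch.nn.Linear(3 * k + z_dim, out_dim), torch.nn.LeakyReLU(),
            torch.nn.Linear(out_dim, out_dim))

    def forward(self, queries, z):
        # queries: (m, k, 3) per-polyhedron samples, z: (z_dim,) shape latent
        s = queries.flatten(1)                          # (m, 3k) query coordinates
        z = z.unsqueeze(0).expand(s.size(0), -1)        # condition on the shape
        return self.mlp(torch.cat([s, z], dim=-1))      # (m, out_dim) features z_v
```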
### Graph node classification
The polyhedron-wise features produced by Equation 4 do not yet consider inter-polyhedron adjacency. This adjacency provides additional information that can enhance the classification of individual polyhedra. To incorporate this topological information and thereby enhance the classification, we utilize another stack of graph convolution layers for graph node classification as outlined in Equation 1. Specifically, we employ topology-adaptive graph convolution (Du et al., 2017) for its adaptivity to the topology of the graph and computational efficiency. It utilizes a set of fixed-size learnable filters for graph convolution, defined as follows:
\[\mathbf{G}^{l+1}=\sum_{k=0}^{K}\left(\mathbf{D}^{-1/2}\mathbf{A}\mathbf{D}^{-1/2}\right)^{k}\mathbf{G}^{l}\mathbf{\Theta}_{k}, \tag{5}\]
where \(\mathbf{G}^{l}\) and \(\mathbf{G}^{l+1}\) denote the node features before and after the convolution at the \(l\)-th layer, respectively, and \(\mathbf{\Theta}_{k}\) is the learnable weight matrix of the \(k\)-th filter. \(\mathbf{A}\) is the adjacency matrix implied by \(\mathcal{E}\) in Equation 1, and \(\mathbf{D}=\text{diag}[\mathbf{d}]\) with the \(i\)-th component \(d(i)=\sum_{j}\mathbf{A}_{i,j}\). \(K\) is the number of filters, whose topologies are adaptive to the topology of the graph as they scan the graph to perform convolution.
Multiple graph convolution layers defined in Equation 5 are stacked to increase the receptive field of the neural network. The feature of the \(i\)-th node \(G_{i}\) is then fed into a binary classification head with the softmax activation function to produce the likelihood of the polyhedron \(v_{i}\) being _interior_:
\[\hat{y}_{i}=\text{softmax}\left(\Phi\left(G_{i}\right)\right), \tag{6}\]
where \(\Phi\) represents an MLP.
We train the network by minimizing the discrepancy between the prediction and the ground truth. Since the non-building polyhedra dominate the candidate space, we employ focal loss to alleviate the class imbalance:
\[\begin{split}\mathcal{L}=-\frac{1}{N}\,\sum_{i=1}^{N}\big{[}y_{i }\cdot(1-\hat{y}_{i})^{\gamma}\log(\hat{y}_{i})\\ +(1-y_{i})\cdot\hat{y}_{i}^{\gamma}\log(1-\hat{y}_{i})\big{]}, \end{split} \tag{7}\]
where \(y_{i}\) represents the \(i\)-th element of the ground truth label vector, and \(\hat{y}_{i}\) is derived from Equation 6. \(N\) is the total number of polyhedra, and \(\gamma\) is the focusing parameter. The network can be optimized end-to-end without any auxiliary supervision.
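For illustration, the classifier and loss could be sketched with the TAGConv operator of PyTorch Geometric (our library choice); the three graph convolution layers follow the implementation details, while \(\gamma=2\) and the hidden width are assumptions.

```python
import torch
from torch_geometric.nn import TAGConv  # topology-adaptive graph convolution

class NodeClassifier(torch.nn.Module):
    def __init__(self, in_dim=256, hidden=256, K=3):
        super().__init__()
        self.convs = torch.nn.ModuleList([
            TAGConv(in_dim, hidden, K=K),     # Eq. (5): K fixed-size filters
            TAGConv(hidden, hidden, K=K),
            TAGConv(hidden, hidden, K=K)])
        self.head = torch.nn.Linear(hidden, 2)

    def forward(self, z_v, edge_index):
        h = z_v                               # polyhedron-wise features
        for conv in self.convs:               # stacked to grow the receptive field
            h = torch.nn.functional.leaky_relu(conv(h, edge_index))
        return self.head(h)                   # logits per polyhedron, Eq. (6)

def focal_loss(logits, labels, gamma=2.0):
    """Binary focal loss of Eq. (7); labels are 0 (exterior) or 1 (interior)."""
    p = torch.softmax(logits, dim=-1)[:, 1]   # predicted P(interior)
    p_t = torch.where(labels == 1, p, 1.0 - p)
    return -(((1.0 - p_t) ** gamma) * torch.log(p_t.clamp_min(1e-8))).mean()
```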
In the testing phase, given a building instance, we predict the occupancy of the candidate polyhedra. Then the surface lies in between pairs of polyhedra \(\{v_{i},v_{j}\}\) with different class predictions, i.e., \(y_{i}\neq y_{j}\), as shown in Figure 1.
## 4 Experimental settings
### Datasets
Unlike other applications that adhere to rigorous definitions of ground truths, reconstructing large-scale polygonal buildings presents a challenge due to the abstraction of existing building models, resulting in inevitable deviations from actual measurements, as shown in Figure 7. This abstraction would impede a supervised learning algorithm due to its inherent biases. To overcome this abstraction gap, we create a synthetic dataset comprised of simulated airborne LiDAR point clouds and their corresponding building mesh models. This dataset enables a reliable mapping between the two sources, thus contributing to the orthogonality of the proposed method with respect to classification.
Formally, let \(\mathcal{X}_{r}\) and \(Y_{m}\) be a real-world point cloud and its corresponding building model, respectively. Due to
Figure 6: Instead of learning a continuous function underpinned by points with traditional deep implicit fields (left), PolyGNN learns a piecewise planar occupancy function from polyhedral decomposition (right).
the abstraction gap, the mapping \(f\): \(\mathcal{X}_{r}\to Y_{m}\) cannot be accurately learned by a neural network. Instead, we opt to learn an auxiliary mapping \(f^{\prime}\): \(\mathcal{X}_{m}\to Y_{m}\) where \(\mathcal{X}_{m}\) is derived from \(Y_{m}\) by synthesizing \(\mathcal{X}_{r}\). Once \(f^{\prime}\) is learned, it can be applied to \(\mathcal{X}_{r}\) to obtain the corresponding output \(Y_{x}^{\prime}=f^{\prime}(\mathcal{X}_{r})\). Using synthetic data in our task offers two-fold advantages. First, it enables the learning of the desired mapping by circumventing the abstraction, allowing the classifier to be trained and evaluated independently of potential data discrepancies. Moreover, it facilitates the exploration of a large volume of "free" training data, which benefits the learning algorithm at large.
We utilize the _Helios++_ simulation toolkit (Winiwarter et al., 2022) to simulate airborne LiDAR scanning. LoD2 building models from Bavaria, Germany are used as references for their high quality and coverage (State of Bavaria, 2022). Artifacts such as noise and inter-building occlusions are intentionally included in the scanning process, to assimilate the distribution of \(\mathcal{X}_{m}\) and \(\mathcal{X}_{r}\), thereby enhancing the robustness of the neural network against real-world measurements. The virtual sensor closely emulates the characteristics of _Leica HYPERION2+_, utilizing an oscillating optics system with a pulse frequency of 1.5 MHz and a scan frequency of 150 Hz. We simulate an airborne survey performed by a _Cirrus SR22_ aircraft flying at an altitude of 400 m with a strip interval of 160 m. Our training dataset includes 281k+ buildings in Munich, Germany, with an additional 10k buildings reserved for evaluation. To assess the cross-city transferability of PolyGNN, we also synthesize data from 220k+ buildings in Nuremberg, Germany. In addition to the synthetic data, we apply the trained model directly to a real-world airborne LiDAR point cloud dataset containing 1,452 buildings. For an individual building, we generate a set of polyhedra with inter-polyhedron adjacency as pre-processing (see Section 3.2.1), and use ray tracing to determine the ground truth occupancy label for every polyhedron.
### Evaluation metrics
We utilize multiple criteria to evaluate the performance of the reconstruction. The classification accuracy directly impacts the fidelity of the reconstruction and is therefore evaluated. Furthermore, since the ground truths are reliably defined in our setting, we quantify the surface discrepancy between the reconstructed surface and the ground truth by calculating the Hausdorff distance \(H(A,B)\):
\[H(A,B)=\max\left\{\sup_{a\in A}\inf_{b\in B}d(a,b),\sup_{b\in B}\inf_{a\in A}d(a,b)\right\}, \tag{8}\]
where \(d(a,b)\) represents the distance between points \(a\) and \(b\). We randomly sample 10k points from both the reconstructed surface and the ground truth and calculate both the absolute and relative distances. For fair comparisons in the context of large-scale reconstruction, in the event of an unsolvable reconstruction (e.g., due to the absence of _interior_ polyhedra, or a timeout), we assign the length of the largest side of the bounding box as the absolute distance, and 100% as the relative distance. Additionally, we measure the geometric complexity of the reconstructed building models in terms of the number of faces they comprise, and measure the computational efficiency in terms of the running time, with a 5-min timeout for an individual building.
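Concretely, the symmetric Hausdorff distance of Eq. (8) over the two 10k-point samples can be computed with KD-trees, e.g.:

```python
import numpy as np
from scipy.spatial import cKDTree

def hausdorff(a, b):
    """a, b: (n, 3) point samples from the reconstruction and the ground truth."""
    d_ab = cKDTree(b).query(a)[0]   # nearest-neighbor distance from each a to b
    d_ba = cKDTree(a).query(b)[0]   # and from each b to a
    return max(d_ab.max(), d_ba.max())
```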
### Implementation details
For point cloud encoding, we set the number of layers of dynamic edge convolutions \(L\) to 3. The same setting also applies to the graph convolutions for node classification. We implemented adaptive space partitioning with robust Boolean spatial operations from _SageMath_ (The Sage Developers, 2021). The size of the global latent feature \(\mathbf{z}\) is set to 256. For query sampling, we set the number of samples per polyhedron \(k\) to 16 for all three sampling strategies, to balance representativeness and computational complexity. Leaky ReLU is used as the activation function. Although our implementation supports variable-length input point clouds with mini-batching, unless otherwise specified, the input point clouds are downsampled to 4,096 points for efficiency. Point clouds are normalized before being fed into the network and are rescaled for computing the Hausdorff distance. All experiments are optimized with Adam using a base learning rate of \(10^{-3}\) and weight decay of \(10^{-6}\), on 4 \(\times\) NVIDIA RTX A6000 GPUs with batch size 64. The network variants are trained for 50 epochs for the ablation experiments and continue until 150 epochs for the best model.
## 5 Results and analysis
### Reconstruction performance
PolyGNN achieves an average accuracy of 96.4% for polyhedra classification and an average error of 0.81 m Hausdorff distance on the held-out Munich evaluation set. Notably, we observe a strong correlation between these two metrics: on building instances of similar levels of geometric complexity, accurate classification often leads to lower geometric errors. The reconstructed building models, as shown in Figure 8, demonstrate conformity to the distribution of the point clouds while maintaining compactness for potential downstream applications. Buildings with simpler geometry exhibit more regularity in the reconstruction. Furthermore, following space partitioning, the reconstruction demonstrates weak semantic associations, as depicted by the colored polyhedra in Figure 8.

Figure 7: Examples of abstraction gaps between real-world point clouds \(\mathcal{X}_{r}\) and existing building models \(Y_{m}\). Instead of learning \(f\): \(\mathcal{X}_{r}\to Y_{m}\), we learn an auxiliary mapping \(f^{\prime}\): \(\mathcal{X}_{m}\to Y_{m}\), where \(\mathcal{X}_{m}\) is derived from \(Y_{m}\) by synthesizing \(\mathcal{X}_{r}\). Point clouds are rendered by their height fields.
To evaluate the transferability of PolyGNN, we applied the model trained on Munich data to reconstruct the buildings in Nuremberg. Figure 9 showcases the reconstruction of the downtown area of Nuremberg. Quantitatively, the classification accuracy achieves 96.3%, and the Hausdorff distance measures 0.78 m. The comparable accuracy demonstrates a commendable cross-city transferability of our approach when confronted with buildings that may vary in architectural styles. The prediction takes about 4 min for 4,185 buildings in the downtown area, highlighting the efficiency of our approach for large-scale reconstruction.
Figure 10 depicts the reconstruction results obtained by directly applying the model trained exclusively on synthetic data to a real-world point cloud in Munich. As expected, there is a domain gap between synthetic and real-world data, resulting in suboptimal reconstructions for certain buildings, especially those with architectural styles that are less represented in the training data. Nevertheless, it is noteworthy that the majority of the reconstructed buildings align well with the distribution of the input point clouds. Figure 11 further demonstrates cases where we apply the trained model with planar primitives extracted by RANSAC. By learning the underlying mapping, the reconstruction can approximate the point cloud distribution better than the ground truth does, which validates the effectiveness of learning the auxiliary mapping.
### Ablation study
As shown in Table 1, among the three sampling strategies presented in Figure 4, skeleton sampling demonstrates superiority in terms of both classification accuracy and geometric accuracy, followed by boundary sampling. This finding aligns with the fact that both skeleton sampling and boundary sampling leverage more explicit geometric information compared to the volume counterpart, and the skeleton of a polyhedron captures the most critical information conveyed by its vertices and principal axes.
\begin{table}
\begin{tabular}{c c c} \hline Query sampling & Accuracy (\%) \(\uparrow\) & Error (m) \(\downarrow\) \\ \hline Random & 94.5 & 1.20 \\ Boundary & 94.7 & 1.12 \\ Skeleton & **95.5** & **1.08** \\ \hline \end{tabular}
\end{table}
Table 1: Impact of query sampling strategy on model performance.
Figure 8: Reconstruction examples on Munich data. From top to bottom: input point cloud, polyhedra classified as building components (colored randomly), reconstructed model, and the ground truth model. Point clouds are rendered by their height fields.
The individual contributions of the classification head and the adjacency information to the reconstruction performance are analyzed through another ablation experiment, as presented in Table 2. Removing the classification head and replacing it with a regression head causes the network to collapse completely, leading to the prediction of every polyhedron as an _exterior_ one. In this case, the classification accuracy represents the average proportion of _exterior_ polyhedra. The high dominance of non-building polyhedra (87.3%) justifies the use of focal loss in our network design. Furthermore, the results provide clear evidence that incorporating inter-polyhedron adjacency information significantly enhances the reconstruction performance compared to relying solely on individual polyhedral information (95.5% vs. 93.7%). This improvement suggests that PolyGNN effectively exploits neighborhood information for occupancy estimation. Additionally, Figure 12 visually demonstrates the effectiveness of such information, where the network utilizes adjacency information to achieve a more regularized reconstruction. This regularization is analogous to the MRF employed in Chen et al. (2022), while with PolyGNN the regularization is integrated into the feature space, avoiding any additional computational overhead.
PolyGNN is designed to be agnostic to the number of points, allowing for point clouds with different sizes as inputs with advanced mini-batching. In Table 3, we present a comparison of three different point cloud sampling options: random sampling, coarse grid sampling with a resolution of 0.05 within a unit cube, and fine grid sampling with a resolution of 0.01 within the same unit cube. The results demonstrate that random sampling outperforms grid sampling with both resolutions in terms of accuracy. It is worth noting that our random sampling strategy is dynamic, where different random points are selected in different epochs during training. This dynamic random sampling can also be considered a form of data augmentation. Although computationally more efficient, coarse grid sampling does not entail sufficient details for the input point cloud. Interestingly, fine grid sampling leads to significantly longer training times, yet it yields lower accuracy compared to random sampling. This inferiority can be attributed to the inherent difficulty of encoding the global shape descriptor \(\mathbf{z}\) from inputs of different sizes.
\begin{table}
\begin{tabular}{c c c c}
\hline
Classification & Adjacency & Accuracy (\%) \(\uparrow\) & Error (m) \(\downarrow\) \\
\hline
✗ & ✗ & 87.3 & - \\
✓ & ✗ & 93.7 & 1.80 \\
✓ & ✓ & **95.5** & **1.08** \\
\hline
\end{tabular}
\end{table}
Table 2: Impact of classification head and adjacency information on the model performance. “-” indicates complete failure where no model is reconstructed.
Figure 9: Reconstruction of Nuremberg downtown buildings with PolyGNN trained on the Munich data.
### Comparison with state-of-the-art methods
Table 4 presents a comparison between our method and state-of-the-art methods in urban reconstruction, while Figure 13 showcases examples for visual comparison. The 2.5D DC method (Zhou and Neumann, 2010) was unable to represent building models with a concise set of parameters. In contrast, the other three methods exhibit compact reconstructions. Compared to the traditional optimization-based approach City3D (Huang et al., 2022), our method demonstrates the capability to handle more complex buildings commonly found in large-scale urban scenes. City3D adopts exhaustive partitioning, leading to a larger candidate space for searching. It only managed to reconstruct 9,887 out of 10,000 buildings within a 5-min timeout using its Gurobi solver (Gurobi Optimization, LLC, 2023), resulting in inferior reconstruction accuracy as measured by the balanced Hausdorff distance. In contrast, our approach utilizes adaptive space partitioning, resulting in a more compact candidate space that enhances both efficiency and overall accuracy. Furthermore, in comparison to the learning-based method Points2Poly (Chen et al., 2022), our method leverages an end-to-end neural architecture that underpins efficiency, while achieving comparable geometric accuracy.
Figure 11: Reconstruction from real-world point clouds with extracted planar primitives by RANSAC. From left to right: input point cloud colored by height field, the same input colored by primitives, cell complex, reconstructed model, ground truth. Note how the reconstruction approximates the point cloud distribution more than the ground truth does. Point clouds are rendered by their height fields.
Figure 12: Impact of adjacency in PolyGNN reconstruction. From left to right: input, reconstructed model w/o adjacency, reconstructed model w/ adjacency, ground truth. The errors are measured by Hausdorff distance. The point cloud is rendered by its height field.
\begin{table}
\begin{tabular}{c c c}
\hline
Point sampling & Accuracy (\%) \(\uparrow\) & Time _train_ (h) \(\downarrow\) \\
\hline
Grid (res. 0.05) & 94.6 & **0.7** \\
Grid (res. 0.01) & 94.8 & 24.6 \\
\hline
Random & **95.5** & 2.1 \\
\hline
\end{tabular}
\end{table}
Table 3: Impact of point cloud sampling strategy on model performance. “res” represents grid resolution relative to a unit cube.
Figure 10: Reconstruction from a real-world point cloud in Munich with PolyGNN trained exclusively on the synthetic data.
Figure 14 presents the running time comparison among the different methods, highlighting the superior efficiency of our approach. City3D, which relies on an integer programming solver, experiences computational bottlenecks when the number of planar primitives increases. As a result, for certain complex buildings, the reconstruction cannot be solved within a feasible time frame of 24 hours. Points2Poly requires approximately 4 days for 1 epoch of training, whereas our method only takes 2 hours (50\(\times\) faster). The longer inference time of Points2Poly, on the other hand, comes mostly from two factors. Firstly, more effort is required for its occupancy estimation. Table 5 presents the comparison with the learning-based method Points2Poly for reconstructing the building in Figure 2 with 60 planar segments. Points2Poly enumerates queries with signed distance values to learn a smooth boundary,
\begin{table}
\begin{tabular}{c c c c c}
\hline \hline
Method & Learning & Accuracy (\%) \(\uparrow\) & Error (m) \(\downarrow\) & Error (\%) \(\downarrow\) \\
\hline
2.5D DC & ✗ & - & - & - \\
City3D & ✗ & - & 1.10 & 6.0 \\
Points2Poly & ✓ & 96.4 & 0.83 & 4.7 \\
\hline
PolyGNN (ours) & ✓ & **96.4** & **0.81** & **4.7** \\
\hline \hline
\end{tabular}
\end{table}
Table 4: Accuracy comparison with 2.5D DC (Zhou and Neumann, 2010), City3D (Huang et al., 2022), and Points2Poly (Chen et al., 2022). The errors are measured by Hausdorff distance, which does not apply to 2.5D DC.
Figure 13: Qualitative comparison with state-of-the-arts. From left to right: input point cloud, 2.5D DC (Zhou and Neumann, 2010), City3D (Huang et al., 2022), Points2Poly (Chen et al., 2022), and PolyGNN (ours). The errors are measured by Hausdorff distance, which does not apply to 2.5D DC. Point clouds are rendered by their height fields.
\begin{table}
\begin{tabular}{c c c c c c}
\hline \hline
Method & Label type & \#Queries _train_ & \#Queries _test_ & Efficiency _train_ & Efficiency _test_ \\
\hline
Points2Poly w/ exh. & Class + value & 2,600,000 & 14,146,600 & 1x & 1x \\
Points2Poly w/ ada. & Class + value & 2,600,000 & 1,127,100 & 1x & 13x \\
\hline
PolyGNN (ours) w/ exh. & Class & 174,112 & 174,112 & 15x & 81x \\
PolyGNN (ours) w/ ada. & Class & 13,872 & 13,872 & 187x & 1020x \\
\hline \hline
\end{tabular}
\end{table}
Table 5: Efficiency comparison between Points2Poly (Chen et al., 2022) and ours, two learning-based methods. The building in Figure 2 with 60 planar segments is taken for calculating the number of queries. Efficiency is a calculated factor based on the number of queries, and the actual gain may differ due to parallelization.
while ours only requires discrete binary-class queries directly describing the piecewise planar surface. Meanwhile, the adaptive strategy significantly reduces the number of queries for both training and testing of our method, as fewer polyhedra need to be considered as candidates. Secondly, the interface computation, which is necessary for assigning graph edge weights, contributes to the longer running time of Points2Poly. In contrast, PolyGNN can reconstruct a building directly by inferring the polyhedral occupancy, benefiting from parallelization on the GPU.
### Applications and limitations
We have designed our framework to be generic. In principle, our proposed method can also be extended to handle other types of point clouds, such as photogrammetric ones. PolyGNN can also be used for the reconstruction of generic piecewise planar 3D objects beyond buildings.
For the evaluation to be orthogonal, we assume the availability of high-quality planar primitives extracted from point clouds. This assumption may not always be fulfilled with real-world measurements and therefore is considered a limitation of the proposed method and those of its kind. Figure 15 shows some failure cases. When reconstructing buildings with complex structures, PolyGNN encounters challenges in capturing fine details, such as intricate rooftop superstructures. This failure can be attributed to two factors. Firstly, the complexity of a building implies a larger and more intricate polyhedral embedding, which poses challenges to the network's prediction. Secondly, the training dataset predominantly consists of buildings with simple shapes, leading to an underrepresentation of complex structures. As a result, the network may have limited exposure to and understanding of complex architectural elements during training, contributing to the difficulties in capturing fine details during reconstruction.
## 6 Conclusion
We have introduced PolyGNN, a novel framework for urban building reconstruction with a polyhedron-based graph neural network. Instead of learning a continuous function with traditional deep implicit fields, our approach learns a piecewise planar occupancy function derived from polyhedral decomposition. We evaluated three sampling strategies for representing an arbitrary-shaped polyhedron within the neural network, where the skeleton variant exhibits superior performance. PolyGNN is end-to-end optimizable with simplicity and efficiency in design. Furthermore, we have trained PolyGNN on a large-scale synthetic building dataset comprising 500k+ buildings furnished with comprehensive polyhedral labels, and evaluated the transferability of our method across varied data types. Qualitative and quantitative results demonstrate the effectiveness of PolyGNN, particularly in terms of efficiency.
Finally, we emphasize that there is still a performance gap between synthetic and real-world point clouds. In future work, we plan to further bridge the gap, enabling learning-based reconstruction methods to overcome abstraction and leverage a vast volume of training data more effectively. We will also investigate how to incorporate additional geometric attributes to enrich the polyhedral graph and to incorporate plane extraction into the end-to-end neural architecture.
|
2307.14237 | Evolving Multi-Objective Neural Network Controllers for Robot Swarms | Many swarm robotics tasks consist of multiple conflicting objectives. This
research proposes a multi-objective evolutionary neural network approach to
developing controllers for swarms of robots. The swarm robot controllers are
trained in a low-fidelity Python simulator and then tested in a high-fidelity
simulated environment using Webots. Simulations are then conducted to test the
scalability of the evolved multi-objective robot controllers to environments
with a larger number of robots. The results presented demonstrate that the
proposed approach can effectively control each of the robots. The robot swarm
exhibits different behaviours as the weighting for each objective is adjusted.
The results also confirm that multi-objective neural network controllers
evolved in a low-fidelity simulator can be transferred to high-fidelity
simulated environments and that the controllers can scale to environments with
a larger number of robots without further retraining needed. | Karl Mason, Sabine Hauert | 2023-07-26T15:05:17Z | http://arxiv.org/abs/2307.14237v1 | # Evolving Multi-Objective Neural Network Controllers for Robot Swarms
###### Abstract
Many swarm robotics tasks consist of multiple conflicting objectives. This research proposes a multi-objective evolutionary neural network approach to developing controllers for swarms of robots. The swarm robot controllers are trained in a low-fidelity Python simulator and then tested in a high-fidelity simulated environment using Webots. Simulations are then conducted to test the scalability of the evolved multi-objective robot controllers to environments with a larger number of robots. The results presented demonstrate that the proposed approach can effectively control each of the robots. The robot swarm exhibits different behaviours as the weighting for each objective is adjusted. The results also confirm that multi-objective neural network controllers evolved in a low-fidelity simulator can be transferred to high-fidelity simulated environments and that the controllers can scale to environments with a larger number of robots without further retraining needed.
Swarm Robotics Evolutionary Robotics Neural Networks Natural Evolution Strategies Evolutionary Algorithms Multi-Objective
## 1 Introduction
Many robotics tasks consist of multiple objectives. For example, manufacturing robots must accomplish tasks quickly and accurately [3]. These are conflicting objectives. Similarly, multiple objectives are also present in swarm robotics tasks. Minimizing both movement time and energy costs is a primary example of this [14].
Current approaches to addressing these multi-objective robotic tasks often involve applying a multi-objective optimisation algorithm to optimise the actions of the robot for different objectives, e.g. for path planning [17, 18]. These approaches work well for determining the optimum paths under multiple objectives for a specific environment configuration. If the environment changes, e.g. if more robots are added to the environment, these optimisation based techniques must be reapplied to determine the new set of optimal paths under the new conditions. This requires additional computational time.
An alternative solution to this multi-objective robotics problem is presented in this paper using Neuroevolution, or evolutionary neural networks [19]. Neuroevolution involves applying evolutionary algorithms to train the parameters of neural networks to solve a machine learning task, e.g. play Atari games [16], energy forecasting [15], CPU utilization prediction [15] and economic dispatch [14]. A multi-objective evolutionary neural network approach is proposed for developing swarm robot controllers. This proposed approach to developing multi-objective swarm robot controllers has the benefit of evolving a control policy that can still effectively control the robots even if the environmental conditions change and will not require the computational cost associated with retraining the robot controller.
The research presented in this paper makes the following contributions:
1. To propose a Natural Evolution Strategies based evolutionary multi-object neural network approach for swarm robotics.
2. To investigate the transferability of multi-objective neural network controllers trained in a low-fidelity simulator to a high-fidelity simulator.
3. To determine if controllers trained in an environment with a small number of robots, can scale to larger robot swarms.
The structure of the paper is as follows. Section 2 will give an overview of the background literature relating to this research. Section 3 will outline the experimental methods, including simulator design, application of the proposed approach and experiments conducted. Section 4 will present the results of the simulations conducted. Finally, Section 5 will present the conclusions of the research described in the paper.
## 2 Background
### Swarm Robotics
Robot swarms typically have characteristics including: acting autonomously within their environment, homogeneity across robots, and utilizing local information only [23]. Robot swarms have many practical applications [23], e.g. search and rescue [1] and warehouse operations [16].
There have been a number of publications that have explored the application of machine learning to robot swarms, e.g. in 2020 Tolstaya et al. applied a graph neural network to learn to control robots in a swarm [15]. A 2021 paper by Dorigo et al. mentions the application of machine learning to robot swarms as one of the future directions of research in swarm robotics [17].
Research published recently in the literature has recognised that the task of robot path planning often consists of multiple objectives [14], e.g. to minimize the time taken to locate a target, to minimise chance of collisions, to maximize energy efficiency, etc. Each of these objectives are important and should be considered when determining the behaviour of the robot. This motivates the research presented in this paper to develop robot controllers that can take preferences for each objective as input to influence the behaviour of robot swarms.
Multiple studies have been published in the literature that apply evolutionary methods to swarm robotics. Birattari et al. provide a manifesto for automatic off-line design in the area of robot swarms [18]. This paper discusses existing applications of evolutionary neural networks to robot controllers. Floreano et al. provide a comprehensive account of applications of evolutionary computing to robotics [19]. Evolutionary methods have been successfully applied in swarm robotics tasks. Hauert et al. applied evolutionary neural networks to control individuals in a swarm of simulated Micro Air Vehicles (MAVs) [15]. Evolutionary methods have also been applied to swarm robotics by evolving behaviour trees [11, Jones et al. (2018)]. Recent studies have applied MAP-Elites for swarm robotics [10]. These studies demonstrate the effectiveness of evolutionary methods for swarm robotics tasks. The research outlined in this paper builds on these previous studies by extending evolutionary swarm robot controllers to multi-objective tasks using Natural Evolution Strategies.
There have been a number of studies published in the literature that explore multi-objective control in robot swarms. Mai et al. developed a Multi-Objective collective search strategy for robot swarms based on the Particle Swarm Optimisation algorithm [16]. Miao et al. proposed a Multi-objective region reaching controller for a swarm of robots [16]. These studies demonstrate the utility of multi-objective control and path planning for robot swarms. These studies do not consider the use of multi-objective neural network controllers for robot swarm control, as this research paper presents.
### Evolutionary Neural Networks
Neural networks are machine learning models that take inspiration from the brain. The field of Evolutionary Neural Networks (or Neuroevolution) utilizes evolutionary algorithms and principles to train the parameters of neural networks [16]. Target network outputs are not required when evolving neural networks, only a fitness function. Neuroevolution has been shown to be a competitive approach when compared to reinforcement learning algorithms [23, Mason and Grijalva (2021)].
There are many Neuroevolution algorithms that evolve the weights and architecture of the neural network, e.g. NeuroEvolution of Augmenting Topologies (NEAT) [23] and hyperNEAT [15]. Many studies implement methods such as evolutionary strategies to evolve only the weights of the network [Chen et al. (2019), Pourchot et al. (2018), Mason and Grijalva (2021)]. Covariance Matrix Adaption Evolutionary Strategy (CMA-ES) [Hansen et al. (2003)] and Natural Evolution Strategies (NES) [Wierstra et al. (2014)] are two well known evolutionary strategies. This research will use a variant of NES called Exponential Natural Evolution Strategies (xNES) [Glasmachers et al. (2010)] when evolving MO-NNs; it was selected for its strong performance on continuous black-box optimisation problems.
Algorithm 1 describes the xNES algorithm. The algorithm samples \(\lambda\) normally distributed solutions \(z_{i}\). These are used to calculate \(\lambda\) solutions \(x_{i}=\mu+\sigma\textbf{B}z_{i}\), based on the center of the search distribution \(\mu\), the normalized covariance factor \(\textbf{B}=\textbf{A}/\sigma\), and the scalar step size \(\sigma\). When the algorithm begins, **A** is initialized as the identity matrix and \(\sigma=\sqrt[d]{|\det(\textbf{A})|}\), where \(d\) is the number of dimensions.
After solutions are sampled, the natural gradients of the objective function are calculated with respect to \(\delta\), \(\textbf{M}\), \(\sigma\) and \(\textbf{B}\), where \(\textbf{M}\) is a \(d\times d\) exponential map used to represent the covariance matrix \(\textbf{C}=\textbf{A}\textbf{A}^{T}\), and \(\delta\) is the change in the center of the search distribution. These gradients are then used to update \(\mu\), \(\sigma\) and \(\textbf{B}\).
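For reference, a compact xNES iteration is sketched below. The rank-based utilities follow the standard choice from Glasmachers et al. (2010), while the population size and learning rates shown are illustrative defaults rather than the values used in this work.

```python
import numpy as np
from scipy.linalg import expm

def xnes_step(f, mu, sigma, B, lam=12, eta_mu=1.0, eta_sigma=0.1, eta_B=0.1):
    """One xNES iteration for maximizing f. mu: (d,), sigma: scalar, B: (d, d)."""
    d = mu.size
    Z = np.random.randn(lam, d)                    # z_i ~ N(0, I)
    X = mu + sigma * Z @ B.T                       # x_i = mu + sigma * B z_i
    order = np.argsort([-f(x) for x in X])         # best candidates first
    u = np.maximum(0.0, np.log(lam / 2 + 1) - np.log(np.arange(1, lam + 1)))
    u = u / u.sum() - 1.0 / lam                    # rank-based utilities
    Zs = Z[order]
    g_delta = u @ Zs                               # natural gradient w.r.t. the center
    g_M = sum(ui * (np.outer(z, z) - np.eye(d)) for ui, z in zip(u, Zs))
    g_sigma = np.trace(g_M) / d                    # step-size component
    g_B = g_M - g_sigma * np.eye(d)                # shape component
    mu = mu + eta_mu * sigma * B @ g_delta         # update center
    sigma = sigma * np.exp(0.5 * eta_sigma * g_sigma)
    B = B @ expm(0.5 * eta_B * g_B)                # multiplicative covariance update
    return mu, sigma, B
```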
## 3 Experimental Methods
### Evolving Multi-Objective Neural Networks
This research consists of evolving neural networks for a multi-objective swarm robotics task. The pseudocode in Algorithm 2 illustrates the MO-NN training process.
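A hedged Python sketch of this training loop is given below, based on the description in Section 3: xNES proposes candidate network weights, each candidate is copied to every robot in the swarm, and fitness is accumulated over a sweep of the objective weighting \(\Delta w\) (cf. Equations 1-4). The `xnes` and `simulator` interfaces are hypothetical placeholders, not an API from this work.

```python
def train_mo_nn(simulator, xnes, n_evaluations=20_000, t_max=30):
    """Sketch of the MO-NN training loop (Algorithm 2)."""
    while xnes.evaluations < n_evaluations:
        candidates = xnes.ask()                     # sample NN weight vectors x_i
        fitnesses = []
        for x in candidates:
            total = 0.0
            for dw in (0.0, 0.5, 1.0):              # w1 = 1 - dw, w2 = dw
                w1, w2 = 1.0 - dw, dw
                simulator.reset()
                for _ in range(t_max):              # 1 s time steps, 30 s episodes
                    simulator.step(weights=x, objective_weights=(w1, w2))
                    total += simulator.current_fitness(w1, w2)
            fitnesses.append(total)
        xnes.tell(candidates, fitnesses)            # update the search distribution
    return xnes.best
```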
### Simulator Design
A low-fidelity simulator was developed in Python. Each robot is equipped with four rangefinder sensors located at the front, back and on each side. Each robot can rotate \(\pm 45\deg\) and can only move forward up to a maximum velocity of \(2m/sec\). This simulator design is based on the DOTS swarm robot testbed [Jones et al. (2022)].
A high-fidelity simulator was also developed using Webots [Webots (n.d.), Michel (2004)]. This was used to test the evolved controllers. The parameters of the Webots simulator are the same as those of the Python simulator. Figure 1 illustrates the simulated robot and arena.
Both simulators model non-holonomic robot drive. The low-fidelity simulator does not simulate any physics. Collision detection is implemented in the low-fidelity simulator to prevent a robot from colliding with another robot or the environment boundary. Physics is simulated in the high-fidelity Webots simulator. Collisions are not detected/prevented in the high-fidelity simulator.
Figure 1: Simulated robot (a) and arena (b).
### Multi-Objective Neural Network Robot Controllers
The implementation of the neural network controller as a robot controller is illustrated in Figure 2. The robot senses its environment using 4 rangefinder sensors. These measure the distance between the sensor and the nearest object. These values are normalized and passed to the network as input. In addition to sensor measurements, two additional inputs are passed into the network representing the weightings assigned to each of the two objectives \([w_{1},w_{2}]\). The network then does a forward pass and gives two outputs, i.e. robot commands. These commands are the rotation angle \([-45\deg,45\deg]\) and the forward velocity \([0m/sec,2m/sec]\). The robot will move to a new position based on these commands.
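A minimal PyTorch sketch of this controller is shown below; the tanh/sigmoid output squashings used to bound the commands to \([-45\deg,45\deg]\) and \([0m/sec,2m/sec]\) are our assumption, since the activation functions are not stated in this section.

```python
import torch
import torch.nn as nn

class SwarmController(nn.Module):
    """6 inputs (4 range sensors + 2 objective weights) -> 5 hidden -> 2 outputs."""

    def __init__(self):
        super().__init__()
        self.hidden = nn.Linear(6, 5)
        self.out = nn.Linear(5, 2)

    def forward(self, sensors, w1, w2):
        # sensors: (4,) tensor of normalized rangefinder readings
        x = torch.cat([sensors, torch.tensor([w1, w2])])
        h = torch.tanh(self.hidden(x))
        y = self.out(h)
        rotation = 45.0 * torch.tanh(y[0])        # degrees, in [-45, 45]
        velocity = 2.0 * torch.sigmoid(y[1])      # m/sec, in [0, 2]
        return rotation, velocity
```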
Figure 3 illustrates the training process of the MO-NNs for the robot swarm. Homogeneity is a key property of robot swarms; therefore, the same network parameters are assigned to each NN controller for all robots in the swarm during each simulation.
The quality of each set of NN parameters \(x_{i}\) is measured using the fitness function outlined in Equation 1.
\[networkFit(NN(x_{i}))=\sum_{\Delta w=0}^{1}\sum_{t=1}^{t_{Max}}currentFit_{w, t}(NN(x_{i})) \tag{1}\]
where \(\Delta w\) is the change in objective weighting applied to \(w_{1}\) and \(w_{2}\). These are updated as \(w_{1}=1-\Delta w\) and \(w_{2}=\Delta w\)\(\therefore w_{1}+w_{2}=1\)\(\forall\)\(\Delta w\). The fitness at the current time-step for the current objective weights is calculated using Equation 2.
\[currentFit_{w,t}(NN(x_{i}))=-w_{1}\times Obj1_{w,t}(NN(x_{i}))+w_{2}\times Obj2_{w,t}(NN(x_{i})) \tag{2}\]
where \(w_{1}\) is the weighting of objective 1 (\(Obj1\)) and \(w_{2}\) is the weighting of objective 2 (\(Obj2\)). Objective 1 is to minimize the distance between robots and the origin (center of the arena), calculated using Equation 3. Objective 2 is to maximize the velocity of the robots, calculated using Equation 4. These two objectives were chosen as they are in direct conflict with one another. It should be noted here that \(Obj1\) is multiplied by \(-1\) as the overall optimisation problem is framed as a maximization problem.
\[Obj1_{w,t}(NN(x_{i}))=\sum_{r=1}^{numRobots}[|position_{x,r}|+|position_{y,r}|] \tag{3}\]
Figure 2: Neural network controller.
where \(position_{x,r}\) and \(position_{y,r}\) represent the x and y positions of robot r, respectively.
\[Obj2_{w,t}(NN(x_{i}))=\sum_{r=1}^{numRobots}[|velocity_{x,r}|+|velocity_{y,r}|] \tag{4}\]
where \(velocity_{x,r}\) and \(velocity_{y,r}\) represent the x and y velocities of robot r, respectively.
During training, each set of NN parameters \(x_{i}\) is evaluated for 30 seconds of simulated time (\(t_{Max}=30sec\)) for each increment of \(\Delta w\). The simulator time step is 1 second. The value of \(\Delta w\) is incremented between 0 and 1 in increments of 0.5. This is to ensure that each network is evaluated based on its ability to control the robot such that each objective is optimised with maximum weighting, i.e. when \(w_{1}=1,w_{2}=0\) and \(w_{1}=0,w_{2}=1\), and also its ability to control the robot such that each objective is weighted equally, i.e. when \(w_{1}=0.5,w_{2}=0.5\). This number of \(\Delta w\) increments was selected to minimize training time. A smaller \(\Delta w\) increment can be implemented during training but would increase training time. Note, a smaller \(\Delta w\) increment can be applied when evaluating the MO-NN, irrespective of the increment size during training.
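Putting Equations 1-4 together, the fitness evaluation can be written as below; `run_episode` is a hypothetical helper that rolls out the simulator for \(t_{Max}\) seconds and yields the robot positions and velocities at each step.

```python
import numpy as np

def current_fit(positions, velocities, w1, w2):
    """Eqs. (2)-(4) at a single time step; arrays are (numRobots, 2) x/y values."""
    obj1 = np.abs(positions).sum()    # summed L1 distances of the robots to the origin
    obj2 = np.abs(velocities).sum()   # summed L1 velocities of the robots
    return w1 * (-obj1) + w2 * obj2   # obj1 negated: the task is maximization

def network_fit(run_episode, t_max=30, dws=(0.0, 0.5, 1.0)):
    """Eq. (1): accumulate currentFit over time and over the weight settings."""
    total = 0.0
    for dw in dws:
        w1, w2 = 1.0 - dw, dw
        for positions, velocities in run_episode(w1, w2, t_max):
            total += current_fit(positions, velocities, w1, w2)
    return total
```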
The NN architecture implemented in this research consisted of 6 input nodes (4 sensor inputs and 2 objective weights), 1 hidden layer with 5 nodes, and an output layer with 2 nodes (1 for rotation, 1 for forward movement). The network was evolved over 20,000 evaluations.
### Experimental Setup
1. **Evolving Neural Network Controller**. This experiment consists of evolving the multi-objective neural network controller in the low-fidelity Python robot simulator, with the objective (Obj) preferences: Obj 1 - Maximize velocity; Obj 2 - Minimize distance to origin (center of arena).
2. **Deployment to High-Fidelity Webots Simulator**. The next experiment is to test the performance of the best-performing network evolved in the low-fidelity Python simulator by deploying the network to control robots in a more realistic high-fidelity Webots simulator.
Figure 3: Evolving neural network swarm robot controller.
3. **Evaluating Evolved Controller on Larger Swarm Sizes** The final experiment is to determine the scalability of the evolved MO-NN to a larger number of robots, specifically 5 and 10 robots.
## 4 Results
### Evolving Neural Network Controller in Low-Fidelity Python Simulator
The evolved MO-NNs were capable of controlling the robot swarm in the desired manner in the low-fidelity Python simulator. Figure 4 illustrates the trajectories for the evolved MO-NN.
The trajectories in Figure 4 refer to the training objectives: Objective 1 - Maximize velocity and Objective 2 - Minimize distance to origin (center of arena). It is clear from Figure 4 that the robot trajectories are significantly different when a high weighting is given to maximizing velocity compared to when a high weighting is given to minimizing distance to the origin. When a high weighting is given to maximizing velocity, the robots move in large circles around the arena. When a high weighting is given to minimizing distance to the origin, the robots move to the center of the arena and stay there. When an equal weighting is given to both objectives, the robots traverse the arena close to the center in smaller circles.
Figure 5 illustrates how each of the objective function evaluations of the evolved neural network's behaviour changes as the objective weighting changes. When a maximum weighting is given to the minimize-distance-to-origin objective, both the distance and velocity objective scores are lowest. Conversely, when maximizing velocity, the distance to the origin also increases, as the robots are moving with higher velocity around their environment. This graph illustrates how different behaviour can be observed from the swarm of robots using a single neural network by simply modifying the weighting for each objective function. No retraining is required for different objectives or different robots. This is the key advantage of the proposed MO-NN approach.
### Testing Evolved Controllers in High-Fidelity Webots Simulator
After training in the low-fidelity Python simulator, the evolved MO-NN robot controller was deployed to control robots implemented in the high-fidelity Webots simulator. The motivation for doing this was to test whether the behaviours of the evolved controllers have the potential to translate to simulators with more realistic physics without the need to adapt the motor commands.
Figure 4: Low-fidelity simulator robot trajectories: Maximize velocity objective (left). Equal objective preference (middle). Minimize distance to origin (right).
Figure 5: Pareto Front for evolving multi-objective neural network controller.
Two simulations were conducted. In the first simulation, the robots' objective preference was to minimize distance to the origin, i.e., all robots were passed an objective weighting of 1 for the minimize distance to the origin objective, and 0 for the maximize velocity objective. In the second simulation, the robots' objective preference was to maximize velocity. It was observed that the trained MO-NNs exhibited similar behaviour when tested in the Webots environment.
When minimizing distance to the origin, the robots moved much slower. The robots gradually made their way to the center of the arena and do not adjust their position much thereafter. Each robot continually rotates in order to sense more of its environment.
When maximizing velocity, the robots move continuously around the arena and do not stop at the center. There is a greater risk of collisions when moving with higher velocity. In order to reduce the severity of the collisions, the maximum wheel rotational speed was reduced from 80 radians/sec to 60 radians/sec. Without this speed reduction, robots collided with one another, overturned and remained immobilized. Note, this velocity clamping was applied in both simulations reported.
### Larger Robot Swarms in High-Fidelity Webots Simulator
The next set of simulations was conducted to establish whether the evolved MO-NN can scale to larger swarm sizes without retraining. The motivation for this was to test if the controllers evolved in an environment with fewer robots are robust enough to scale to environments with more robots and therefore a greater chance of collisions. In order to test this, simulations were conducted for robot swarm sizes of 5 and 10 robots, using the evolved MO-NN trained with a swarm size of 3.
It was observed that the robot swarm gathers at the center of the arena after 60 seconds for all swarm sizes when minimizing distance to the origin. Different behaviour was observed when maximizing velocity. After 60 seconds, the robots are dispersed around the arena as they are traversing the arena with higher velocity.
Figure 6 illustrates the spread in the distance to the origin and the velocity averaged over 10 robots at each second in the 10 robot swarm for 60 seconds. This graph clearly illustrates how the robots travel with significantly higher velocity when the weighting on the velocity objective is at its maximum, compared to when a maximum weighting is applied to the distance-to-origin objective. This difference is statistically significant when compared using the two-tailed Wilcoxon signed-rank test, with significance level \(\alpha=1\%\). Similarly, when comparing the distance to the origin at each second, it can be seen that the distance is significantly lower when maximizing the weighting for the distance objective compared to maximizing the velocity objective, with a significance level \(\alpha=1\%\).
Figure 6: Distribution of distance to origin (left) and velocity (right) when varying objective weighting.
Figure 7 presents a heatmap of the positions of all robots in the 10 robot swarm over 60 seconds when maximizing velocity (a) and minimizing distance to the origin (b). The figure on the left illustrates how the robots are more dispersed throughout the map when maximizing velocity. When minimizing distance to the origin, the robot positions are concentrated heavily in the center of the map. This is as expected.
Figure 8 presents the robot velocity (left) and distance to origin (right) over 60 seconds of simulation time when maximizing the weighting of each objective. Under both objective preferences, the average velocity of the swarm is low initially. When minimizing the distance to the origin, there is a large spike in velocity at time step 8. This corresponds to a significant reduction in the average distance to the origin. There are multiple smaller spikes in average velocity which further reduce the swarm's distance to the origin. When maximizing velocity, there is a similar large increase in velocity early in the simulation. The average velocity of the robots then stabilizes, with some oscillations thereafter.
## 5 Conclusion
This research proposed an evolutionary multi-objective (MO) neural network (NN) for robot swarm control. The MO-NN was evolved using a low-fidelity Python simulator in an environment with 3 robots. The controller was then tested in a high-fidelity simulated environment developed using Webots. The MO-NN controller was then evaluated for larger numbers of robots.
The primary findings of this research are:
Figure 8: 10 robot swarm average velocity (left) and distance to origin (right) over 60 seconds.
Figure 7: Position heatmaps of 10 robots over 60 seconds when maximizing velocity (a) and minimizing distance to origin (b). |
2303.09949 | Towards a Foundation Model for Neural Network Wavefunctions | Deep neural networks have become a highly accurate and powerful wavefunction
ansatz in combination with variational Monte Carlo methods for solving the
electronic Schr\"odinger equation. However, despite their success and favorable
scaling, these methods are still computationally too costly for wide adoption.
A significant obstacle is the requirement to optimize the wavefunction from
scratch for each new system, thus requiring long optimization. In this work, we
propose a novel neural network ansatz, which effectively maps uncorrelated,
computationally cheap Hartree-Fock orbitals, to correlated, high-accuracy
neural network orbitals. This ansatz is inherently capable of learning a single
wavefunction across multiple compounds and geometries, as we demonstrate by
successfully transferring a wavefunction model pre-trained on smaller fragments
to larger compounds. Furthermore, we provide ample experimental evidence to
support the idea that extensive pre-training of such a generalized
wavefunction model across different compounds and geometries could lead to a
foundation wavefunction model. Such a model could yield high-accuracy ab-initio
energies using only minimal computational effort for fine-tuning and evaluation
of observables. | Michael Scherbela, Leon Gerard, Philipp Grohs | 2023-03-17T16:03:10Z | http://arxiv.org/abs/2303.09949v1 | # Towards a Foundation Model for Neural Network Wavefunctions
###### Abstract
Deep neural networks have become a highly accurate and powerful wavefunction ansatz in combination with variational Monte Carlo methods for solving the electronic Schrodinger equation. However, despite their success and favorable scaling, these methods are still computationally too costly for wide adoption. A significant obstacle is the requirement to optimize the wavefunction from scratch for each new system, thus requiring long optimization. In this work, we propose a novel neural network ansatz, which effectively maps uncorrelated, computationally cheap Hartree-Fock orbitals, to correlated, high-accuracy neural network orbitals. This ansatz is inherently capable of learning a single wavefunction across multiple compounds and geometries, as we demonstrate by successfully transferring a wavefunction model pre-trained on smaller fragments to larger compounds. Furthermore, we provide ample experimental evidence to support the idea that extensive pre-training of such a generalized wavefunction model across different compounds and geometries could lead to a foundation wavefunction model. Such a model could yield high-accuracy ab-initio energies using only minimal computational effort for fine-tuning and evaluation of observables.
## 1 Introduction
Accurate predictions of quantum mechanical properties for molecules are of utmost importance for the development of new compounds, such as catalysts or pharmaceuticals. For each molecule the solution to the Schrodinger equation yields the wavefunction and electron density, and thus in principle gives complete access to its chemical properties. However, due to the curse of dimensionality, computing accurate approximations to the Schrodinger equation quickly becomes computationally intractable with an increasing number of particles. Recently, deep-learning-based Variational Monte Carlo (DL-VMC) methods have emerged as a high-accuracy approach with favorable scaling \(\mathcal{O}(N^{4})\) in the number of particles \(N\) [1]. These methods use a deep neural network as ansatz for the high-dimensional wavefunction, and minimize the energy of this ansatz to obtain the ground-state wavefunction. Based on two major architectures for the treatment of molecules in first quantization, PauliNet [1] and FermiNet [2], several improvements and applications have emerged. On the one hand, enhancements of architecture, optimization and overall approach have led to substantial improvements in accuracy or computational cost [3, 4, 5, 6, 7]. On the other hand, these methods have been adapted to many different systems and observables: model systems of solids [8, 9], real solids [10], energies and properties of individual molecules [1, 2, 11, 5], forces [12, 13], excited states [14] and potential energy surfaces [13, 15, 16]. Furthermore, similar methods have been developed and successfully applied to Hamiltonians in second quantization [17, 18].
We want to emphasize that DL-VMC is an ab-initio method, that does not require any input beyond the Hamiltonian, which is defined by the molecular geometry. This differentiates it from surrogate models, which are trained on results from ab-initio methods to either predict wavefunctions [19, 20] or observables [21].
Despite the improvements in DL-VMC, it has not yet been widely adopted, in part due to the high computational cost. While DL-VMC offers favorable scaling, the method suffers from a large prefactor, caused by an expensive optimization with potentially slow convergence towards accurate approximations. Furthermore, this optimization needs to be repeated for every new system, leading to prohibitively high computational cost for large-scale use. This can be partially overcome by sharing a single ansatz with identical parameters across different geometries of a compound, allowing more efficient computation of Potential Energy Surfaces (PES) [13, 15, 16]. However, these approaches have been limited to different geometries of a single compound and do not allow successful transfer to new compounds. A key reason for this limitation is that current architectures explicitly depend on the number of orbitals (and thus electrons) in a molecule. Besides potential generalization issues, this prevents a transfer of weights between different compounds simply because the shapes of the weight matrices differ for compounds of different size.
In this work we propose a novel neural network ansatz, which does not depend explicitly on the number of particles, allowing to optimize wavefunctions across multiple compounds with multiple different geometric conformations. We find, that our model exhibits strong generalization when transferring weights from small molecules to larger, similar molecules. In particular we find that our method achieves high accuracy for the important task of relative energies. Inspired by the success of foundation models in language [22] or vision [23, 24] - which achieve
high accuracy with minimal fine-tuning of an extensively pre-trained base-model - we train a first base-model for neural network wavefunctions.
We evaluate our pre-trained wavefunction model by performing few-shot predictions on chemically similar molecules (in-distribution) and disparate molecules (out-of-distribution). We find that our ansatz outperforms conventional high-accuracy methods such as CCSD(T)-ccpVTZ and that fine-tuning our pre-trained model reaches this accuracy \(\approx\)20x faster than optimizing a new model. When analyzing the accuracy as a function of pre-training resources, we find that results systematically and substantially improve by scaling up either the model size, data size or number of pre-training steps. These results could pave the way towards a foundation wavefunction model, to obtain high-accuracy ab-initio results of quantum mechanical properties using only minimal computational effort for fine-tuning and evaluation of observables.
Additionally we compare our results to GLOBE, a concurrent work [25], which proposes reparameterization of the wavefunction based on machine-learned, localized molecular orbitals. We find that our method in comparison achieves lower absolute energies, higher accuracy of relative energies and is better able to generalize across chemically different compounds.
## 2 Results
In the following, we briefly outline our approach and how it extends existing work in Sec. 2.1. We show the fundamental properties of our ansatz such as extensivity (Sec. 2.3) and equivariance with respect to the sign of reference orbitals (Sec. 2.4). We demonstrate the transferability of the ansatz when pre-training on small molecules and re-using it on larger, chemically similar molecules. We also compare its performance against GLOBE, a concurrent pre-print [25] in Sec. 2.5. Lastly, we present a first wavefunction base-model pre-trained on a large diverse dataset of 360 geometries and evaluate its downstream performance in Sec. 2.6.
### A multi-compound wavefunction ansatz
Existing high-accuracy ansatze for neural network wavefunctions all exhibit the following structure:
\[\mathbf{h}_{i}=h_{\theta}(\mathbf{r}_{i},\{\mathbf{r}\},\mathbf{R},\mathbf{Z}) \tag{1}\] \[\Phi_{ik}^{d}=\varphi_{dk}(\mathbf{r}_{i})\sum_{\nu=1}^{D_{\rm emb}}F _{k\nu}^{d}h_{i\nu}\] (2) \[\psi=\sum_{d=1}^{N_{\rm det}}\det\left[\Phi_{ik}^{d}\right]_{i,k= 1\ldots n_{\rm el}} \tag{3}\]
Eq. 1 computes a \(D_{\rm emb}\)-dimensional embedding of electron \(i\), by taking in information of all other particles, e.g. by using attention or message passing. Eq. 2 maps these high-dimensional embeddings onto \(n_{\rm el}\times N_{\rm det}\) orbitals (indexed by \(k\)), using trainable backflow matrices \(\mathbf{F}^{d}\) and typically trainable envelope functions \(\varphi_{dk}\). Eq. 3 evaluates the final wavefunction \(\psi\) as a sum of (Slater-)determinants of these orbitals, to ensure antisymmetry with respect to permutation of electrons.
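Concretely, a minimal sketch of how Eqs. 2 and 3 are evaluated, given electron embeddings \(h\), backflow matrices \(F\) and precomputed envelopes \(\varphi\), is shown below; the tensor shapes are our illustrative convention.

```python
import torch

def wavefunction(h, F, phi):
    """h: (n_el, D_emb) embeddings, F: (N_det, n_el, D_emb) backflows,
    phi: (N_det, n_el, n_el) envelopes with phi[d, k, i] = phi_dk(r_i)."""
    # Phi^d_{ik} = phi_{dk}(r_i) * sum_nu F^d_{k nu} h_{i nu}   (Eq. 2)
    Phi = phi.transpose(1, 2) * torch.einsum("dkv,iv->dik", F, h)
    return torch.linalg.det(Phi).sum()  # psi = sum_d det(Phi^d)   (Eq. 3)
```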
While this approach works well for the wavefunctions of a single compound, it is fundamentally unsuited to represent wavefunctions for multiple different compounds at once. The main problem lies in the matrices \(\mathbf{F}^{d}\), which are of shape \([n_{\rm el}\times D_{\rm emb}]\), and thus explicitly depend on the number of electrons. There are several potential options for how this challenge could be overcome. A
naive approach would be to generate a fixed number \(N_{\mathrm{orb}}>n_{\mathrm{el}}\) of orbitals and truncate the output to the required number of orbitals \(n_{\mathrm{el}}\), which may differ across molecules. While simple to implement, this approach is fundamentally limited to molecules with \(n_{\mathrm{el}}\leq N_{\mathrm{orb}}\). Another approach is to use separate matrices \(\mathbf{F}_{\mathcal{G}}^{d}\) for each molecule or geometry \(\mathcal{G}\), as was done in [13], but this approach too is fundamentally unable to represent wavefunctions for molecules that are larger than the ones found in the training set. A third approach would be to not generate all orbitals in a single pass, but to generate the orbitals sequentially in an auto-regressive manner, by conditioning each orbital on the previously generated orbitals. While this approach has been successful in other domains such as language processing, it suffers from inherently poor parallelization due to its sequential nature. A final approach - chosen in this work - is to replace the matrix \(\mathbf{F}\) with a trainable function \(f_{\theta}^{a}(\mathbf{c}_{Ik})\), which computes the backflows based on a descriptor \(\mathbf{c}_{Ik}\) of the orbital \(k\) to be generated:
\[h_{i\nu}=h_{\theta}(\mathbf{r}_{i},\{\mathbf{r}\},\{\mathbf{R}\},\{\mathbf{Z}\})_{\nu}\] \[\varphi_{\theta}^{d}(\mathbf{r}_{i},\mathbf{R}_{I},\mathbf{c}_{Ik})=\exp{(-|\mathbf{r}_{i}-\mathbf{R}_{I}|\,g_{\theta}^{s}(\mathbf{c}_{Ik})_{d})}\] \[\Phi_{ik}^{d}=\sum_{I=1}^{N_{\mathrm{nuc}}}\varphi_{\theta}^{d}(\mathbf{r}_{i},\mathbf{R}_{I},\mathbf{c}_{Ik})\sum_{\nu=1}^{D_{\mathrm{emb}}}f_{\theta}^{a}(\mathbf{c}_{Ik})_{d\nu}h_{i\nu}\]
While there are several potential descriptors \(\mathbf{c}_{Ik}\) for orbitals, one particularly natural choice is to use outputs of computationally cheap, conventional quantum chemistry methods such as Density Functional Theory or Hartree-Fock. We compute orbital features based on the expansion coefficients of a Hartree-Fock calculation, by using orbital localization and a graph convolutional network (GCN), as outlined in Sec 4.2. We then map these features to _transferable atomic orbitals (TAOs)_\(\Phi_{ik}^{d}\), using (anti-)symmetric functions \(f_{\theta}^{a}\) and \(g_{\theta}^{s}\) as illustrated in Fig. 1.
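One common construction for such (anti-)symmetric functions is to evaluate an ordinary network at \(\pm\mathbf{c}_{Ik}\) and take the difference or sum, which is how the sketch below enforces \(\Phi(-\mathbf{c}_{Ik})=-\Phi(\mathbf{c}_{Ik})\); the layer widths and the SiLU activation are illustrative assumptions, not the settings of this work.

```python
import torch
import torch.nn as nn

class TAO(nn.Module):
    """Sketch of a transferable atomic orbital built from orbital descriptors c_Ik."""

    def __init__(self, d_c, d_emb, n_det, width=64):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(d_c, width), nn.SiLU(),
                               nn.Linear(width, n_det * d_emb))
        self.g = nn.Sequential(nn.Linear(d_c, width), nn.SiLU(),
                               nn.Linear(width, n_det))
        self.n_det, self.d_emb = n_det, d_emb

    def backflow(self, c):            # antisymmetric: f_a(-c) = -f_a(c), f_a(0) = 0
        return (self.f(c) - self.f(-c)).reshape(-1, self.n_det, self.d_emb)

    def envelope_exponent(self, c):   # symmetric: g_s(-c) = g_s(c)
        return self.g(c) + self.g(-c)

    def forward(self, h_i, dist_iI, c_k):
        """h_i: (D_emb,) embedding, dist_iI: (N_nuc,) distances |r_i - R_I|,
        c_k: (N_nuc, d_c) descriptors. Returns Phi_ik for all N_det determinants;
        flipping the sign of c_k flips the orbital sign, as the envelope is even."""
        env = torch.exp(-dist_iI[:, None] * self.envelope_exponent(c_k))  # (N_nuc, n_det)
        bf = torch.einsum("Idv,v->Id", self.backflow(c_k), h_i)           # (N_nuc, n_det)
        return (env * bf).sum(dim=0)                                      # sum over nuclei
```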
### Properties of our ansatz
These TAOs fulfil many properties, which are desirable for a wavefunction ansatz:
* **Constant parameter count**: The number of parameters in the ansatz is independent of system size. In previous approaches [1, 2, 13] the number of parameters grows with the number of
Figure 1: Illustration of the Transferrable Atomic Orbitals, demonstrated on the C=C-bond of Ethene.
particles, making it impossible to use a single ansatz across systems of different sizes. In particular backflows and envelope exponents have typically been chosen as trainable parameters of shape [\(N_{\mathrm{orb}}\times N_{\mathrm{det}}\)]. In our ansatz the backflows \(\mathbf{F}\) are instead computed by a single function \(f_{\theta}\) from multiple inputs \(\mathbf{c}_{Ik}\).
* **Equivariant to sign of HF-orbital**: Orbitals of a HF-calculation are obtained as eigenvectors of a matrix and are thus determined only up to their sign (or their phase in the case of complex eigenvectors). We enforce that the functions \(f_{\theta}^{a}\), \(g_{\theta}^{s}\) are (anti-)symmetric with respect to \(\mathbf{c}_{Ik}\). Therefore our orbitals \(\Phi_{ik}^{d}\) are equivariant to a flip in the sign of the HF-orbitals used as inputs: \(\Phi(-\mathbf{c}_{Ik})=-\Phi(\mathbf{c}_{Ik})\). Therefore during supervised pre-training, the undetermined sign of the reference orbitals becomes irrelevant, leading to faster convergence as demonstrated in Sec. 2.4.
* **Locality**: When using localized HF-orbitals as input, the resulting TAOs are also localized. Localized HF-orbitals are orbitals which have non-zero orbital features \(\widetilde{\mathbf{c}}_{Ik}\) only on some subset of atoms. Since we enforce the backflow \(f_{\theta}^{a}\) to be antisymmetric (and thus \(f^{a}(\mathbf{0})=\mathbf{0}\)), the resulting TAOs have zero contribution from atoms \(I\) with \(\mathbf{c}_{Ik}=\mathbf{0}\).
* **High expressivity**: We empirically find that our ansatz is sufficiently expressive to model ground-state wavefunctions to high accuracy. This is in contrast to previous approaches which were based on incorporating ab-initio orbitals [1], which could not reach chemical accuracy even for small molecules.
### Size consistency of the ansatz
One design goal of the ansatz is to allow transfer of weights from small systems to larger systems. In particular, if a large system consists of many small previously seen fragments, one would hope to obtain an energy which corresponds approximately to the sum
Figure 2: Zero-shot transferability of the ansatz to chemically similar, larger systems. While Moon cannot successfully transfer to larger chains, our ansatz successfully predicts zero-shot energies for up to 2x larger molecules and achieves very high accuracy with little pre-training.
of the fragment energies. One simple test case are chains of equally spaced Hydrogen atoms of increasing length. These systems have been studied extensively using high-accuracy methods [26], because they are small systems which already show strong correlation and are thus challenging to solve. We test our method by pre-training our ansatz on chains of length 6 and 10, and then evaluating the model (with and without subsequent fine-tuning) for chain lengths between 2 and 28. Fig. 2 shows that our ansatz achieves very high zero-shot accuracy in the interpolation regime (\(n_{\mathrm{atoms}}\) = 10) and for extrapolation to slightly larger or slightly smaller chains (\(n_{\mathrm{atoms}}\) = 4, 12). Even when extrapolating to systems of twice the size (\(n_{\mathrm{atoms}}\) = 20), our method still outperforms a Hartree-Fock calculation and eventually converges to an energy close to the Hartree-Fock solution. Fine-tuning the pre-trained model for only 500 steps yields near perfect agreement with the specialized MRCI+Q method.
This good performance stands in stark contrast to other approaches such as GLOBE+FermiNet or GLOBE+Moon, studied in [25]: both GLOBE variants yield 5-6x higher errors in the interpolation regime and both converge to much higher energies for larger chains. While our approach yields Hartree-Fock-like energies for very long chains, GLOBE+FermiNet and GLOBE+Moon yield results that are outperformed even by assuming a chain of non-interacting H-atoms, which would yield an energy per atom of -0.5 Ha. For modest extrapolations (\(n\) = 12 to \(n\) = 20) our zero-shot results yield 3 - 20x lower errors than GLOBE+Moon.
### Equivariance with respect to HF-phase
Due to the (anti-)symmetrization of the TAOs, our orbitals are equivariant with respect to a change of sign of the Hartree Fock orbitals. Therefore, a sign change of the HF-orbitals during HF-pre-training has no effect on the
Figure 3: Accuracy when HF-pre-training against rotated H\({}_{2}\)O molecules, which contain a change of sign in the Hartree-Fock-p-orbitals of the Oxygen atom. Comparing a shared optimization of a backflow-based neural network wavefunction (Standard backflow) against TAOs. Top: HF-pre-training loss averaged over 20 geometries and the last 100 samples for each rotation angle. Bottom: Mean energy error vs. reference, averaged across all geometries.
optimization of the wavefunction. One test case to verify this behaviour is the rotation of a H\({}_{2}\)O molecule, where we consider a set of 20 rotations of the same geometry, leading to a change of sign in the p-orbitals of the Oxygen atom (cf. Fig. 3). We evaluate our proposed architecture and compare it against a naive approach, where we use a standard backflow matrix \(\boldsymbol{F}\) instead of a trainable, anti-symmetrized function \(f_{\theta}^{a}\). In Fig. 3 we can see a clear spike in the HF-pre-training loss at the position of the sign flip for the standard backflow-type architecture, causing slower convergence during the subsequent variational optimization. Although in this specific instance the orbital sign problem could also be overcome without our approach, by correcting the phase of each orbital to align the orbitals across geometries, such phase alignment is not possible in all circumstances. For example, there are geometry trajectories where the Berry phase prevents such solutions [27].
### Transfer to larger, chemically similar compounds
To test the generalization and transferability of our approach, we perform the following experiment: first, we train our ansatz on a dataset of multiple geometries of a single, small compound (e.g. 20 distorted geometries of Methane). For this training, we follow the usual procedure of supervised HF-pre-training and subsequent variational optimization as outlined in Sec. 4.1. After 64k variational optimization steps, we then re-use the weights for different geometries of a larger compound (e.g. distorted geometries of Ethene). We fine-tune the model on this new geometry dataset for a variable number of steps and plot the resulting energy errors in Fig. 4. We do not require supervised HF-pre-training on the new, larger dataset. We perform this experiment for 3 pairs of test systems: transferring from Hydrogen-chains with 6 atoms each to chains with 10 atoms each, transferring from Methane to Ethene, and transferring from Ethene to Cyclobutadiene.
We compare our results to the earlier DeepErwin approach [13], which only partially reused weights, and GLOBE, a concurrent preprint [25] which reuses all weights. To measure accuracy we compare two important metrics: first, the mean energy error (averaged across all geometries \(g\) of the test dataset) \(\frac{1}{N}\sum_{g}(E_{g}-E_{g}^{\mathrm{ref}})\), which reflects the method's accuracy for absolute energies; second, the maximum relative energy error \(\max_{g}(E_{g}-E_{g}^{\mathrm{ref}})-\min_{g}(E_{g}-E_{g}^{\mathrm{ref}})\), which reflects the method's consistency across a potential energy surface. Since different studies use different batch-sizes and different definitions of an epoch, we plot all results against the number of MCMC-samples used for variational optimization, which is very closely linked to computational cost.
Compared to other approaches, we find that our method yields substantially lower and more consistent energies. On the toy problem of H\({}_{6}\) to H\({}_{10}\) our approach and GLOBE reach the same accuracy, while DeepErwin converges to higher energies. For the actual molecules Ethene (C\({}_{2}\)H\({}_{4}\)) and Cyclobutadiene (C\({}_{4}\)H\({}_{4}\)) our approach reaches substantially lower energies and much more consistent potential energy surfaces. When inspecting the resulting potential energy surface for Ethene, we find that we obtain qualitatively similar results as DeepErwin, but obtain energies that are \(\approx\) 6 mHa lower (and thus more accurate). GLOBE, on the other hand, does not yield a qualitatively correct PES for this electronically challenging problem. It overestimates the energy barrier at 90\({}^{\circ}\) twist angle by \(\approx\) 50 mHa and yields a spurious local minimum at 10\({}^{\circ}\). We
observe similar results on the Cyclobutadiene geometries, where our approach yields energy differences that are in close agreement with the reference energies, while the GLOBE results overestimate the energy difference by \(\approx\) 20 mHa.
### Towards a first foundation model for neural network wavefunctions
While the experiments in Sec. 2.5 demonstrate the ability to pre-train our model and fine-tune it on a new system, the resulting pre-trained models are of little practical use, since they are only pre-trained on a single compound each and can thus not be expected to generalize to chemically different systems. To obtain a more diverse pre-training dataset, we compiled
Figure 4: Accuracy when pre-training the model on small compounds and reusing it for larger compounds. Top: Mean energy error vs reference, averaged across all geometries of the test set. Middle: Error of relative energies, measured as \(\max_{g}(E_{g}-E_{g}^{\mathrm{ref}})-\min_{g}(E_{g}-E_{g}^{\mathrm{ref}})\). Bottom: Final PES for the Ethene molecule for each method.
a dataset of 360 distorted geometries, spread across 18 different compounds. The dataset effectively enumerates all chemically plausible molecules with up to 18 electrons containing the elements H, C, N, and O. For details on the data generation see Appendix A. We pre-train a base-model for 500,000 steps on this diverse dataset and subsequently evaluate its performance, when computing Potential Energy Surfaces. We evaluate its error both for compounds that were in the pre-training dataset (with different geometries), as well as for new, larger, out-of-distribution compounds which were not present in the pre-training dataset. We compare the results against a baseline model, which uses the default method of supervised HF-pre-training and subsequent variational optimization.
Fig. 5 shows that fine-tuning this pre-trained model yields substantially lower energies than the usual optimization from a HF-pre-trained model. For example, when optimizing for 8k steps, we obtain 8x lower energy errors for large out-of-distribution compounds, and 12x lower energy errors for small in-distribution compounds. When evaluating the model for up to 32k steps, we find that for small molecules both approaches converge to the same energy. For large molecules the final energy error obtained by fine-tuning the base-model is 3x lower than that obtained by optimization of a HF-pre-trained model.
### Scaling behaviour
In many domains, increasing the amount of pre-training has led to substantially better results, even without qualitative changes to the architecture [28]. To investigate the scalability of our approach, we vary the three key choices along which one could increase the scale of pre-training: the size of the wavefunction model, the number of compounds and geometries present in the pre-training dataset, and the number of pre-training steps. Starting from a large model trained on 18x20 geometries for 256k pre-training steps, we independently vary each parameter. We test 3 different architecture sizes, with decreasing layer width and depth for the networks \(f_{\theta}\), \(g_{\theta}\), and GCN\({}_{\theta}\) (cf. Appendix C). We test 3 different training sets, with a decreasing number of compounds in the training set, with 20 geometries each (cf. Appendix A). Finally, we evaluate model checkpoints at different amounts of pre-training, ranging from 64k steps to 512k steps. Fig. 6 depicts the accuracy obtained by subsequently fine-tuning the resulting model for just 4000 steps on the evaluation set. In each case, increasing the scale of pre-training clearly
Figure 5: Fine-tuning of pre-trained foundation model (solid lines) vs. fine-tuning Hartree-Fock-pre-trained models (dashed lines) for 70 different geometries. Blue: Small compounds, with geometries similar to geometries in pre-training dataset. Orange: Larger compounds outside pre-training dataset.
improves evaluation results - both for the small in-distribution compounds, as well as for the larger out-of-distribution compounds. We find a strong dependence of the accuracy on the model size and the number of compounds in the pre-training dataset, and a weaker dependency on the number of pre-training steps. While our computational resources currently prohibit us from training at larger scale, the results indicate that our approach may already be sufficient to train an accurate multi-compound, multi-geometry foundation model for wavefunctions.
## 3 Discussion
This work presents an ansatz for deep-learning-based VMC, which can in principle be applied to molecules of arbitrary size. We demonstrate the favourable properties of our ansatz, such as extensivity, zero-shot prediction of wavefunctions for similar molecules (Sec. 2.3), invariance to the phase of orbitals (Sec. 2.4) and fast fine-tuning for larger, new molecules (Sec. 2.5). Most importantly, Sec. 2.6 is, to our knowledge, the first demonstration of a general wavefunction that has successfully been trained on a diverse dataset of compounds and geometries. We demonstrate that the dominating deep-learning paradigm of the last years - pre-training on large data and fine-tuning on specific problems - can also be applied to the difficult problem of wavefunctions. While previous attempts [13, 25] have failed to obtain high-accuracy energies from pre-trained neural network wavefunctions, we find that our approach yields accurate energies and does so at a fraction of the cost needed without pre-training. We furthermore demonstrate in Sec. 2.7 that results can be improved systematically by scaling up any aspect of the pre-training: model size, data size, or the number of pre-training steps.
Figure 6: Error when fine-tuning the pre-trained model for 4000 steps on small in-distribution geometries and larger out-of-distribution geometries.
Despite these promising results, there are many open questions and limitations which should be addressed in future work. First, we find that our ansatz currently does not fully match the accuracy of state-of-the-art single-geometry DL-VMC ansatze. While our approach consistently outperforms conventional variational methods such as MRCI or CCSD at a finite basis set, larger, computationally more expensive DL-VMC models can reach even lower energies. Exchanging our message-passing-based electron embedding with recent attention-based approaches [4] should lead to higher accuracy. Furthermore, we have made several deliberate design choices, each of which trades off expressivity (and thus potentially accuracy) for computational cost: We do not exchange information across orbitals and we base our orbitals on computationally cheap HF-calculations. Including attention or message passing across orbitals (e.g. similar to [25]), and substituting HF for a trainable, deep-learning-based model should further increase expressivity. While we currently use HF-orbitals due to their widespread use and low computational cost, our method does not rely on a specific orbital descriptor. We could substitute HF for a separate model such as PhysNet [20] or SchnOrb [29] to compute orbital descriptors \(\mathbf{c}_{Ik}\), leading to a fully end-to-end machine-learned wavefunction. Second, while we include useful physical priors such as locality, we do not yet use the invariance of the Hamiltonian with respect to rotations, inversions or spin-flips. E3-equivariant networks have been highly successful for neural network force-fields, but have not yet been applied to wavefunctions due to the hitherto unsolved problem of symmetry breaking [15]. Using HF-orbitals as symmetry breakers could open a direct avenue towards E3-equivariant neural network wavefunctions. Third, while we use locality of our orbitals as a useful prior, we do not yet use it to reduce computational cost. By enforcing sparsity of the localized HF-coefficients, one could limit the evaluation of orbitals to a few participating atoms, instead of all atoms in the molecule. While the concurrent GLOBE approach enforces its orbitals to be localized at a single position, our approach naturally lends itself to enforcing localization on a given number of atoms, allowing for a deliberate trade-off of accuracy vs. computational cost. Lastly, we observe that our method performs substantially better when dedicating more computational resources to the pre-training. While we are currently constrained in the amount of computational resources, we hope that future work will be able to scale up our approach. To facilitate this effort we open-source our code, dataset, as well as model parameters.
## 4 Methods
### Variational Monte Carlo
Considering the Born-Oppenheimer approximation, a molecule with \(n_{\mathrm{el}}\) electrons and \(N_{\mathrm{nuc}}\) nuclei can be described by the time-independent Schrodinger equation
\[\hat{H}\psi=E\psi \tag{4}\]
with the Hamiltonian
\[\hat{H}= -\frac{1}{2}\sum_{i}\nabla_{\mathbf{r}_{i}}^{2}+\sum_{i>j}\frac{1}{| \mathbf{r}_{i}-\mathbf{r}_{j}|}\] \[+\sum_{I>J}\frac{Z_{I}Z_{J}}{|\mathbf{R}_{I}-\mathbf{R}_{J}|}-\sum_{i,I} \frac{Z_{I}}{|\mathbf{r}_{i}-\mathbf{R}_{I}|} \tag{5}\]
By \(\mathbf{r}=(\mathbf{r}_{1},\dots,\mathbf{r}_{n_{\uparrow}},\dots,\mathbf{r}_{n_{\mathrm{el}}} )\in\mathbb{R}^{3\times n_{\mathrm{el}}}\) we denote the set of electron positions divided into \(n_{\uparrow}\) spin-up and \(n_{\downarrow}\) spin-down electrons. For the coordinates and charges of the nuclei we
write \(\mathbf{R}_{I}\), \(Z_{I}\), with \(I\in\{1,\ldots,N_{\mathrm{nuc}}\}\). The solution to the electronic Schrodinger equation \(\psi\) needs to fulfill the anti-symmetry property, i.e. \(\psi(\mathcal{P}\mathbf{r})=-\psi(\mathbf{r})\) for any permutation \(\mathcal{P}\) of two electrons of the same spin. Finding the groundstate wavefunction of a system corresponds to finding the solution to Eq. 4 with the lowest eigenvalue \(E_{0}\). Using the Rayleigh-Ritz principle, an approximate solution can be found through minimization of the loss
\[\mathcal{L}(\psi_{\theta})=\mathbb{E}_{\mathbf{r}\sim\psi_{\theta}^{2}(\mathbf{r})} \left[\frac{(\hat{H}\psi_{\theta})(\mathbf{r})}{\psi_{\theta}(\mathbf{r})}\right]\geq E _{0}, \tag{6}\]
using a parameterized trial wavefunction \(\psi_{\theta}\). The expectation value in Eq. 6 is computed by drawing samples \(\mathbf{r}\) from the unnormalized probability distribution \(\psi_{\theta}^{2}(\mathbf{r})\) using Markov Chain Monte Carlo (MCMC). The effect of the Hamiltonian on the wavefunction can be computed using automatic differentiation, and the loss is minimized using gradient-based optimization. A full calculation typically consists of three steps:
1. **Supervised HF-pre-training**: Minimization of the difference between the neural network ansatz and a reference wavefunction (e.g. a Hartree-Fock calculation) \(||\psi_{\theta}-\psi^{\mathrm{HF}}||\). This is the only part of the procedure which requires reference data, and ensures that the initial wavefunction roughly resembles the true groundstate. While this step is in principle not required, it substantially improves the stability of the subsequent variational optimization.
2. **Variational optimization**: Minimization of the energy (Eq. 6) by drawing samples from the wavefunction using MCMC, and optimizing the parameters \(\theta\) of the ansatz using gradient based optimization.
3. **Evaluation**: Evaluation of the energy by evaluating Eq. 6 without updating the parameters \(\theta\), to obtain unbiased estimates of the energy.
To obtain a single wavefunction for a dataset of multiple geometries or compounds, only minimal changes are required. During supervised and variational optimization, for each gradient step we pick one geometry from the dataset. We pick geometries either in a round-robin fashion, or based on the last computed energy variance for that geometry. We run the Metropolis-Hastings algorithm [30] for that geometry to draw electron positions \(\mathbf{r}\) and then evaluate energies and gradients. For each geometry we keep a distinct set of electron samples \(\mathbf{r}\).
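To make the sampling step concrete, the following minimal sketch draws electron positions from \(\psi_{\theta}^{2}\) with a symmetric Gaussian proposal and keeps one independent walker state per geometry; the toy Gaussian `log_psi2`, the number of electrons, and the step size are illustrative placeholders, not the settings or the ansatz used in this work.

```python
import numpy as np

def log_psi2(r):
    # Toy log|psi|^2 (an isotropic Gaussian), standing in for the neural
    # network wavefunction evaluated at electron positions r.
    return -np.sum(r ** 2)

def metropolis_hastings(r, n_steps=200, step_size=0.3, seed=None):
    """Draw samples r ~ psi^2 with a symmetric Gaussian proposal."""
    rng = np.random.default_rng(seed)
    log_p = log_psi2(r)
    for _ in range(n_steps):
        proposal = r + step_size * rng.standard_normal(r.shape)
        log_p_new = log_psi2(proposal)
        if np.log(rng.uniform()) < log_p_new - log_p:  # accept / reject
            r, log_p = proposal, log_p_new
    return r

# One independent walker state per geometry, updated in round-robin order.
walkers = {g: np.zeros((4, 3)) for g in ["geom_A", "geom_B"]}  # 4 electrons
for geometry in walkers:
    walkers[geometry] = metropolis_hastings(walkers[geometry], seed=0)
```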
### Obtaining orbital descriptors from Hartree-Fock
As discussed in Sec. 2.1, our ansatz effectively maps uncorrelated, low-accuracy Hartree-Fock orbitals to correlated, high-accuracy neural network orbitals. The first step in this approach is to obtain orbital descriptors \(\mathbf{c}_{k}\) for each orbital \(k\), based on a Hartree-Fock calculation.
The Hartree-Fock method uses a single determinant as ansatz, composed of single-particle orbitals \(\phi_{k}\):
\[\psi^{\mathrm{HF}}(\mathbf{r}_{1},\ldots,\mathbf{r}_{n_{\mathrm{el}}})= \det\left[\Phi_{ik}^{\mathrm{HF}}\right]_{i,k=1\ldots n_{\mathrm{ el}}} \tag{7}\] \[\Phi_{ik}^{\mathrm{HF}}:= \phi_{k}^{\mathrm{HF}}(\mathbf{r}_{i}) \tag{8}\]
For molecules, these orbitals are typically expanded in atom-centered basis-functions \(\mu(\mathbf{r})\), with \(N_{\mathrm{basis}}\) functions centered on each atom \(I\):
\[\phi_{k}^{\mathrm{HF}}(\mathbf{r})=\sum_{I=1}^{N_{\mathrm{nuc}}}\sum_{b=1}^{N_{ \mathrm{basis}}}\alpha_{k,Ib}\;\mu_{b}(\mathbf{r}-\mathbf{R}_{I}), \tag{9}\]
The coefficients \(\mathbf{\alpha}_{k}\) and the corresponding orbitals \(\phi_{k}^{\mathrm{HF}}(\mathbf{r})\) are obtained as solutions of an
eigenvalue problem and are typically delocalized, i.e. they have non-zero contributions from many atoms. However, since \(\det[U\Phi]=\det[U]\det[\Phi]\), the wavefunction is invariant under linear combination of orbitals by a matrix \(U\) with \(\det[U]=1\). One can thus choose orbital expansion coefficients
\[\widetilde{\alpha}_{k,Ib}=\sum_{k^{\prime}=1}^{N_{\mathrm{orb}}}U_{kk^{\prime}}\,\alpha_{k^{\prime},Ib} \tag{10}\]
corresponding to orbitals which are maximally localized according to some metric. Several different metrics and corresponding localization schemes, such as Foster-Boys [31] or Pipek-Mezey [32], have been proposed and are easily available as post-processing options in quantum chemistry codes. We use the Foster-Boys method as implemented in pySCF [33].
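As an illustration of this post-processing step, the following pySCF sketch runs a Hartree-Fock calculation and applies Foster-Boys localization to the occupied orbitals; the water geometry and the STO-3G basis are arbitrary choices for demonstration, not the settings used for the dataset.

```python
from pyscf import gto, scf, lo

# Hartree-Fock calculation for a small illustrative molecule.
mol = gto.M(atom="O 0 0 0; H 0 0 0.96; H 0.93 0 -0.24", basis="sto-3g")
mf = scf.RHF(mol).run()

# Foster-Boys localization of the occupied orbitals: returns a new
# coefficient matrix (rows: basis functions, columns: localized orbitals).
occ_coeff = mf.mo_coeff[:, mf.mo_occ > 0]
loc_coeff = lo.Boys(mol, occ_coeff).kernel()
print(loc_coeff.shape)
```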
Because the atom-wise orbital coefficients \(\widetilde{\boldsymbol{\alpha}}_{Ik}\) are fundamentally local in nature and can be insufficient to distinguish orbitals on their own, we use a fully connected graph convolutional neural network (GCN) to add context about the surrounding atoms. We interpret each atom as a node (with node features \(\widetilde{\boldsymbol{\alpha}}_{Ik}\)) and the 3D inter-atomic distance vector \(\boldsymbol{R}_{IJ}\) as edge features:
\[\boldsymbol{c}_{k}=\mathrm{GCN}_{\theta}\left(\{\widetilde{\boldsymbol{\alpha }}_{Ik}\}_{I=1\ldots N_{\mathrm{nuc}}},\{\boldsymbol{R}_{IJ}\}_{I,J=1\ldots N _{\mathrm{nuc}}}\right)\]
We embed the edge features using a cartesian product of Gaussian basis functions of the distance \(R_{IJ}\) and the concatenation of the 3D-distance vector with the constant 1:
\[\widetilde{\boldsymbol{c}}_{IJ} =\exp\left(-\frac{(R_{IJ}-\mu_{n})^{2}}{2\sigma_{n}^{2}}\right) \otimes[1|\boldsymbol{R}_{IJ}]\] \[\boldsymbol{e}_{IJ}^{0} =\mathrm{MLP}(\widetilde{\boldsymbol{c}}_{IJ})\] \[\boldsymbol{c}_{I}^{0} =\boldsymbol{\alpha}_{I}\]
Each layer \(l\) of the GCN consists of the following update rules
\[\boldsymbol{u}_{Ik}^{l} =\sum_{J}\boldsymbol{c}_{Jk}^{l}\odot\left(\boldsymbol{W}_{g}^{ l}\boldsymbol{e}_{IJ}^{l}\right),\] \[\boldsymbol{c}_{Ik}^{l+1} =\sigma(\boldsymbol{W}_{v}^{l}\boldsymbol{c}_{Ik}^{l}+\boldsymbol{W}_ {u}^{l}\boldsymbol{u}_{Ik}^{l}),\]
with trainable weight matrices \(\boldsymbol{W}_{g}^{l}\), \(\boldsymbol{W}_{v}^{l}\), \(\boldsymbol{W}_{u}^{l}\) and the SiLU activation function \(\sigma\). After \(L\) iterations we use the final outputs as orbitals features:
\[\boldsymbol{c}_{Ik}:=\boldsymbol{c}_{Ik}^{L}\]
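A minimal NumPy sketch of these update rules might look as follows; for brevity, the initial edge MLP is collapsed into the per-layer linear map \(W_g\), the edge features are kept fixed across layers, and all shapes and weights are illustrative assumptions.

```python
import numpy as np

def silu(x):
    return x / (1.0 + np.exp(-x))

def gcn_orbital_descriptors(alpha, R, W_g, W_v, W_u, mu, sigma):
    """alpha: (N_nuc, N_orb, F) localized coefficients; R: (N_nuc, 3).
    Returns orbital descriptors c_{Ik} of shape (N_nuc, N_orb, F)."""
    diff = R[:, None, :] - R[None, :, :]                  # R_IJ, (N, N, 3)
    dist = np.linalg.norm(diff, axis=-1)                  # (N, N)
    rbf = np.exp(-(dist[..., None] - mu) ** 2 / (2 * sigma ** 2))
    geom = np.concatenate([np.ones_like(dist)[..., None], diff], axis=-1)
    # Outer product of the RBF expansion with [1 | R_IJ], then flattened.
    e = (rbf[..., :, None] * geom[..., None, :]).reshape(*dist.shape, -1)
    c = alpha
    for l in range(len(W_g)):
        gate = e @ W_g[l]                                  # W_g e_IJ
        u = np.einsum("jkf,ijf->ikf", c, gate)             # sum over J
        c = silu(c @ W_v[l] + u @ W_u[l])
    return c

rng = np.random.default_rng(0)
N, K, F, n_rbf = 3, 5, 8, 6
c = gcn_orbital_descriptors(
    rng.normal(size=(N, K, F)), rng.normal(size=(N, 3)),
    W_g=[0.1 * rng.normal(size=(4 * n_rbf, F)) for _ in range(2)],
    W_v=[np.eye(F) for _ in range(2)], W_u=[0.1 * np.eye(F) for _ in range(2)],
    mu=np.linspace(0.0, 3.0, n_rbf), sigma=0.5)
```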
### Mapping orbital descriptors to wavefunctions
To obtain entry \(\Phi_{ik}\) of the Slater determinant, we combine a high-dimensional electron embedding \(\boldsymbol{h}_{i}\) with a function of the orbital descriptor \(\boldsymbol{c}_{Ik}\):
\[h_{i\nu}=h_{\theta}(\boldsymbol{r}_{i},\{\boldsymbol{r}\},\{ \boldsymbol{R}\},\{\boldsymbol{Z}\})_{\nu}\] \[\varphi_{\theta}^{d}(\boldsymbol{r}_{i},\boldsymbol{R}_{I}, \boldsymbol{c}_{Ik})=\exp\left(-|\boldsymbol{r}_{i}-\boldsymbol{R}_{I}|\,g_{ \theta}^{s}(\boldsymbol{c}_{Ik})_{d}\right)\] \[\Phi_{ik}^{d}=\sum_{I=1}^{N_{\mathrm{nuc}}}\varphi_{\theta}^{d}( \boldsymbol{r}_{i},\boldsymbol{R}_{I},\boldsymbol{c}_{Ik})\sum_{\nu=1}^{D_{ \mathrm{emb}}}f_{\theta}^{a}(\boldsymbol{c}_{Ik})_{d\nu}h_{i\nu}\]
The functions \(\mathrm{GCN}_{\theta}^{a}\), \(f_{\theta}^{a}\), and \(g_{\theta}^{s}\) are trainable functions, which are enforced to be (anti-)symmetric with respect to change in sign of their argument \(\boldsymbol{c}\):
\[\text{Symmetric }g_{\theta}^{s}\text{:}\] \[g_{\theta}^{s}(\boldsymbol{c}):=g_{\theta}(\boldsymbol{c})+g_{ \theta}(-\boldsymbol{c})\] \[\text{Antisymm. }f_{\theta}^{a}\text{:}\] \[f_{\theta}^{a}(\boldsymbol{c}):=f_{\theta}(\boldsymbol{c})-f_{ \theta}(-\boldsymbol{c})\] \[\text{Antisymm. }\mathrm{GCN}_{\theta}^{a}\text{:}\] \[\mathrm{GCN}_{\theta}^{a}(\boldsymbol{\alpha},\boldsymbol{R}):= \mathrm{GCN}_{\theta}(\boldsymbol{\alpha},\boldsymbol{R})-\mathrm{GCN}_{\theta }(-\boldsymbol{\alpha},\boldsymbol{R})\]
To obtain electron embeddings \(\boldsymbol{h}_{i}\) we use the message-passing architecture outlined in [5], which is invariant with respect to permutation of electrons of the same spin, or the permutation of ions.
\[\mathcal{G}=\{(\boldsymbol{R}_{I},Z_{I})\}_{I=1\ldots N_{\mathrm{ nuc}}}\] \[\mathcal{E}_{\uparrow}=\{\boldsymbol{r}_{i}\}_{i=1\ldots n_{ \uparrow}}\] \[\mathcal{E}_{\downarrow}=\{\boldsymbol{r}_{i}\}_{i=n_{\uparrow}+1 \ldots n_{\mathrm{el}}}\] \[\boldsymbol{h}_{i}=h_{\theta}^{\mathrm{embed}}(\boldsymbol{r}_{i },\mathcal{G},\mathcal{E}_{\uparrow},\mathcal{E}_{\downarrow})\]
Note that during training, all samples in a batch come from the same geometry, and thus have the same values for \(\mathbf{R}\), \(\mathbf{Z}\), and \(\widetilde{\mathbf{\alpha}}\). While the embedding network \(h^{\text{embed}}_{\theta}\), needs to be re-evaluated for every sample, the networks GCN\({}_{\theta}\), \(f_{\theta}\), and \(g_{\theta}\) only need to be evaluated once per batch, substantially reducing their impact on computational cost.
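The following sketch illustrates this caching pattern: the tensors `f_c` and `g_c` (the outputs of \(f_{\theta}^{a}\) and \(g_{\theta}^{s}\) on the orbital descriptors) are computed once per batch and reused for every electron sample; all shapes, including the determinant count, are illustrative assumptions.

```python
import numpy as np

def assemble_slater_matrix(r, R, h, f_c, g_c):
    """Phi[d, i, k] for determinant d, electron i, orbital k.
    h: (n_el, D_emb) electron embeddings (re-evaluated per sample);
    f_c: (N_nuc, K, D_det, D_emb) and g_c: (N_nuc, K, D_det) are the cached
    outputs of f and g on the orbital descriptors (constant per batch)."""
    dist = np.linalg.norm(r[:, None, :] - R[None, :, :], axis=-1)
    env = np.exp(-dist[:, :, None, None] * g_c[None])    # exponential envelopes
    proj = np.einsum("IkdM,iM->iIkd", f_c, h)            # f(c_Ik) h_i
    return np.einsum("iIkd,iIkd->dik", env, proj)        # sum over nuclei I

rng = np.random.default_rng(0)
n_el, N_nuc, K, D_det, D_emb = 4, 2, 4, 2, 16
f_c = rng.normal(size=(N_nuc, K, D_det, D_emb))          # cached once per batch
g_c = rng.uniform(0.5, 2.0, size=(N_nuc, K, D_det))      # cached once per batch
phi = assemble_slater_matrix(rng.normal(size=(n_el, 3)),
                             rng.normal(size=(N_nuc, 3)),
                             rng.normal(size=(n_el, D_emb)), f_c, g_c)
```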
## Acknowledgements
We gratefully acknowledge financial support from the following grants: Austrian Science Fund FWF Project I 3403 (P.G.), WWTF-ICT19-041 (L.G.). The computational results have been achieved using the Vienna Scientific Cluster (VSC). The funders had no role in study design, data collection and analysis, decision to publish or preparation of the manuscript. Additionally, we thank Nicholas Gao for providing his results and data, Ruard van Workum for initial work on the python implementation for multi-compound optimization and Jan Hermann for fruitful discussions.
## Author contributions
MS, LG, and PG conceived the overall idea. MS conceived and implemented the ansatz, built the dataset and designed the experiments. LG gave input on the ansatz and worked on implementation. MS and LG performed the experiments. MS and LG wrote the manuscript with input, supervision and funding from PG.
|
2310.07684 | Hypergraph Neural Networks through the Lens of Message Passing: A Common
Perspective to Homophily and Architecture Design | Most of the current hypergraph learning methodologies and benchmarking
datasets in the hypergraph realm are obtained by lifting procedures from their
graph analogs, leading to overshadowing specific characteristics of
hypergraphs. This paper attempts to confront some pending questions in that
regard: Q1 Can the concept of homophily play a crucial role in Hypergraph
Neural Networks (HNNs)? Q2 Is there room for improving current HNN
architectures by carefully addressing specific characteristics of higher-order
networks? Q3 Do existing datasets provide a meaningful benchmark for HNNs? To
address them, we first introduce a novel conceptualization of homophily in
higher-order networks based on a Message Passing (MP) scheme, unifying both the
analytical examination and the modeling of higher-order networks. Further, we
investigate some natural, yet mostly unexplored, strategies for processing
higher-order structures within HNNs such as keeping hyperedge-dependent node
representations, or performing node/hyperedge stochastic samplings, leading us
to the most general MP formulation up to date -MultiSet-, as well as to an
original architecture design, MultiSetMixer. Finally, we conduct an extensive
set of experiments that contextualize our proposals and successfully provide
insights about our inquiries. | Lev Telyatnikov, Maria Sofia Bucarelli, Guillermo Bernardez, Olga Zaghen, Simone Scardapane, Pietro Lio | 2023-10-11T17:35:20Z | http://arxiv.org/abs/2310.07684v2 | Hypergraph Neural Networks through the Lens of message passing: A Common Perspective to Homophily and Architecture Design
###### Abstract
Most of the current hypergraph learning methodologies and benchmarking datasets in the hypergraph realm are obtained by _lifting_ procedures from their graph analogs, simultaneously leading to overshadowing hypergraph network foundations. This paper attempts to confront some pending questions in that regard: Can the concept of homophily play a crucial role in Hypergraph Neural Networks (HGNNs), similar to its significance in graph-based research? Is there room for improving current hypergraph architectures and methodologies? (e.g. by carefully addressing the specific characteristics of higher-order networks) Do existing datasets provide a meaningful benchmark for HGNNs? Diving into the details, this paper proposes a novel conceptualization of homophily in higher-order networks based on a message passing scheme; this approach harmonizes the analytical frameworks of datasets and architectures, offering a unified perspective for exploring and interpreting complex, higher-order network structures and dynamics. Further, we propose MultiSet, a novel message passing framework that redefines HGNNs by allowing hyperedge-dependent node representations, as well as introduce a novel architecture -MultiSetMixer- that leverages a new hyperedge sampling strategy. Finally, we provide an extensive set of experiments that contextualize our proposals and lead to valuable insights in hypergraph representation learning.
## 1 Introduction
Hypergraph learning techniques have rapidly grown in recent years, demonstrating their effectiveness in processing higher-order interactions in numerous fields, spanning from recommender systems (Yu et al., 2021; Zheng et al., 2018; La Gatta et al., 2022), to bioinformatics (Zhang et al., 2018; Yadati et al., 2020; Klamt et al., 2009) and computer vision (Li et al., 2022; Xu et al., 2022; Gao et al., 2012; Yin et al., 2017; Kim et al., 2011). However, so far, the development of HyperGraph Neural Networks (HGNNs) has been largely influenced by the well-established Graph Neural Network (GNN) field. In fact, most of the current methodologies and benchmarking datasets in the hypergraph realm are obtained by _lifting_ procedures from their graph counterparts.
Drawing inspiration from graph-based models has significantly propelled the advancement of hypergraph research (Feng et al., 2019; Yadati et al., 2019; Chien et al., 2022), and it has simultaneously led to overshadowing hypergraph network foundations. We argue that it is now time to address fundamental questions in order to pave the way for further innovative ideas in the field. In that regard, this study explores some of these open questions to understand better current HGNN architectures and benchmarking datasets. Can the concept of homophily play a crucial role in HGNNs, similar to its significance in graph-based research? Given that current HGNNs are predominantly extensions of GNN architectures adapted to the hypergraph domain, are these extended methodologies suitable, or should we explore new strategies tailored specifically for handling hypergraph-based data? Are the existing hypergraph benchmarking datasets truly _meaningful_ and representative enough to draw robust and valid conclusions?
To begin with, we explore how the concept of homophily can be characterized in complex, higher-order networks. Notably, there are many ways of characterizing homophily in hypergraphs -such as the distribution of node features, the analogous distribution of the labels, or the group connectivity similarity (as already discussed in (Veldt et al., 2023)). In particular, this work places the _node class distribution_ at the core of the analysis, and introduces a novel definition of homophily that relies on a message passing scheme. Interestingly, this enables us to analyze both hypergraph datasets and architecture designs from the same perspective. In fact, we reckon that this unified message passing framework has the potential to inspire the development of meaningful contributions for processing higher-order relationships more effectively.
Next, we study state-of-the-art HGNN architectures and introduce a new framework called MultiSet. We demonstrate that MultiSet generalizes most existing frameworks for HGNNs, including AllSet (Chien et al., 2022) and UniGCNII (Huang and Yang, 2021). Our framework presents an innovative approach to message passing, where multiple hyperedge-dependent representations of nodes are enabled. Then, we introduce novel methodologies to process hypergraphs -including MultiSetMixer, a new HGNN architecture based on a particular implementation of a MultiSet layer. In these implementations, we introduce a novel connectivity-based mini-batching strategy capable of processing large hyperedges and discuss the intriguing property of natural connectivity-based distribution shifts.
Last, but not least, we provide an extensive set of experiments that, driven by the general questions stated above, aim to gain a better understanding on fundamental aspects of hypergraph representation learning. In fact, the obtained results not only help us contextualize the proposals introduced in this work, but indeed offer valuable insights that might help improve future hypergraph approaches.
## 2 Related Works
Homophily in hypergraphs.Homophily measures are typically defined for graph models and consider only pairwise relationships. In the context of Graph Neural Networks (GNNs), many of the current models implicitly use the homophily assumption, which is shown to be crucial for achieving a robust performance with relational data (Zhou et al., 2020; Chien et al., 2020; Halcrow et al., 2020). Nevertheless, despite the pivotal role that homophily plays in graph representation learning, its hypergraph counterpart mainly remains unexplored. In fact, to the best of our knowledge, Veldt et al. (2023) is the only work that faces the challenge of defining homophily in higher-order networks. Veldt et al. (2023) introduces a framework in which hypergraphs are used to quantify homophily from group interactions; however, the definition of homophily is restricted to uniform hypergraphs -i.e. where all hyperedges have exactly the same size (more details in Section 3). This represents a hard assumption that complicates its applicability to most of the current hypergraph datasets.
Hypergraph Neural Networks.The work of Chien et al. (2022) introduced AllSet, a general framework to describe HGNNs through a two-step message passing based mechanism, and demonstrated that most of the current hypergraph models are special instances of their formulation, based on the composition of two learnable permutation invariant functions that transmit information from nodes to hyperedges, and back from hyperedges to nodes. In particular, AllSet can be seen as a generalization of the most commonly used HGNNs, including all clique expansion based (CE) methods, HGNN (Feng et al., 2019), HNHN (Dong et al., 2020), HCHA (Bai et al., 2021), HyperSAGE (Arya et al., 2020) and HyperGCN (Yadati et al., 2019). Chien et al. (2022) also proposes two novel AllSet-like learnable layers: the first one -AllDeepSet- exploits Deep Set (Zaheer et al., 2017), and the second one -AllSetTransformer- Set Transformer (Lee et al., 2019), both of them achieving state-of-the-art results in the most common hypergraph benchmarking datasets. Concurrent to AllSet, the work of Huang and Yang (2021) also aimed at designing a common framework for graph and hypergraph NNs, and its more advanced UniGCNII method leverages initial residual connections and identity mappings in the hyperedge-to-node propagation to address over-smoothing issues; notably, UniGCNII does not fall under the AllSet notation due to these residual connections. With Chien et al. (2022) and Huang and Yang (2021) being the most relevant ones to our work, we extend this review in Appendix A.
**Notation.** A hypergraph is an ordered pair of sets \(\mathcal{G}=(\mathcal{V},\mathcal{E})\), where \(\mathcal{V}\) is the set of nodes and \(\mathcal{E}\) is the set of hyperedges. Each hyperedge \(e\in\mathcal{E}\) is a subset of \(\mathcal{V}\), i.e., \(e\subseteq\mathcal{V}\). A hypergraph is a generalization of the concept of a graph where (hyper)edges can connect more than two nodes. A vertex \(v\) and a hyperedge \(e\) are said to be incident if \(v\in e\). For each node \(v\), we denote its class by \(y_{v}\), and by \(\mathcal{E}_{v}=\{e\in\mathcal{E}:v\in e\}\) the subset of hyperedges in which it is contained, with \(d_{v}=|\mathcal{E}_{v}|\) depicting the node degree. The set of classes of the hypergraph is represented by \(\mathcal{C}=\{c_{i}\}_{i=1}^{|\mathcal{C}|}\).
## 3 Defining and measuring homophily in hypergraphs
Homophily is a graph property that describes the tendency for edges to connect nodes that are similar (Moody, 2001; Shrum et al., 1988; Verbrugge, 1983). Currently, the most common measure of graph homophily is the proportion of edges that connect nodes of the same class. In pairwise relationships, a high degree of network homophily tends to create a network with communities, based on nodes' class, that are highly connected within each other and poorly connected to the outside. Extending the concept of homophily to higher-order interactions is not straightforward, but it becomes crucial in order to avoid discarding valuable information about the composition of groups in which individuals participate. In this Section, we recap the general notion of higher-order homophily for \(k\)-uniform hypergraphs introduced in (Veldt et al., 2023) and present a novel propagation-based homophily measure which is applicable for general, non-uniform hypergraphs. In essence, the score proposed in Veldt et al. (2023) tends to primarily assess the composition of hyperedges within the graph by quantifying the distribution of classes among hyperedges. In contrast, our definition places a greater emphasis on capturing the interconnections between different hyperedges by the exchange of information between nodes following the message passing scheme.
\(k\)-uniform HomophilyVeldt et al. (2023) defines general higher-order homophily for \(k\)-uniform hypergraphs \(G_{k}=(\mathcal{V},\mathcal{E}_{k})\) which we refer to as _\(k\)-uniform homophily_. The type \(t\)-affinity score, for each \(t\in\{1,\ldots,k\}\), indicates the likelihood of a node belonging to class \(c\) participating in hyperedges in which exactly \(t\) group members belong to class \(c\). The authors introduce a _baseline score_ that measures the probability that a class-\(c\) node is in a hyperedge where \(t\) members are from class \(c\), given that the other \(k-1\) nodes were chosen uniformly at random. The \(k\)-uniform hypergraph homophily measure can be expressed as a ratio of affinity and baseline scores, with a ratio value of 1 indicating that the group is formed uniformly at random, while any other number indicates that group interactions are either overexpressed or underexpressed for class \(c\). Note that non-uniform hypergraphs cannot directly be evaluated with \(k\)-uniform homophily scores; instead, the corresponding initial hyperedge set \(\mathcal{E}\) has to be restricted to particular \(k\)-uniform hyperedges, which are then processed separately for each value of \(k\). The detailed formulation of homophily measures and corresponding plots for each dataset can be found in Appendix J.
Message Passing HomophilyWe present a novel two-step message passing homophily measure that, unlike the one proposed by Veldt et al. (2023), does not assume a \(k\)-uniform hypergraph structure. Furthermore, the proposed measure enables the definition of a score for each node and hyperedge for any neighborhood resolution, i.e., the connectivity of the hypergraph can be explicitly investigated. Our homophily definition follows the two-step message passing mechanism starting from the hyperedges of the hypergraph. Thus, given an edge \(e\), we define the 0-level hyperedge homophily \(h_{e}^{0}(c)\) as the fraction of nodes within each hyperedge that belong to class \(c\), i.e.
\[h_{e}^{0}(c)=\frac{1}{|e|}\sum_{v\in e}\mathds{1}_{y_{v}=c}. \tag{1}\]
This score describes how homophilic the initial connectivity is with respect to class \(c\). By computing the score for every class \(c_{i}\in\mathcal{C}\) we obtain a categorical distribution for each hyperedge \(e\in\mathcal{E}\), i.e. \(h_{e}^{0}=(h_{e}^{0}(c_{1}),\ldots,h_{e}^{0}(c_{|\mathcal{C}|}))\). We can then use this 0-level homophily information as a starting point to calculate higher-level homophily measurements for both nodes and hyperedges through the two-step message passing approach. Formally, we define the \(t\)-level homophily score as
\[h_{v}^{t}(c)=\texttt{AGG}_{\mathcal{E}}\left(\{h_{e}^{t-1}(c)\}_{e\in \mathcal{E}_{v}}\right),\qquad h_{e}^{t}(c)=\texttt{AGG}_{\mathcal{V}}\left(\{h_{v}^{t}(c)\}_{v\in e}\right), \tag{2}\]
where \(\texttt{AGG}_{\mathcal{E}}\) and \(\texttt{AGG}_{\mathcal{V}}\) are functions that aggregate edge and node homophily scores, respectively. In our implementation, we considered the mean operation for both aggregations.
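A minimal sketch of this computation, using the mean for both aggregations as above, could look as follows; assigning zero scores to isolated nodes is an illustrative choice not specified in the text.

```python
import numpy as np

def message_passing_homophily(hyperedges, labels, n_classes, t_max=10):
    """hyperedges: list of node-id lists; labels: (n_nodes,) int array.
    Returns h_v^t(y_v) for t = 1..t_max as a (t_max, n_nodes) array."""
    labels = np.asarray(labels)
    n_nodes = len(labels)
    onehot = np.eye(n_classes)[labels]
    h_e = np.stack([onehot[e].mean(0) for e in hyperedges])  # Eq. (1)
    incident = [[] for _ in range(n_nodes)]
    for j, e in enumerate(hyperedges):
        for v in e:
            incident[v].append(j)
    scores = []
    for t in range(1, t_max + 1):
        # Node update: mean of incident hyperedge distributions (AGG_E).
        h_v = np.stack([h_e[incident[v]].mean(0) if incident[v]
                        else np.zeros(n_classes) for v in range(n_nodes)])
        scores.append(h_v[np.arange(n_nodes), labels])  # h_v^t(y_v)
        # Hyperedge update: mean of member node distributions (AGG_V).
        h_e = np.stack([h_v[e].mean(0) for e in hyperedges])
    return np.stack(scores)

h = message_passing_homophily([[0, 1, 2], [2, 3]], [0, 0, 1, 1], n_classes=2)
```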
Qualitative AnalysisIn this paragraph, we are taking a closer look at the qualitative analysis of the node homophily measure we introduced. One of the most straightforward ways to make use of the message passing homophily measure is to visualize how the node homophily score, as described in Eq. 2, changes dynamically. We've depicted this process in Figure 1, focusing on the CORA-CA and 20NewsGroup datasets. Please note that in the figure, we are only showing non-isolated nodes. Looking at Figure 1 (a), we can observe several notable trends. First, in the initial node distribution (\(t=0\)), every class, except class 6, has a significant number of fully homophilic nodes. As we move to the 1-hop neighborhood (\(t=1\)), the corresponding classes either exhibit a moderate decrease in homophily or show no decrease at all. It's worth noting that at \(t=0,1,\) and \(10\), class 2 maintains a stable homophily distribution, hinting at an isolated subnetwork within. Furthermore, at \(t=10\), some points still maintain a node homophily score of 1, indicating the presence of multiple small subnetworks. Class 6 consistently displays the lowest average homophily measure at every step, with an average score of approximately 38% at \(t=10\). The node homophily distribution for the 20Newsgroups dataset is visualized in Figure 1 (b). At time step \(t=0\), we observe a wide range of homophily scores from 0 to 1 for each class. This suggests that the network is highly irregular with respect to connectivity. Moving to time step \(t=1\), there is a significant decrease in the homophily scores for every class, indicating a high degree of heterophily within the 1-hop neighborhood, which is not surprising considering step zero node homophily distribution. Finally, at time step \(t=10\), we can observe that all the classes converge to approximately the same homophily values within each class. This convergence suggests that the network is highly interconnected. More insights regarding node homophily measure and related HGNNs performances are described in Section 5 while the rest of the plots for the datasets can be found in Appendix I.
## 4 Methods
Current HGNNs aim to generalize GNN concepts to the hypergraph domain, and are specially focused on redefining graph-based propagation rules to accommodate higher-order structures. In this regard, the work of Chien et al. (2022) introduced a general notation framework, called AllSet, that encompasses most of the currently available HGNN layers, including CEGCN/CEGAT, HGNN (Feng et al., 2019), HNHN (Dong et al., 2020), HCHA (Bai et al., 2021), HyperGCN (Yadati et al., 2019), and the AllDeepSet and AllSetTransformer presented in the same work (Chien et al., 2022).
The first part of this Section revisits the original AllSet formulation. Then, we introduce a new framework -termed MultiSet- which extends AllSet by allowing multiple hyperedge-dependent
Figure 1: Node Homophily Distribution Scores for CORA-CA (a) and 20Newsgroups (b) using Equation 2 at \(t=0,1,\) and \(10\) (left, middle, and right plots correspondingly). Horizontal lines depict class mean homophily, with numbers above indicating the number of visualized points per class.
representations of nodes. Finally, we present some novel methodologies to process hypergraphs -including MultiSetMixer, a new HGNN architecture within the MultiSet framework.
### AllSet Propagation Setting
For a given node \(v\in\mathcal{V}\) and hyperedge \(e\in\mathcal{E}\) in a hypergraph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\), let \(\mathbf{x}_{v}^{(t)}\in\mathbb{R}^{f}\) and \(\mathbf{z}_{e}^{(t)}\in\mathbb{R}^{d}\) denote their vector representations at propagation step \(t\). We say that a function \(f\) is a multiset function if it is permutation invariant w.r.t. each of its arguments in turn. Typically, the initial representations \(\mathbf{x}_{v}^{(0)}\) and \(\mathbf{z}_{e}^{(0)}\) are initialized based on the corresponding original node and hyperedge features, if available. In this context, the AllSet framework (Chien et al., 2022) consists of the following two-step update rule:
\[\mathbf{z}_{e}^{(t+1)}=f_{\mathcal{V}\to\mathcal{E}}(\{\mathbf{x}_{u}^{(t)}\}_{u:u\in \mathcal{E}};\mathbf{z}_{e}^{(t)}), \tag{4}\]
\[\mathbf{x}_{v}^{(t+1)}=f_{\mathcal{E}\to\mathcal{V}}(\{\mathbf{z}_{e}^{(t+1)}\}_{e\in \mathcal{E}_{v}};\mathbf{x}_{v}^{(t)}), \tag{5}\]
where \(f_{\mathcal{V}\to\mathcal{E}}\) and \(f_{\mathcal{E}\to\mathcal{V}}\) are two permutation invariant functions with respect to their first input. Equations 4 and 5 describe the propagation from nodes to hyperedges and from hyperedges to nodes, respectively. We extend the original AllSet formulation to accommodate UniGCNII (Huang and Yang, 2021), a concurrent work to AllSet, by modifying the node update rule (Eq. 5) in order to allow residual connections, i.e.:
\[\mathbf{x}_{v}^{(t+1)}=f_{\mathcal{E}\to\mathcal{V}}(\{\mathbf{z}_{e}^{(t+1)}\}_{e\in \mathcal{E}_{v}};\{\mathbf{x}_{v}^{(k)}\}_{k=0}^{t}). \tag{6}\]
There is no requirement for the function to be permutation invariant with respect to this second set.
**Proposition 1**.: _UniGCNII (Huang and Yang, 2021) is a special case of AllSet considering Eqs. 4 and 6._
In the practical implementation of a model, \(f_{\mathcal{V}\to\mathcal{E}}\) and \(f_{\mathcal{E}\to\mathcal{V}}\) are parametrized and learnt for each dataset and task, and particular choices of these functions give rise to the different HGNN layer architectures considered in this paper; more details in Appendix B.
### MultiSet Framework
In this Section, we introduce our proposed MultiSet framework, which can be seen as an extension of AllSet where nodes can have multiple co-existing hyperedge-based representations. For a given hyperedge \(e\in\mathcal{E}\) in a hypergraph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\), we denote by \(\mathbf{z}_{e}^{(t)}\in\mathbb{R}^{d}\) its vector representation at step \(t\). However, for a node \(v\in\mathcal{V}\), MultiSet allows for as many representations of the node as the number of hyperedges it belongs to. We denote by \(\mathbf{x}_{v,e}^{(t)}\in\mathbb{R}^{f}\) the vector representation of node \(v\) in a hyperedge \(e\in\mathcal{E}_{v}\) at propagation time \(t\), and by \(\mathbb{X}_{v}^{(t)}=\{\mathbf{x}_{v,e}^{(t)}\}_{e\in\mathcal{E}_{v}}\), the set of all \(d_{v}\) hidden states of that node in the specified time-step. Accordingly, the hyperedge and node update rules of Multiset are formulated to accommodate hyperedge-dependent node representations:
\[\mathbf{z}_{e}^{(t+1)}=f_{\mathcal{V}\to\mathcal{E}}(\{\mathbb{X}_{u}^{(t)}\}_{u:u \in e};\mathbf{z}_{e}^{(t)}), \tag{7}\]
\[\mathbf{x}_{v,e}^{(t+1)}=f_{\mathcal{E}\to\mathcal{V}}(\{\mathbf{z}_{e}^{(t+1)}\}_{e \in\mathcal{E}_{v}};\{\mathbb{X}_{v}^{(k)}\}_{k=0}^{t}), \tag{8}\]
where \(f_{\mathcal{V}\to\mathcal{E}}\) and \(f_{\mathcal{E}\to\mathcal{V}}\) are two multiset functions with respect to their first input. After \(T\) iterations of message passing, MultiSet also considers a last readout-based step with the idea of obtaining a unique final representation \(x_{v}^{T}\in\mathbb{R}^{f^{\prime}}\) for each node from the set of its hyperedge-based representations:
\[\mathbf{x}_{v}^{(T)}=f_{\mathcal{V}\to\mathcal{V}}(\{\mathbb{X}_{v}^{(k)}\}_{k=0} ^{T}) \tag{9}\]
where \(f_{\mathcal{V}\to\mathcal{V}}\) is also a multiset function.
**Proposition 2**.: _AllSet (Eqs. 4-5), as well as its extension (Eqs. 4 and 6), are special cases of MultiSet (Eqs. 7-9)._
Figure 3: MultiSet layout
Figure 2: AllSet layout
### Training MultiSet networks
This Section describes the main characteristics of our MultiSet layer implementation, termed MultiSetMixer, and presents a novel sampling procedure that our model incorporates.
Learning MultiSet LayersFollowing the mixer-style block designs (Tolstikhin et al., 2021) and standard practice, we propose the following MultiSet layer implementation for HGNNs:
\[\mathbf{z}_{e}^{(t+1)}=f_{\mathcal{V}\rightarrow\mathcal{E}}(\{\mathbf{x}_{u,e}^{(t)} \}_{u:u\in e};\mathbf{z}_{e}^{(t)}):=\frac{1}{|e|}\sum_{u\in e}\mathbf{x}_{u,e}^{(t)}+ \text{MLP}\left(\text{LN}\left(\frac{1}{|e|}\sum_{u\in e}\mathbf{x}_{u,e}^{(t)} \right)\right), \tag{10}\]
\[\mathbf{x}_{v,e}^{(t+1)}=f_{\mathcal{E}\rightarrow\mathcal{V}}(\mathbf{z}_{e}^{(t+1)} ;\mathbf{x}_{v,e}^{(t)}):=\mathbf{x}_{v,e}^{(t)}+\text{MLP}\left(\text{LN}(\mathbf{x}_{v,e }^{(t)})\right)+\mathbf{z}_{e}^{(t+1)}, \tag{11}\]
\[\mathbf{x}_{v}^{(T)}=f_{\mathcal{V}\rightarrow\mathcal{V}}(\mathbb{X}_{v}^{(T)}): =\frac{1}{d_{v}}\sum_{e\in\mathcal{E}_{v}}\mathbf{x}_{v,e}^{(T)} \tag{12}\]
where MLPs are composed of two fully-connected layers, and LN stands for layer normalisation. This novel architecture, which we call MultiSetMixer, is based on a mixer-based pooling operation for _(i)_ updating hyperedges from its node's representations, and _(ii)_ generate and update hyperedge-dependent representations of the nodes.
**Proposition 3**.: _The functions \(f_{\mathcal{V}\rightarrow\mathcal{E}}\), \(f_{\mathcal{E}\rightarrow\mathcal{V}}\) and \(f_{\mathcal{V}\rightarrow\mathcal{V}}\) defined in MultiSetMixer are permutation invariant. Furthermore, these functions are universal approximators of multiset functions when the size of the input multiset is finite._
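A compact PyTorch sketch of Eqs. 10-11 is given below; the GELU activation inside the two-layer MLPs and the omission of a padding mask are simplifying assumptions, and the hyperedge-dependent node states are assumed to be arranged as a dense \(B\times L\times\text{dim}\) tensor produced by the mini-batching procedure described below.

```python
import torch
import torch.nn as nn

class MixerBlock(nn.Module):
    """LayerNorm followed by a two-layer MLP, shared by Eqs. (10)-(11)."""
    def __init__(self, dim, hidden):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, hidden), nn.GELU(),
                                 nn.Linear(hidden, dim))

    def forward(self, x):
        return self.mlp(self.norm(x))

class MultiSetMixerLayer(nn.Module):
    def __init__(self, dim, hidden=128):
        super().__init__()
        self.edge_mixer = MixerBlock(dim, hidden)
        self.node_mixer = MixerBlock(dim, hidden)

    def forward(self, x):
        # x: (B, L, dim) hyperedge-dependent node states, one row per hyperedge.
        pooled = x.mean(dim=1)                      # mean over the nodes in e
        z = pooled + self.edge_mixer(pooled)        # Eq. (10)
        x = x + self.node_mixer(x) + z[:, None, :]  # Eq. (11)
        return x, z

layer = MultiSetMixerLayer(dim=64)
x, z = layer(torch.randn(8, 16, 64))  # B=8 hyperedges, L=16 nodes each
```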
Mini-batchingThe motivation for introducing a new strategy to iterate over hypergraph datasets is twofold. On the one hand, current HGNN pipelines suffer from scalability issues when processing large datasets and very large hyperedges. On the other, pooling operations over relatively large sets can also lead to over-squashing the signal. To help in these directions, we propose sampling a mini-batch \(X\) of fixed size at each iteration. At _step 1_, it samples \(B\) hyperedges from \(\mathcal{E}\). The hyperedge sampling over \(\mathcal{E}\) can be either uniform or weighted (e.g. by taking into account hyperedge cardinalities). Then, at _step 2_, \(L\) nodes are in turn sampled from each sampled hyperedge \(e\), padding the hyperedge with \(L-|e|\) special padding tokens if \(|e|<L\). Overall, the obtained mini-batch \(X\) has fixed shape \(B\times L\). See Appendix H for additional analysis of the sampling procedure.
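A possible NumPy implementation of the two sampling steps, with cardinality-weighted hyperedge sampling as one of the options mentioned above; the padding token value is an arbitrary choice.

```python
import numpy as np

PAD = -1  # arbitrary padding-token id

def sample_minibatch(hyperedges, B, L, weighted=True, seed=None):
    """Step 1: sample B hyperedges (uniform or cardinality-weighted).
    Step 2: sample up to L nodes per hyperedge, padding when |e| < L.
    Returns a (B, L) integer array of node ids."""
    rng = np.random.default_rng(seed)
    sizes = np.array([len(e) for e in hyperedges], dtype=float)
    p = sizes / sizes.sum() if weighted else None
    picked = rng.choice(len(hyperedges), size=B, p=p)
    batch = np.full((B, L), PAD, dtype=int)
    for row, j in enumerate(picked):
        e = np.asarray(hyperedges[j])
        take = rng.choice(e, size=min(L, len(e)), replace=False)
        batch[row, :len(take)] = take
    return batch

X = sample_minibatch([[0, 1, 2], [2, 3], [0, 3, 4, 5]], B=2, L=4, seed=0)
```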
## 5 Experimental Results
The questions that we introduced in the Introduction have shaped our research, leading to a new definition of higher-order homophily and novel architectural designs and sampling strategies that can potentially fit better the properties of hypergraph networks. In subsequent subsections, we set again three main questions that follow up from these fundamental inquiries and can help contextualize the technical contributions introduced in this paper.
Dataset and ModelsWe use the same datasets used in Chien et al. (2022), which include Cora, Citeseer, Pubmed, ModelNet40, NTU2012, 20Newsgroups, Mushroom, ZOO, CORA-CA, and DBLP-CA. More information about datasets and corresponding statistics can be found in Appendix F.2. We also utilize the benchmark implementation provided by Chien et al. (2022) to conduct the experiments with several models, including AllDeepSets, AllSetTransformer, UniGCNII, CEGAT, CEGCN, HCHA, HGNN, HNHN, HyperGCN, HAN, and HAN (mini-batching). Additionally, we consider a vanilla MLP applied to node features and a transformer architecture, and introduce three new models: MultiSetMixer, MLP Connectivity Batching (MLP CB), and Multiple MLP CB (MMLP CB). The MLP CB and MMLP CB models use connectivity information to form and process batches. Specifically, the MMLP CB model processes the top three most frequent connectivities using separate MLP encoders, while the fourth encoder is used to process the remaining connectivities. We refer to Section 4.3 for further details about all these architectures. All models are optimized using 15 splits with 2 model initializations, resulting in a total of 30 runs; see Appendix F.1 for further details.
### How does MultiSetMixer perform?
Our first experiment aims to assess the performance of our proposed model, MultiSetMixer, as well as the two introduced baselines, MLP CB and MMLP CB. Figure 4 shows the average rankings -across all models and datasets- of the top-3 best performing models for the considered training splits, exhibiting that those splits can impact the relative performance among models.
However, due to space limitations, we restrict our analysis to the \(50\%\) split results shown in Table 1,¹ and relegate to Appendix G.1 the corresponding tables for the other scenarios. Table 1 emphasizes the MultiSetMixer model's relatively solid performance, being the best-performing model on the NTU2012, ModelNet40, and 20Newsgroups datasets. Its performance on the 20Newsgroups dataset is especially noteworthy, significantly outperforming the other models. Moreover, it is notable that MLP CB and MMLP CB exhibit similar behaviour on this dataset. In contrast, all other models achieve roughly the same performance as the MLP. This observation suggests that these models cannot account for dataset connectivity; in particular, as we demonstrated in Section 3, the dispersion of the node homophily measure, with a subsequent convergence to a similar value within each class, indicates that the dataset's connectivity is notably non-homophilic and presents a challenge. In contrast, CORA-CA exhibits a high degree of homophily within its hyperedges and shows the most significant performance gap between the best-performing model, AllSetTransformer, and the basic MLP. A similar trend is observed for DBLP-CA (see node homophily plot in Appendix I). Please refer to Section 5.3 for additional experiments analyzing the impact of connectivity on the models.
Footnote 1: Unless otherwise specified, all tables in the main body of the paper use a \(50\%/25\%/25\%\) split between training and testing. The results are shown as Mean Accuracy Standard Deviation, with the best result highlighted in bold and shaded in grey, and results within one standard deviation of the best result are displayed in blue-shaded boxes.
On the other hand, we can notice that CEGAT, CEGCN, and our proposed model do not perform well on the Mushroom dataset. This is noteworthy because the Mushroom dataset's features are highly representative, as demonstrated by the near-perfect performance of the MLP classifier. This suggests that, in this particular case, connectivity may not play a crucial role in achieving high performance.
### What is the impact of the introduced mini-batch sampling strategy?
Next, we examine the role that our proposed mini-batch sampling can play _(i)_ in explaining previous results and _(ii)_ in influencing other models' performance.
Class distribution analysisTo evaluate and motivate the potential of the proposed mini-batching sampling, we investigate the reason behind both the superior performance of MultiSetMixer, MLP CB and MMLP CB on 20NewsGroup and their poor performance on Mushroom. Framing mini-batching from the connectivity perspective presents a nuanced challenge that conceals significant potential for improvement (Teney et al., 2023). It is important to note that connectivity, by definition,
Table 1: Test accuracy in % averaged over 15 splits.
describes relationships among the nodes, implying that some parts of the dataset might interconnect much more densely, creating some sort of hubs within the network. Thus, mini-batching might introduce an unexpected skew in the training distribution. In particular, in Figure 5, we depict the class distribution of the original dataset, referred to as _Node_, while _'Step 1 and 2'_ and _'Step 1'_ show the distribution after each step in our mini-batching procedure. The sampling procedure tends to rebalance class distributions in certain cases, such as the 20NewsGroup dataset, while in contrast, it introduces an imbalance that was not present in the original labels in the Mushroom dataset, where our model demonstrated suboptimal performance. This observation leads to the hypothesis that, in some cases, the sampling procedure produces a distribution shift that rebalances the class distributions and leads our model to outperform the comparison models.
Application to Other ModelsFurthermore, we explore the proposed mini-batch sampling procedure with the AllSetTransformer and UniGCNII models by implementing Step 1 of the mini-batch procedure without additional hyperparameter optimization. From Table 2, we can observe a drop in performance for most of the datasets both for AllSetTransformer and for UniGCNII; both models, on average, outperform the HAN (mini-batching) model. This suggests the substantial potential of the proposed sampling procedure. More in detail, AllSetTransformer has a substantial decrease in accuracy for the CORA-CA dataset, in contrast to the UniGCNII, which registers only marginal decreases. A parallel pattern emerges with the DBLP-CA dataset.
### How do connectivity changes affect performance?
To shine a light on this, we design two different experimental approaches aiming at modifying the original connectivity of datasets in a systematic manner. The first experiment tests the performance when some hyperedges are removed following different _drop connectivity_ strategies. Then, a second experiment examines the model's performance with the introduction of two preprocessing strategies applied to the given hypergraph connectivity.
Reducing ConnectivityThis experiment aims to investigate the significance of connectivity in datasets and the extent to which it influences the performances of the models. We divide this experiment into two parts: (i) drop connectivity and (ii) connectivity rewiring. In the first part of the experiment, we employ three strategies to introduce variations in the initial dataset's connectivity. The first two strategies involve ordering hyperedges based on their lengths in **ascending order**. In the first approach, referred to as _trimming_, we remove the initial \(x\%\) of ordered hyperedges. The second approach, referred to as _retention_, involves keeping the first \(x\%\) of hyperedges and discarding the remaining \(100-x\%\). Finally, the last strategy involves randomly dropping \(x\%\) of hyperedges from the dataset, referred to as _random drop_. Results shown in Table 3 also indicate that connectivity minimally impacts CEGCN and AllSetTransformer for the Citeseer and Pubmed datasets. On the other hand, MultiSetMixer performs better in the _trimming 25%_ setting, although the achieved performance is on par with the MLP reported in Table 1. This suggests that the proposed model was negatively affected by the distribution shift. Conversely, we observe a similar but opposite trend for the Mushroom dataset, where MultiSetMixer's performance improves due to the reduced impact of the distribution shift. Another interesting observation is that the CEGCN model gains improvement in 6 out of 9 datasets, with performance doubling for the ZOO dataset. In the case of Cora, CORA-CA, and DBLP-CA datasets, another interesting pattern emerges: retaining only 25% of the highest relationships (_retention 25%_) consistently results in better performance compared to retaining 50% or 75%. This is intriguing because, at the 25% level, we are preserving only a small fraction of the higher-order relationships. The opposite pattern holds for the _trimming_ strategy. For the
Table 2: Mini-batching experiment. Test accuracy in % averaged over 15 splits.
datasets mentioned above, this phenomenon remained consistent across all models. Notice that this phenomenon does not appear when we remove hyperedges randomly; in this case, as expected, the more hyperedges we remove, the more performance decreases.
Rewiring ConnectivityIn this experiment, we preserve the original connectivity and investigate the influence of homophilic hyperedges on performance. To do so, we adjust the given connectivity in two different ways. The first strategy aims to unveil the full potential of homophily for each dataset by dividing the given hyperedges into fully homophilic ones based on the _node labels_. In contrast, the second strategy explores the possibility of splitting hyperedges based on their _initial node features_. More in detail, the hyperedge division results from applying multiple times \(k\)-means for each hyperedge \(e\), varying at each iteration the number of centroids \(m\) from \(2\) to \(\min(C,|e|)\); the elbow method is then used to determine the optimal hyperedge partitioning. It's not surprising that the "Label Based" strategy improves the performance for all datasets and models, as evident from Table 4. However, it's worth highlighting that the graph-based method CEGCN achieves results similar to HGNNs in this strategy. Additionally, only CEGCN, on average, performs better with the "k-means" strategy. These observations collectively suggest that connectivity preprocessing plays a crucial role, particularly for graph-based models. Applying "k-means" diminishes the distribution shift for MultiSetMixer.
## 6 Discussion
This section summarizes some key findings from our extensive evaluation and proposed homophily measure. Firstly, we showed that the proposed message passing formalization of the homophily measure enables the discovery of patterns and provides valuable insights into the dynamics of hypernetworks. Importantly, this approach can be extended to other definitions of homophily beyond labels. Furthermore, we showed that our MultiSetMixer model outperforms existing architectures in several scenarios. We also identified some common failure modes, which we attribute to the distribution shift introduced by the proposed mini-batching sampling scheme and the way message-passing propagates information. The experimental results demonstrate that certain benchmark datasets (Citeseer, Pubmed, 20Newsgroups) for hypergraph learning contain connectivity patterns that are not
Table 4: Connectivity rewiring experiment. Test accuracy in % averaged over 15 splits.
effectively captured by HGNNs and are often overlooked. We show that the substantial performance gap between HGNNs and MLP for Cora, CORA-CA, and DBLP-CA is primarily attributed to patterns within a particular subset of connectivity, i.e., the largest hyperedge cardinalities have a stronger influence on performance. Finally, there is compelling evidence that connectivity can serve purposes beyond message propagation, such as acting as a tool for intentional distribution shift with mini-batching. We believe that the provided set of experiments and dynamic homophily figures are valuable tools to shape novel ideas for the hypergraph modeling research field.
## 7 Reproducibility
We include all the details about our experimental setting, including the choice of hyperparameters, the specifications of our machine and environment, the training/validation/test split, in Appendix F.1 and in Section 5. To ensure the reproducibility of our results, we will provide the source code along with the camera-ready version.
|
2307.02327 | Equivariant graph neural network interatomic potential for Green-Kubo
thermal conductivity in phase change materials | Thermal conductivity is a fundamental material property that plays an
essential role in technology, but its accurate evaluation presents a challenge
for theory. In this work, we demonstrate the application of $E(3)$-equivariant
neural network interatomic potentials within Green-Kubo formalism to determine
the lattice thermal conductivity in amorphous and crystalline materials. We
apply this method to study the thermal conductivity of germanium telluride
(GeTe) as a prototypical phase change material. A single deep learning
interatomic potential is able to describe the phase transitions between the
amorphous, rhombohedral and cubic phases, with critical temperatures in good
agreement with experiments. Furthermore, this approach accurately captures the
pronounced anharmonicity that is present in GeTe, enabling precise calculations
of the thermal conductivity. In contrast, the Boltzmann transport equation
including only three-phonon processes tends to overestimate the thermal
conductivity by approximately a factor of 2 in the crystalline phases. | Sung-Ho Lee, Jing Li, Valerio Olevano, Benoit Sklénard | 2023-07-05T14:37:34Z | http://arxiv.org/abs/2307.02327v2 | # Equivariant graph neural network interatomic potential for
###### Abstract
Thermal conductivity is a fundamental material property that plays an essential role in technology, but its accurate evaluation presents a challenge for theory. In this letter, we demonstrate the application of E(3)-equivariant neural network interatomic potentials within the Green-Kubo formalism to determine the lattice thermal conductivity in amorphous and crystalline materials. We apply this method to study the thermal conductivity of germanium telluride (GeTe) as a prototypical phase change material. A single deep learning interatomic potential is able to describe the phase transitions between the amorphous, rhombohedral and cubic phases, with critical temperatures in good agreement with experiments. Furthermore, this approach accurately captures the pronounced anharmonicity present in GeTe, enabling precise calculations of thermal conductivity. In contrast, the Boltzmann transport equation tends to overestimate it by approximately a factor of two in the crystalline phases.
Thermal conductivity is an intrinsic material property with deep implications in technology since it determines thermal management in the design of electronic devices [1; 2], and specifies the figure of merit in thermoelectric devices [3; 4]. Lattice vibrations, i.e. phonons, dominate heat transport in semiconductors and insulators. Much effort has been devoted to accurate calculations of lattice thermal conductivities from a microscopic perspective. The Boltzmann transport equation (BTE) [5; 6; 7], non-equilibrium Green function (NEGF) theory [8; 9; 10], and the Green-Kubo formula (GK) [11; 12; 13] are the three major approaches to lattice thermal conductivity calculations. BTE evaluates the response of phonon occupation to a temperature gradient, typically including three-phonon scattering processes, which limits its application to weakly anharmonic crystalline materials. NEGF treats phonons quantum mechanically and takes into account contact-channel interface scatterings and phonon anharmonicity by self-energies. However, it is computationally expensive [8]. GK provides the lattice thermal conductivity from the heat flux in an equilibrium molecular dynamics (MD) simulation, accounting for anharmonic effects to all orders [14]. Furthermore, recent developments extend GK to low temperatures [12; 13], which makes it a robust approach for a wide range of temperatures and materials. GK theory provides a unified approach to compute the lattice thermal conductivity in ordered and disordered solids. For harmonic amorphous systems, thermal transport can be described by the Allen and Feldman (AF) theory [15]. However, it has been shown that AF theory may be inadequate when anharmonic effects become important [16; 17].
The MD simulation in the GK approach requires a relatively long simulation time (up to a few nanoseconds) for adequate statistical sampling and an accurate description of interactions among atoms. Such long simulation times are affordable for MD with empirical force fields, but at the price of reduced accuracy and universality. _Ab initio_ MD has better accuracy but is too computationally expensive for large systems or long MD simulations. Extrapolation schemes have been proposed [18] to reduce the computational cost, but they are unsuitable for disordered solids.
In recent years, machine learning (ML) has emerged as a viable alternative for tasks that _ab initio_ methods have faced challenges with. In particular, machine learning interatomic potentials (MLIP) have been successful in predicting energies, forces and stress tensors orders of magnitude faster than first-principles methods, while retaining their accuracy. Thermal transport GK calculations have been reported with MLIPs relying on descriptor-based approaches, such as Behler-Parrinello neural networks (NN) or kernel-based methods [19; 20; 21]. Graph NN (GNN) interatomic potentials based on message passing architectures (MPNN) [22; 23; 24; 25] have been proposed as an alternative to hand-crafted descriptors, whereby structures are encoded as a graph with atoms represented as nodes that are connected by edges. In initial models, the information at nodes and edges of the GNN was made _invariant_ with respect to the Euclidean group \(E(3)\) (i.e. the group of translations, rotations and inversions in Euclidean space), and the atomic representations were limited to scalar interatomic distances [22]. Such models have since been generally superseded by MPNN architectures built on convolution operations that are _equivariant_ with respect to the E(3) group. In equivariant approaches, isometric transformations on the relative atomic displacement vector inputs are propagated through the network to correspondingly transform the outputs. Equivariant approaches have been shown to achieve substantially improved data efficiency and
unprecedented accuracy compared to their invariant counterparts [23; 24; 25]. In MPNNs, many-body interactions are captured by iteratively propagating information along the graph at each layer in the network. This has the effect of extending the local receptive field of an atom to significantly beyond the cutoff radius, which renders parallelization impractical [26]. Recently, a strictly local equivariant neural network approach has been proposed to address this drawback [26]. In this architecture, information is stored as a per-pair quantity, and instead of nodes exchanging information with their neighbours via edges, a convolution operation acts on the cutoff sphere in the form of a set of invariant (scalar) latent features and a set of equivariant (tensor) latent features that interact at each layer.
In this letter, we demonstrate that the strictly local E(3)-equivariant NN can be employed to compute the temperature-dependent thermal conductivity of germanium telluride (GeTe) in various phases using GK theory. GeTe is a chalcogenide material employed in many technological applications, such as phase change nonvolatile memory storage [27; 28], thermoelectricity [3; 29; 30] and spintronics [31; 32; 33]. It undergoes a ferroelectric phase transition from the low temperature rhombohedral \(\alpha\)-GeTe (spacegroup \(R3m\)) to a cubic \(\beta\)-GeTe (spacegroup \(Fm\bar{3}m\)) at a Curie temperature of \(T_{c}\approx 650-700\) K [34; 35; 36]. Amorphous GeTe also plays an important role in technological applications. Therefore, GeTe is an ideal prototype phase change material for the study of lattice thermal conductivity using GK theory.
The thermal conductivity tensor within GK theory is defined as:
\[\kappa_{\alpha\beta}(T)=\frac{1}{k_{\text{B}}T^{2}V}\lim_{\tau\to\infty}\int_ {0}^{\tau}dt\,\langle j_{\alpha}(t)\cdot j_{\beta}(0)\rangle_{T}, \tag{1}\]
where \(k_{\text{B}}\) is the Boltzmann constant, \(T\) the temperature, \(V\) the volume, \(j_{\alpha}(t)\) the \(\alpha\)-th Cartesian component of the macroscopic heat flux, and \(\langle j_{\alpha}(t)\cdot j_{\beta}(0)\rangle_{T}\) the heat flux autocorrelation function (HFACF), with the symbol \(\langle\cdot\rangle_{T}\) denoting ensemble average over time and over independent MD trajectories.
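As an illustration of Eq. (1), the following minimal NumPy sketch (all parameter values and the white-noise stand-in for the heat flux are placeholders, not taken from this work) estimates the running Green-Kubo integral for one diagonal component from a sampled heat-flux series:

```python
import numpy as np

kB = 1.380649e-23                    # Boltzmann constant (J/K)
T, V, dt = 300.0, 1.0e-26, 2.0e-15   # temperature (K), cell volume (m^3), sampling step (s)

rng = np.random.default_rng(0)
j_x = rng.standard_normal(100_000)   # stand-in for j_x(t) from an equilibrium MD run

n_corr = 5_000                       # correlation window, much shorter than the trajectory
# HFACF <j_x(t) j_x(0)>, estimated by averaging over time origins
hfacf = np.array([np.mean(j_x[lag:] * j_x[:len(j_x) - lag]) for lag in range(n_corr)])

# Running integral of Eq. (1); kappa_xx is read off the plateau of this curve
kappa_xx_running = np.cumsum(hfacf) * dt / (kB * T**2 * V)
```

In practice the running integral is further averaged over independent trajectories, corresponding to the ensemble average \(\langle\cdot\rangle_{T}\) above.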
The total heat flux of a system of \(N\) atoms is defined as
\[\mathbf{j}(t)=\sum_{i=1}^{N}\frac{d}{dt}\left(\mathbf{r_{i}}E_{i}\right), \tag{2}\]
where \(E_{i}=m_{i}\mathbf{v}_{i}^{2}/2+U_{i}\) is the total energy (i.e. kinetic and potential energy) of atom \(i\) with mass \(m_{i}\), velocity \(\mathbf{v}_{i}\) and atomic positions \(\mathbf{r_{i}}\). In MLIPs, the partitioning \(E=\sum_{i}E_{i}\) of the total energy of the system into atomic contributions \(E_{i}\) allows the total heat flux of a periodic system to be expressed as [11] :
\[\mathbf{j}(t)=\sum_{i=1}^{N}\mathbf{v}_{i}E_{i}-\sum_{i=1}^{N}\sum_{j\neq i}\mathbf{r}_{ ij}\left(\frac{\partial U_{i}}{\partial\mathbf{r}_{ij}}\cdot\mathbf{v}_{j}\right) \tag{3}\]
where the sum over \(j\) runs over the atoms that are within the cutoff radius \(r_{c}\) of atom \(i\) defined for the MLIP. We implemented the calculation of Eq. (3) in the LAMMPS code [37]. The term \(\partial U_{i}/\partial\mathbf{r}_{ij}\) is obtained by automatic differentiation of atomic energies \(U_{i}\) computed by the MLIP. It was also used for the calculation of the virial tensor [11; 38], which is required to perform simulations in the isothermal-isobaric (NpT) ensemble.
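For concreteness, a dense-array NumPy sketch of Eq. (3) might look as follows; a production implementation (such as the LAMMPS version used here) would obtain \(\partial U_{i}/\partial\mathbf{r}_{ij}\) by automatic differentiation of the MLIP and restrict \(j\) to neighbours within \(r_{c}\). All array contents below are random placeholders:

```python
import numpy as np

N = 64
rng = np.random.default_rng(0)
E = rng.standard_normal(N)             # per-atom total energies E_i (kinetic + potential)
v = rng.standard_normal((N, 3))        # atomic velocities v_i
r_ij = rng.standard_normal((N, N, 3))  # displacement vectors r_ij
dU = rng.standard_normal((N, N, 3))    # dU[i, j] = dU_i/dr_ij (from autodiff in practice)

# Convective term: sum_i v_i E_i
j_conv = (v * E[:, None]).sum(axis=0)

# Virial-like term: sum_i sum_{j != i} r_ij (dU_i/dr_ij . v_j)
dot = np.einsum('ijk,jk->ij', dU, v)   # (dU_i/dr_ij) . v_j for every pair (i, j)
np.fill_diagonal(dot, 0.0)             # exclude j == i
j_vir = np.einsum('ijk,ij->k', r_ij, dot)

j_total = j_conv - j_vir               # Eq. (3)
```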
To generate the reference dataset to train the MLIP, _ab initio_ MD simulations based on density functional theory (DFT) were performed with temperatures ranging from 100 K to 2500 K using the VASP code [39; 40]. The generalized gradient approximation of Perdew-Burke-Ernzerhof (PBE) [41] was used for the exchange-correlation energy and Grimme's D3 dispersion correction [42] was applied. The supercells contained 192 and 216 atoms for the initial rhombohedral and cubic structures, respectively. Then, 6000 structures in total were taken from the MD trajectories and recomputed to obtain more accurate energy, forces, and stress tensors. We used an energy cutoff of 400 eV and a \(2\times 2\times 2\)\(k\)-mesh to sample the Brillouin zone. The equivariant NN model was trained on energy, forces and stress using the Allegro package [26]. The root mean squared errors (RMSE) and mean absolute errors (MAE) on the predicted energies, forces and stress tensors on the test dataset are 0.90 meV/atom, 29.87 meV/Å, 0.28 meV/Å\({}^{3}\) and 1.07 meV/atom, 42.97 meV/Å, 0.37 meV/Å\({}^{3}\), respectively (see Suppl. Mat. for more information on the training procedure and dataset partitioning).
To further validate the MLIP, the equilibrium geometries of crystalline GeTe were optimized using the MLIP. For \(\alpha\)-GeTe, the lattice parameter was \(a=4.42\) Å and the angle \(\alpha=57.13^{\circ}\), close to DFT results of \(a=4.41\) Å and \(\alpha=57.42^{\circ}\). Similarly, for \(\beta\)-GeTe, the MLIP yields \(a=4.24\) Å, in excellent agreement with the lattice parameter from DFT of \(a=4.23\) Å.
Moreover, the phonon dispersion from the MLIP is in excellent agreement with DFT for both \(\alpha\) and \(\beta\)-GeTe, as shown in Fig. 1. In particular, our model describes optical phonons well, which is usually challenging for MLIPs [19; 43]. Imaginary soft phonon modes in cubic GeTe are also well described by the MLIP, which is essential to capture the phase transition [44; 45]. These phonon dispersions were computed using the finite displacement method implemented in Phonopy [46] with \(3\times 3\times 3\) and \(5\times 5\times 2\) supercells of the conventional unit cells for cubic and rhombohedral phases, respectively. For the DFT calculations, we used the same settings as those used to generate the reference dataset. LO-TO splitting was not included in our calculations as long-range Coulomb interactions tend to be screened by free carriers in real samples [47].
We investigated the lattice dynamics of GeTe through MD simulations across the \(\alpha\to\beta\) phase transition with
our MLIP. For each temperature, GeTe supercells were first equilibrated for at least 200 ps in the NpT ensemble at ambient pressure with a 2 fs timestep in order to obtain the averaged temperature-dependent structural parameters shown in Fig. 2. The rhombohedral lattice parameter \(a\) and angle \(\alpha\) reach cubic values at \(T\approx 650\) K, in good agreement with experimental data.
By employing the temperature-dependent effective-potential (TDEP) method [48; 49; 50], the temperature-dependent interatomic force constants (IFCs) were extracted from a 600 ps MD simulation in the microcanonical ensemble, after equilibrating the system in the NVT ensemble using the structural parameters depicted in Fig. 2. By utilizing these IFCs, we computed phonon spectra as a function of temperature (refer to the Suppl. Mat. for more detailed information). Fig. 3 presents the evolution of the longitudinal and transverse optical phonon modes (\(\Gamma_{6}\) and \(\Gamma_{4}\), respectively) as a function of temperature. The softening of these two modes up to the Curie temperature is corroborated by previous theoretical studies [44; 45] and is comparable to experiments [51; 47; 52]. Beyond 650 K, the optical phonons merge, indicating the transition to the cubic phase where optical phonons exhibit three-fold degeneracy.
To compute the GK thermal conductivity of cubic, rhombohedral and amorphous GeTe, MD simulations with the MLIP were performed at different temperatures. The amorphous GeTe structure was generated using a melt-quench process (see Suppl. Mat.). The heat flux was calculated during MD simulations in the microcanonical ensemble and the ensemble average was performed over independent trajectories of at least 1 ns after equilibration in the NpT ensemble. After testing the convergence with respect to system size (see Suppl. Mat.), we used supercells containing 360 atoms for the rhombohedral phase and 512 atoms for the amorphous and cubic phases.
Figure 1: Comparison of phonon dispersions computed with DFT and with the MLIP of (a) \(\alpha\)-GeTe and (b) \(\beta\)-GeTe
Figure 3: Temperature evolution of A\({}_{1}\) and E optical phonon modes computed with the TDEP method and compared against experimental data from Ref. [51; 47; 52].
Figure 2: Evolution of (a) the lattice parameter \(a\) and (b) the angle \(\alpha\) as a function of temperature in the NpT MD simulations of crystalline GeTe, compared against experimental data from Ref. [34; 35; 36]. Simulated lattice parameters in (a) were shifted by \(-0.1\) Å.
Although cubic GeTe is metastable below \(T_{c}\), GK is able to determine its lattice thermal conductivity as it becomes dynamically stable at \(T\geq 300\) K (see finite temperature phonon spectra in Suppl. Mat.). Rhombohedral GeTe shows a higher thermal conductivity than cubic GeTe before 650 K (see Fig. 4) after which the two curves merge, reflecting the \(\alpha\rightarrow\beta\) phase transition.
The comparison against experiments is challenging because experimental values of lattice thermal conductivities of crystalline GeTe show a large dispersion. There are two reasons for this. First, thermal conductivity comprises a lattice contribution and an electronic contribution. Therefore, experimental lattice thermal conductivity is an indirect measurement, which is obtained by removing the electronic contribution, typically evaluated using the Wiedemann-Franz law that introduces an additional approximation through the Lorenz number. Second, the sample quality varies. Extrinsic scatterings due to defects may alter the thermal conductivity measurements. For example, an extra phonon-vacancy scattering has to be included in order to recover a good agreement with experimental data [54, 55]. Despite the significant experimental variations mentioned above, the calculated GK thermal conductivity values are found to fall within the range of experimental values.
The GK lattice thermal conductivity for the amorphous phase (solid green line) is in excellent agreement with the experimental data of Ref. [54] (green squares). This can be regarded as a direct comparison with the experiment since the electronic contribution to the thermal conductivity was found to be negligible in amorphous GeTe [56]. A previous study obtained a similar value of \(0.27\pm 0.05\) W\(\cdot\)m\({}^{-1}\cdot\)K\({}^{-1}\) at 300 K from GK simulations with a Behler-Parrinello-type MLIP [21]. The predicted thermal conductivity for amorphous GeTe is constant until \(\sim 450\) K. It then starts to increase, indicating a transition to a crystalline phase, as evidenced by the evolution of the radial distribution function (see Suppl. Mat.) and consistent with the amorphous-crystalline phase transition temperature observed experimentally [54].
To obtain the BTE thermal conductivity, we used the TDEP 2nd and 3rd order IFCs from MD simulations and a \(30\times 30\times 30\)\(q\)-mesh. This allows a direct comparison between GK and BTE as both calculations were on the same footing, with identical interatomic potential and the same temperature; the only difference being the thermal transport formalism. BTE overestimates the thermal conductivity by about 1.8 W\(\cdot\)m\({}^{-1}\cdot\)K\({}^{-1}\), which is about twice the GK result at 300 K, and about three times that at 900 K. Such overestimation is an indication that BTE cannot capture the strong anharmonicity exhibited by GeTe.
In conclusion, we developed an equivariant graph neural network interatomic potential to study thermal transport in amorphous and crystalline GeTe. The potential describes GeTe at a near-_ab initio_ level of accuracy for the rhombohedral, cubic and amorphous phases with a single model. Our potential also correctly captures phase transitions with Curie temperatures in good agreement with experimental data. Combined with the Green-Kubo theory, it can determine the lattice thermal conductivity not only for strongly anharmonic crystals, but also for the amorphous phase.
We thank F. Bottin and J. Bouchet for discussions about TDEP calculation. This work was performed using HPC/AI resources from GENCI-IDRIS (Grant 2022-A0110911995) and was partially funded by European commission through ECSEL-IA 101007321 project StorAIge and the French IPCEI program.
|
2310.00699 | Pianist Identification Using Convolutional Neural Networks | This paper presents a comprehensive study of automatic performer
identification in expressive piano performances using convolutional neural
networks (CNNs) and expressive features. Our work addresses the challenging
multi-class classification task of identifying virtuoso pianists, which has
substantial implications for building dynamic musical instruments with
intelligence and smart musical systems. Incorporating recent advancements, we
leveraged large-scale expressive piano performance datasets and deep learning
techniques. We refined the scores by expanding repetitions and ornaments for
more accurate feature extraction. We demonstrated the capability of
one-dimensional CNNs for identifying pianists based on expressive features and
analyzed the impact of the input sequence lengths and different features. The
proposed model outperforms the baseline, achieving 85.3% accuracy in a 6-way
identification task. Our refined dataset proved more apt for training a robust
pianist identifier, making a substantial contribution to the field of automatic
performer identification. Our codes have been released at
https://github.com/BetsyTang/PID-CNN. | Jingjing Tang, Geraint Wiggins, Gyorgy Fazekas | 2023-10-01T15:15:33Z | http://arxiv.org/abs/2310.00699v1 | # Pianist Identification Using Convolutional Neural Networks
###### Abstract
This paper presents a comprehensive study of automatic performer identification in expressive piano performances using convolutional neural networks (CNNs) and expressive features. Our work addresses the challenging multi-class classification task of identifying virtuoso pianists, which has substantial implications for building dynamic musical instruments with intelligence and smart musical systems. Incorporating recent advancements, we leveraged large-scale expressive piano performance datasets and deep learning techniques. We refined the scores by expanding repetitions and ornaments for more accurate feature extraction. We demonstrated the capability of one-dimensional CNNs for identifying pianists based on expressive features and analyzed the impact of the input sequence lengths and different features. The proposed model outperforms the baseline, achieving 85.3% accuracy in a 6-way identification task. Our refined dataset proved more apt for training a robust pianist identifier, making a substantial contribution to the field of automatic performer identification. Our codes have been released at [https://github.com/BetsyTang/PID-CNN](https://github.com/BetsyTang/PID-CNN).
performer identification, expressive piano performance, deep neural networks
## I Introduction
Performers, with their individual phrasing, dynamics, and interpretive choices, bring their personal artistry to each piece they play, resulting in distinguishable styles. Researchers who focus on studying expressive musical performances have been investigating computational models for performer identification [1, 2, 3, 4, 5]. A reliable pianist identifier holds great potential not only for studying the styles of different performers, but also for various applications in music education, music information retrieval and smart musical instruments [6]. As an illustration, a pianist identification model could aid piano students wishing to emulate the performances of virtuoso pianists. With an upsurge in embedded devices, the vision of smart musical systems--ones that can discern different performers or styles and provide real-time feedback or adjustments--becomes closer to reality. Imagine a smart piano capable of tailoring its settings to mirror the nuances of iconic pianists, or a wearable accessory that offers pianists instant feedback, juxtaposing their performance against the masterpieces of legendary artists. Networked musical instruments could use style information or the features extracted by the proposed system in educational, retrieval or networked performance contexts, similar to those proposed by Turchet et al. in [7]. These groundbreaking applications will not only resonate with the principles of the Internet of Musical Things (IoMusT) [8] and the Internet of Audio Things (IoAuT) [9] but also elevate their potential, transforming basic devices into dynamic musical instruments with intelligence in the context of the Internet of Sounds (IoS) [10].
Automatic performer identification is usually regarded as a multi-class classification task where the system is designed to infer the performer of a given music performance. Early studies [1, 2] mainly applied traditional machine learning algorithms such as K-means clustering, decision trees, and discriminant analysis to this task. More recent research [3, 4] calculated the KL-divergence between performers' feature distributions and identified performers through similarity estimation based on the KL-divergence. Zhao et al. [11] utilised transfer learning for classifying violinists, adopting pre-trained models for music tagging and singer identification. With the emergence of large-scale expressive piano performance datasets [12, 13], two projects [5, 11, 12] recently applied deep learning techniques to the pianist identification task. Rafee et al. [5] proposed an RNN-based hierarchical neural network for pianist identification. Zhang et al. [12] applied convolutional neural networks (CNNs) to a 16-way pianist identification task, achieving less than 50% accuracy. However, this work paid insufficient attention to extracting expressive features, which have been proven effective for deep neural networks that model expressiveness and performance styles of pianists [5, 14].
This paper details our exploration of the potential of CNNs in identifying virtuoso pianists using various expressive features. We obtained a subset consisting of both performance and score midis from the ATEPP dataset, refining the scores by extending the repetitions and ornaments in the corresponding midis, thus generating the most comprehensive and accurate dataset currently available for pianist identification. We conducted experiments to investigate the effectiveness of different expressive features and the impact of input sequence
lengths. The proposed one-dimensional CNN surpassed the baseline model [5], attaining an 85.3% accuracy for a 6-way identification task. In addition, our dataset was shown to be more suitable for training a robust pianist identifier compared to the one proposed previously [5].
The rest of this paper is organised as follows: Section II elaborates on the methodology, providing details of the dataset, the feature extraction process, and the model architecture. Section III outlines the experiment set-ups employed for model training. Section IV discusses the experiment results and the ensuing discussions. Lastly, Section V concludes the paper.
## II Methodology
### _Dataset_
As discussed by Rafee et al. [5], the lack of large datasets containing multiple performances of the same compositions by different pianists results in the lack of investigation in deep neural networks for pianist identification. However, the recent proposed expressive piano performance midi dataset, ATEPP [12], enabled us to create subsets which are balanced in the number of performances for six virtuoso pianists including Alfred Brendel, Claudio Arrau, Daniel Barenboim, Friedrich Gulda, Sviatoslav Richter, and Wilhelm Kempf. In our research, we consider two subsets as shown in the Table I:
1. _ID-400_: we created an updated version of the proposed subset by Rafee et al. [5] by removing corrupted transcription results as well as repeated performances following the latest version of the ATEPP dataset1. Footnote 1: [https://github.com/BetsyTang/ATEPP](https://github.com/BetsyTang/ATEPP)
2. _ID-1000_: we chose a larger subset containing more compositions and performances by the same pianists to increase robustness and verify the capability of our model.
All movements in both subsets are by Beethoven or Mozart. Each movement corresponds to at least one performance by each pianist, making it possible to compare the differences in performance style of each individual performer. In order to maintain similar data distributions in the training, validation, and testing sets, we divided the datasets according to the number of performances of a composition by each pianist. To achieve an 8:1:1 train-valid-test split, we followed Algorithm 1 to assign performances to the _Train_, _Valid_ and _Test_ subsets. Algorithm 1 is designed to guarantee that each split contains at least one performance of a composition by a performer, especially when there are fewer than 10 performances by that performer.
```
# Let C be the set of compositions, P be the set of pianists.
# Info returns the composition and pianist of a performance i.
# Count gives the number of performances in a set S.
# RandomSplit randomly splits a set S of size n into subset a of size rn
#   and subset b of size (1-r)n, where r in [0, 1).
# Random generates a number in [0, 1) following the uniform distribution.
# <- means "assigned to".
for (c, p) in (C, P) do
    n = Count(I), where I = { i : Info(i) = (c, p) }
    if n <= 1 then
        Train <- I
    else if n = 2 then
        a, b = RandomSplit(I, r = 1/n)
        m = Random(), Train <- b
        if m <= 0.5 then Valid <- a
        else if m > 0.5 then Test <- a
        end if
    else if 3 <= n <= 9 then
        a, b = RandomSplit(I, r = 1/n)
        b, c = RandomSplit(b, r = 1/(n-1))
        Valid <- a, Test <- b, Train <- c
    else if 10 <= n then
        a, b = RandomSplit(I, r = 4/5)
        b, c = RandomSplit(b, r = 1/2)
        Train <- a, Valid <- b, Test <- c
    end if
end for
```
**Algorithm 1** Data Splitting
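A runnable Python sketch of this splitting logic is given below; the function and variable names are ours and the released code may differ in detail, but the per-(composition, pianist) branching follows Algorithm 1:

```python
import random
from collections import defaultdict

def split_performances(performances, seed=0):
    """performances: iterable of (composition, pianist, performance_id) tuples.
    Returns train/valid/test id lists following Algorithm 1."""
    random.seed(seed)
    groups = defaultdict(list)
    for comp, pianist, pid in performances:
        groups[(comp, pianist)].append(pid)

    train, valid, test = [], [], []
    for ids in groups.values():
        random.shuffle(ids)  # shuffling then slicing emulates RandomSplit
        n = len(ids)
        if n <= 1:
            train += ids
        elif n == 2:
            train.append(ids[0])
            # the remaining performance goes to valid or test with equal probability
            (valid if random.random() <= 0.5 else test).append(ids[1])
        elif n <= 9:
            valid.append(ids[0])
            test.append(ids[1])
            train += ids[2:]
        else:  # n >= 10: plain 8:1:1 split
            n_train = round(0.8 * n)
            n_valid = (n - n_train) // 2
            train += ids[:n_train]
            valid += ids[n_train:n_train + n_valid]
            test += ids[n_train + n_valid:]
    return train, valid, test
```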
### _Score and Performance Alignment_
Inspired by previous research [1, 3, 5] focusing on pianist identification, we used an alignment algorithm proposed by Nakamura et al. [15] to establish correspondences between performance midi data and score midi data, which allowed us to extract performance-related features. While the algorithm exhibited promising results in most cases, it demonstrated limited capability in handling annotated repetitions and ornaments found in the scores. To address this limitation, we manually expanded the repetitions and added ornament notes to the score midi files, thereby enhancing the accuracy of the alignment results. The improved alignment results more accurately captured the nuances of performances, aiding in distinguishing among performers.
After performing the alignments, we proceeded to filter out two types of discrepancies: _missing notes_ (representing notes present in the scores but not successfully aligned to performances) and _extra notes_ (representing notes present in performances but not successfully aligned to scores). Then we quantified the extent of information loss caused by the
alignment algorithm for each performance, as captured by Equation 1:
\[\textit{Loss of Information}=\frac{N_{e}}{N_{p}}\times 100\%, \tag{1}\]
where \(N_{e}\) denotes the number of extra notes and \(N_{p}\) refers to the total number of notes in the performance.
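Eq. (1) amounts to a one-line helper; a minimal sketch, assuming the aligner reports the extra-note count and the total performance note count:

```python
def information_loss(n_extra: int, n_performance: int) -> float:
    """Percentage of performed notes that could not be aligned to the score (Eq. 1)."""
    return 100.0 * n_extra / n_performance
```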
The distributions of information loss in the datasets _ID-400_ and _ID-1000_ are presented in Fig. 1. Our analysis reveals that more than 95% of performances in both datasets exhibit less than 15% information loss.
### _Feature Extraction_
After aligning the performances and scores, we extracted input features following the process outlined in the study by Rafee et al. [5]. We derived deviations between the scores and performances for note-wise features, encompassing aspects such as timing and velocity. Beyond considering feature deviations, we also incorporated the original note-wise features as part of our input data. A full list of the features used in our experiments is summarised in Table II. Two note-level features are defined as follows: _Inter-onset Interval_ (IOI), representing the temporal duration between the onset times of two consecutive notes, and _Offset Time Duration_ (OTD), signifying the time interval between the offset time of a note and the onset time of its subsequent note. To process the features into suitable input for our model, we organized them into sequences, preserving the order of the notes. These sequences were then stacked together to create the final input. The resulting shape of the input would be (_batch size_, _sequence length_, _number of features_), as shown on the left side of Fig. 2.
To examine the performance of our model under circumstances of limited information, we divided the sequences into segments of varying lengths respectively. This allowed us to gauge the model's capacity to manage scenarios with limited data availability, detailed further in Section IV.
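To illustrate the input construction described above, a short NumPy sketch (array names and sizes are ours) that stacks the per-note feature sequences and cuts them into fixed-length segments:

```python
import numpy as np

def make_segments(features, seg_len):
    """features: list of per-note feature sequences, each of shape (n_notes,).
    Returns an array of shape (n_segments, seg_len, n_features)."""
    x = np.stack(features, axis=-1)      # (n_notes, n_features), note order preserved
    n_segments = len(x) // seg_len       # trailing partial segment is dropped
    return x[:n_segments * seg_len].reshape(n_segments, seg_len, x.shape[-1])

# e.g. 13 features per note, segments of 1000 notes as in Studies I and II
feats = [np.random.randn(4321) for _ in range(13)]
batch = make_segments(feats, seg_len=1000)   # shape (4, 1000, 13)
```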
### _Model Architecture_
In light of the promising performance demonstrated by Convolutional Neural Networks (CNNs) in various classification tasks across different domains, we proposed a novel one-dimensional CNN model to address the pianist identification task. The architecture was determined through an empirical grid search, focusing on structural hyperparameters such as the number of layers and kernel size. The model architecture, depicted in Fig. 2, encompasses five convolutional layers followed by one dense layer, strategically designed to efficiently process the input data. All convolution layers are followed by a ReLU activation and a batch normalization layer. Dropout layers are added in order to avoid overfitting.
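A minimal PyTorch sketch consistent with this description follows; the channel width, kernel size, and dropout rate are our assumptions, since only the layer counts are specified here:

```python
import torch
import torch.nn as nn

class PianistCNN(nn.Module):
    def __init__(self, n_features=13, n_pianists=6, width=128, p_drop=0.3):
        super().__init__()
        layers, c_in = [], n_features
        for _ in range(5):  # five convolutional stages: Conv1d -> ReLU -> BatchNorm -> Dropout
            layers += [nn.Conv1d(c_in, width, kernel_size=3, padding=1),
                       nn.ReLU(),
                       nn.BatchNorm1d(width),
                       nn.Dropout(p_drop)]
            c_in = width
        self.conv = nn.Sequential(*layers)
        self.fc = nn.Linear(width, n_pianists)  # the single dense layer

    def forward(self, x):                # x: (batch, sequence length, number of features)
        x = x.transpose(1, 2)            # Conv1d expects (batch, channels, length)
        h = self.conv(x).mean(dim=-1)    # average over the note axis
        return self.fc(h)
```

Averaging over the note axis keeps the classification head independent of the input length, which is convenient given the varying segment lengths examined in Section IV.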
## III Experiments
We implemented our model using PyTorch [16], and monitored and recorded the experimental progress through the use of Wandb [17]. To achieve optimal model performance, we conducted an extensive hyperparameter tuning process using grid search. We specifically focused on parameters such as learning rate, weight decay, batch size, and the number of training epochs. This process was enhanced by leveraging the powerful capabilities of Wandb Sweeps. Consequently, our model underwent training with a batch size of 16 for a total of 1500 epochs, employing the Adam optimizer with an initial learning rate set to 8e-5 and a weight decay rate of 1e-7.
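These hyperparameters map directly onto a standard PyTorch training loop; the sketch below reuses the `PianistCNN` class from the previous snippet and substitutes random stand-in data for the real feature segments:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Stand-in data: 320 segments of 1000 notes x 13 features, labels for 6 pianists
xs = torch.randn(320, 1000, 13)
ys = torch.randint(0, 6, (320,))
train_loader = DataLoader(TensorDataset(xs, ys), batch_size=16, shuffle=True)

model = PianistCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=8e-5, weight_decay=1e-7)
criterion = torch.nn.CrossEntropyLoss()

for epoch in range(1500):               # training schedule reported above
    for x, y in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
```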
Our proposed model has only 6.1 million trainable parameters, showcasing remarkable efficiency. On average, a single experiment on a GeForce RTX 2080 Ti GPU takes
Fig. 1: Boxplots of information loss caused by the alignment process in _ID-400_ and _ID-1000_ datasets
Fig. 2: Model architecture of the proposed one-dimensional CNN
approximately 1.2 hours. This duration stands in stark contrast to the significantly lengthier training times encountered in the context of RNN-based hierarchical models, as proposed by Rafee et al. [5].
## IV Results
To thoroughly evaluate our proposed CNN model in addressing the pianist identification task, we conducted three studies. These studies examined the impacts of variable input sequence lengths, the diverse expressive features, and the datasets on the model's performance. To ensure a reliable assessment of the model, each experiment was repeated three to five times under consistent experimental settings. For a more straightforward comparison with the state-of-the-art [5], both Study I and II were conducted using the _ID-400_ dataset.
### _Study I: Effect of Varying Input Music Sequence Lengths_
The reliable identification of a pianist necessitates stable performance regardless of variations in the length of the musical input. We embarked on a series of experiments using all the features delineated in Section II-C to train our model. Experiments were conducted on complete musical pieces and segments of varying lengths, utilizing the _ID-400_ dataset. Mean values along with standard deviations pertaining to accuracy and F1-score for each experiment are tabulated in Table III. As inferred from the outcomes, our model demonstrated uniformly high performance when dealing with sequences comprising 1000 notes or fewer. However, incorporating the full scope of performances substantially bolstered the model's performance as opposed to relying solely on performance segments. Furthermore, our model surpassed the benchmark set by the state-of-the-art RNN-based hierarchical model [5] when we integrated more features into the training at both piece-wise and segment-wise levels. Our model attained a commensurate level of accuracy when trained with the same number of features as their study.
### _Study II: Effect of Different Input Features_
In order to investigate the impact of various input features, we elected five feature combinations and executed corresponding experiments on each group. These combinations are displayed in Table IV, where **D** symbolizes the usage of the deviation feature as a replacement for the original note-wise feature.
**C1** embodies 7 original note-wise features; **C2** omits the singular frequency-based feature, pitch, from **C1**; **C3** comprises only deviation features; **C4** replicates the same combination used in the study [5]; while **C5** incorporates all available features. Experiments were conducted on the _ID-400_ dataset utilizing music segments of 1000 notes. The mean accuracy from five iterations along with the standard deviation for each feature combination experiment is detailed in Table V.
The results highlight negligible differences when employing either note-wise features or deviation features in isolation for training the model. Incorporation of all the features collectively yields the optimal performance, suggesting that the model is more adept at identifying a performer's style when given the full set of related features. The comparison between **C1** and **C2** groups suggests that the frequency-based feature does not make a significant contribution to the identification process. Concurrently, the outcomes provide further evidence that the combination of velocity, duration, and IOI deviations proves to be a more reliable choice when solely utilizing deviation features, as discussed in [5].
### _Study III: Comparison between ID-400 with ID-1000_
Despite the implementation of a carefully designed data splitting algorithm, the relatively small size of the subset _ID-400_ impedes the creation of training, testing, and validation sets that maintain similar data distributions. Employing the
same algorithm, we generated five varied data splits for both the _ID-400_ and _ID-1000_ datasets, each of which underwent model testing. Table VI presents the average test accuracy across all data splits, alongside the highest accuracy achieved by the best models on both datasets. Experiments were conducted using sequences of 1000 notes and 13 features. The outcomes, as outlined in Table VI and Fig. 3, reveal that training on the larger _ID-1000_ dataset yields a model that is less sensitive to alterations in data splits, thereby improving the robustness in identifying the six pianists.
## V Conclusion
We presented our investigation of the application of convolutional neural networks to the pianist identification task. Our proposed convolutional neural network model shows promising results in identifying virtuoso pianists. Three studies were conducted, analysing the effects of varying input sequence lengths, the utilization of diverse expressive features, and the impacts of different datasets on the model's performance. Our findings suggest that our model performs best when handling complete musical performances rather than fragments, outperforming the state-of-the-art with 85.3% accuracy when integrating a larger set of features into the training phase. Our model uses fewer computational resources, leading to significant time savings during training compared with the state-of-the-art. In addition, training on our proposed larger _ID-1000_ dataset resulted in a model less sensitive to alterations in data splits, thereby improving the robustness in identifying the six pianists.
Our model serves as an exemplar for embedded systems that aspire to decode and respond to nuanced musical cues. Just as voice-operated devices discern users' vocal nuances, our proposed model distinguishes pianists based on their expressive nuances. There are numerous further applications of the technology in the IoS and IoMusT contexts, including the population of music related ontologies [18, 19, 20, 21] with performer identity or style related information.
Future work could extend these findings, utilising the proposed model to develop identifiers for more pianists. Such extensions will offer a more comprehensive understanding of pianist-specific performance characteristics, and enrich the applications of the current system. It would also be beneficial to evaluate the model's generalization abilities on unseen compositions.
|
2304.10515 | CP-CNN: Core-Periphery Principle Guided Convolutional Neural Network | The evolution of convolutional neural networks (CNNs) can be largely
attributed to the design of its architecture, i.e., the network wiring pattern.
Neural architecture search (NAS) advances this by automating the search for the
optimal network architecture, but the resulting network instance may not
generalize well in different tasks. To overcome this, exploring network design
principles that are generalizable across tasks is a more practical solution. In
this study, We explore a novel brain-inspired design principle based on the
core-periphery property of the human brain network to guide the design of CNNs.
Our work draws inspiration from recent studies suggesting that artificial and
biological neural networks may have common principles in optimizing network
architecture. We implement the core-periphery principle in the design of
network wiring patterns and the sparsification of the convolution operation.
The resulting core-periphery principle guided CNNs (CP-CNNs) are evaluated on
three different datasets. The experiments demonstrate the effectiveness and
superiority compared to CNNs and ViT-based methods. Overall, our work
contributes to the growing field of brain-inspired AI by incorporating insights
from the human brain into the design of neural networks. | Lin Zhao, Haixing Dai, Zihao Wu, Dajiang Zhu, Tianming Liu | 2023-03-27T03:59:43Z | http://arxiv.org/abs/2304.10515v1 | # CP-CNN: Core-Periphery Principle Guided Convolutional Neural Network
###### Abstract
The evolution of convolutional neural networks (CNNs) can be largely attributed to the design of its architecture, i.e., the network wiring pattern. Neural architecture search (NAS) advances this by automating the search for the optimal network architecture, but the resulting network instance may not generalize well in different tasks. To overcome this, exploring network design principles that are generalizable across tasks is a more practical solution. In this study, we explore a novel brain-inspired design principle based on the core-periphery property of the human brain network to guide the design of CNNs. Our work draws inspiration from recent studies suggesting that artificial and biological neural networks may have common principles in optimizing network architecture. We implement the core-periphery principle in the design of network wiring patterns and the sparsification of the convolution operation. The resulting core-periphery principle guided CNNs (CP-CNNs) are evaluated on three different datasets. The experiments demonstrate the effectiveness and superiority compared to CNNs and ViT-based methods. Overall, our work contributes to the growing field of brain-inspired AI by incorporating insights from the human brain into the design of neural networks.
Core-periphery Graph, Convolutional Neural Network, Image Classification.
## 1 Introduction
Convolutional neural networks (CNNs) have greatly reshaped the paradigm of image processing with impressive performances rivaling human experts in the past decade [1, 2, 3]. Though with a biologically plausible inspiration from the cat visual cortex [1, 4], the evolution and success of CNNs can be largely attributed to the design of network architecture, i.e., the wiring pattern of neural network and the operation type of network nodes. Early CNNs such as AlexNet [5] and VGG [6] adopted a chain-like wiring pattern where the output of the preceding layer is the input of the next layer. Inception CNNs employ an Inception module that concatenates multiple branching pathways with different operations [7, 8]. ResNets propose a wiring pattern \(x+F(x)\) aiming to learn a residual mapping that enables much deeper networks, and have been widely adapted for many scenarios such as medical imaging with superior performance and generalizability [9]. Orthogonally, depthwise separable convolution operation greatly reduces the number of parameters and enables extremely deeper CNNs [10]. Recent studies also suggest that CNNs can benefit from adopting convolution operation with large kernels (e.g., \(7\times 7\)) [11, 12] with comparable performance with Swin Transformer [13]. By combining dilated convolution operation and large convolution kernel, a CNN-based architecture can achieve state-of-the-art in some visual tasks [14].
Neural Architecture Search (NAS) advances this trend by jointly optimizing the wiring pattern and the operation to perform. Basically, NAS methods sample from a series of possible architectures and operations through various optimization methods such as reinforcement learning (RL) [15], evolutionary methods [16], gradient-based methods [17], weight-sharing [18], and random search [19]. Despite its effectiveness, NAS does not offer a general principle for network architecture design. The outcome of NAS for each run is a neural network instance for a specific task, which may not be generalized to other tasks. For example, an optimal network architecture for natural image classification may not be optimal for X-ray image classification. Hence, some studies explored the design space of neural architectures [20] and investigated the general design principles that can be applied to various scenarios.
Recently, a group of studies suggested that artificial neural networks (ANNs) and biological neural networks (BNNs) may share common principles in optimizing the network architecture. For example, the property of small-world in brain structural and functional networks are recognized and extensively studied in the literature [21, 22, 23]. In [24], the neural networks based on Watts-Strogatz (WS) random graphs with small-world properties yield competitive performances compared with hand-designed and NAS-optimized models. Through quantitative post-hoc analysis, [25] found that the graph structure of top-performing ANNs such as CNNs and multilayer perceptron (MLP) is similar to those of real BNNs such as the network in macaque cortex. [26] synchronized the activation of ANNs and BNNs and found that ANNs with higher performance are similar to BNNs in terms of visual representation activation. Together, these studies suggest the potential of taking advantage of prior knowledge from brain science to guide the architecture design of neural networks.
Motivated by these aspects, we explore a brain-inspired Core-Periphery (CP) principle for guiding the architecture design of CNNs. Core-Periphery organization is well-recognized in structural and functional brain networks of humans and other mammals [27, 28], which boosts the
efficiency of information segregation, transmission and integration. We illustrate the concept of Core-Periphery network organization in Figure 1. Core-core node pairs have the strongest connection in comparison to core-periphery node pairs (moderate) and periphery-periphery node pairs (the lowest). We design a novel core-periphery graph generator according to this property and introduce a novel core-periphery principle guided CNN (CP-CNN). CP-CNN follows a typical hierarchical scheme of CNNs (e.g., ResNet [9]) which consists of a convolutional stem and four consecutive blocks. For each block, we abandon the traditional chain-like wiring pattern but adopt a directed acyclic computational graph which is mapped from the generated core-periphery graph where each node corresponds to an operation such as convolution. In addition, we sparsify the convolution operation in a channel-wise manner and enforce it to follow a core-periphery graph constraint. The proposed CP-CNN is evaluated on CIFAR-10 dataset and two medical imaging datasets (INBreast, NCT-CRC). The experiments demonstrate the effectiveness of the CP-CNN, as well as its superior performance over state-of-the-art CNN and ViT-based methods.
The main contributions of our work are summarized as follows:
* We proposed a novel brain-inspired CP-CNN model which follows a core-periphery design principle and outperforms the state-of-the-art CNN and ViT baselines.
* We proposed a core-periphery graph-constrained convolution operation, which reduces the complexity of the model and improves its performances.
* Our work paves the road for future brain-inspired AI to leverage the prior knowledge from the human brain to inspire the neural network design.
## 2 Related Works
### _Neural Architecture of CNNs_
**Wiring Pattern.** The development of the wiring pattern significantly contributes to CNN's performance. The early neural architecture of CNNs adopted chain-like wiring patterns, such as AlexNet [5] and VGG [6]. Inception [7, 8] concatenates several parallel branches with different operations together to "widen" the CNNs. ResNets [9] propose a wiring pattern \(x+F(x)\) for residual learning, which eliminates the gradient vanishing and makes the CNNs much deeper. DenseNet adopted a wiring pattern \([x,F(x)]\) which concatenates the feature maps from the previous layer. The wiring pattern of ResNet and DenseNet is well generalized in various scenarios and applications with improved performances.
**Sparsity in Convolution Operation.** Early CNNs used dense connectivity between input and output features, where every output feature is connected to every input feature. To reduce the parameter of such dense connectivity, depthwise separable convolution [10] was proposed to decompose the convolution operation as depthwise convolution and pointwise convolution, enabling much deeper CNNs. Another group of studies explored the pruning-based method to introduce sparsity in convolution operation, including channel pruning [29], filter pruning [30, 31], structured pruning [32]. The introduced sparsity reduced the number of parameters, making the networks easier to train, and also improved their performance on various tasks [33].
**Neural Architecture Search.** NAS jointly optimizes the wiring pattern and the operation to perform. NAS methods predefined a search space, and a series of possible architectures and operations are sampled and selected based on various optimization methods such as reinforcement learning (RL) [15], evolutionary methods [16], gradient-based methods [17], weight-sharing [18], and random search [19]. However, the predefined search space still limited the feasible neural architectures to be sampled, regardless of the optimization methods. Meanwhile, the search process usually demands huge computational resources, while the searched architecture may not generalize well for different tasks.
### _Core-Periphery Structure_
Core-periphery structure represents a relationship between nodes in a graph where the core nodes are densely connected with each other while periphery nodes are sparsely connected to the core nodes and among each other [34, 35]. Core-periphery graph has been applied in a variety of fields, including social network analysis [34, 36], economics [37], biology such as modeling the structure of protein interaction networks [38]. In the brain science field, it has been shown that brain dynamics has a core-periphery organization [27]. The functional brain networks also demonstrate a core-periphery structure [28]. A recent study revealed the core-periphery characteristics of the human brain from a structural perspective [39]. It is shown that gyri and sulci, two prominent cortical folding patterns, could cooperate as a core-periphery network which improves the efficiency of information transmission in the brain [39].
### _Connection of ANNs and BNNs_
Recently, a group of studies suggested that artificial neural networks (ANNs) and biological neural networks (BNNs) may share some common principles in optimizing the network architecture. For example, the property of small-world in brain structural and functional networks are recognized
Fig. 1: Core-periphery graph. The core nodes are denoted by red color, and the periphery nodes are denoted by blue color.
and extensively studied in the literature [21, 22, 23]. Surprisingly, in [24], the neural networks based on Watts-Strogatz (WS) random graphs with small-world properties yield competitive performances compared with hand-designed and NAS-optimized models. Through quantitative post-hoc analysis, [25] found that the graph structure of top-performing ANNs such as CNNs and multilayer perceptron (MLP) is similar to those of real BNNs such as the network in macaque cortex. [26] synchronized the activation of ANNs and BNNs and found that ANNs with higher performance are similar to BNNs in terms of visual representation activation. Together, these studies suggest the potential of taking advantage of prior knowledge from brain science to guide the model architecture design.
## 3 Methodology
In this section, we introduce the generation of the core-periphery graph and present the details of the CP-CNN framework, including the network architecture of CP-CNN, the construction of core-periphery block (CP-Block), and core-periphery constrained convolution operation.
### _Generation of Core-periphery Graph_
The core-periphery graph (CP graph) has a fundamental signature that the "core-core" node pairs have the strongest interconnections compared with the "core-periphery node" pairs (moderate) and "periphery-periphery" node pairs (weakest). According to this property, we introduce a novel CP graph generator to produce a wide spectrum of CP graphs in this subsection.
Specifically, the proposed CP graph generator is parameterized by the total number of nodes \(n\), the number of "core" nodes \(n_{c}\), and the wiring probabilities \(p_{cc}\), \(p_{cp}\), \(p_{pp}\) between "core-core", "core-periphery", "periphery-periphery" node pairs, respectively. The CP graph is generated based on the following process: for each "core-core" node pair, we sample a random number \(r\) from a uniform distribution on \([0,1)\). If the wiring probability \(p_{cc}\) is greater than the random number \(r\), the "core-core" node pair is connected. The same procedure is also applied to "core-periphery" node pairs and "periphery-periphery" node pairs with the wiring probabilities \(p_{cp}\) and \(p_{pp}\), respectively. We summarize the whole generation process in Algorithm 1. With different combinations of \(n\), \(n_{c}\) and wiring probabilities \(p_{cc}\), \(p_{cp}\), \(p_{pp}\), we can generate a wide range of CP graphs in the space, which are then used for constructing the CP-CNN framework introduced in the following subsections.
```
Input: n: number of nodes; n_c: number of core nodes;
       p_cc, p_cp, p_pp: wiring probabilities
Output: G: core-periphery graph
G = ∅
// "core-core" node pairs
for i <- 0 to n_c do
    for j <- i to n_c do
        sample a uniform random number r in [0, 1)
        if r < p_cc then
            G <- (i, j)
        end if
    end for
end for
// "core-periphery" node pairs
for i <- 0 to n_c do
    for j <- n_c to n do
        sample a uniform random number r in [0, 1)
        if r < p_cp then
            G <- (i, j)
        end if
    end for
end for
// "periphery-periphery" node pairs
for i <- n_c to n do
    for j <- i to n do
        sample a uniform random number r in [0, 1)
        if r < p_pp then
            G <- (i, j)
        end if
    end for
end for
```
**Algorithm 1** Generation of core-periphery graph
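A compact NumPy rendering of Algorithm 1 (our own sketch, using an adjacency-matrix representation; the authors' implementation may differ):

```python
import numpy as np

def generate_cp_graph(n, n_core, p_cc, p_cp, p_pp, seed=None):
    """Undirected core-periphery graph as an n x n adjacency matrix;
    nodes 0..n_core-1 are core, the remaining nodes are periphery."""
    rng = np.random.default_rng(seed)
    adj = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            if j < n_core:                 # both nodes are core (since i < j)
                p = p_cc
            elif i < n_core:               # one core node, one periphery node
                p = p_cp
            else:                          # both nodes are periphery
                p = p_pp
            if rng.random() < p:
                adj[i, j] = adj[j, i] = 1
    return adj

# e.g. 16 nodes, 4 of them core, with the expected ordering p_cc > p_cp > p_pp
g = generate_cp_graph(16, 4, p_cc=0.9, p_cp=0.4, p_pp=0.1, seed=0)
```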
### _CP-CNN Framework_
Our macro design of CP-CNN architecture follows a typical hierarchical scheme of CNNs (e.g., ResNet [9]) with a convolutional stem and several convolution blocks (Figure 2(a)). Specifically, the input image is firstly input into a convolution stem which consists of two \(3\times 3\) convolutions with a stride of 2. The feature maps from the convolution stem are then processed by four consecutive core-periphery blocks (CP-Blocks, discussed in detail in Section 3.3 below). Within each CP-Block, the size of the feature map is decreased by \(2\times\) while the number of channels is increased by \(2\times\). A classification head with \(1\times 1\) convolution, global average pooling and a fully connected layer is added after the CP-Block to produce the final prediction.
### _Core-periphery Block_
Unlike the traditional chain-like structure, our core-periphery block has a "graph" structure (Figure 2(b)) which is implemented based on the generated core-periphery graph. To construct the core-periphery block, we need to convert the generated core-periphery graph into computational graph in the neural network. However, the generated core-periphery graphs are undirected while the computational graph in neural networks are directed and acyclic. So the first step is to convert the generated core-periphery graph into a directed acyclic graph (DAG), and then map the DAG into a computation graph for the CP-Block.
Specifically, we adopt a heuristic strategy to perform this conversion. We randomly assign a unique label ranging from 1 to \(n\) (the number of nodes in the graph) to each node in the core-periphery graph. Then, we convert every undirected edge into a directed edge that always starts from the node with the smaller label and ends at the node with the larger label. This guarantees that there are no cycles in the resulting directed graph, i.e., the resulting graph is a DAG; a minimal sketch of this conversion is given below. The next step is to map the DAG onto a computational graph in the neural network. To do so, we first need to define the nodes and edges of the computational graph.
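The labelling-based conversion amounts to the following (a sketch; function and variable names are ours, not the paper's):

```python
import numpy as np

def to_dag(edges, n, seed=None):
    """Orient an undirected edge list into a DAG via random node labels.

    Each node receives a unique random label; every undirected edge is
    directed from the smaller label to the larger one, which rules out
    directed cycles."""
    rng = np.random.default_rng(seed)
    label = rng.permutation(n)  # label[v] is the unique label of node v
    directed = []
    for u, v in edges:
        if label[u] < label[v]:
            directed.append((u, v))
        else:
            directed.append((v, u))
    return directed
```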
**Edges.** As in most computation graphs, a directed edge represents the direction of data flow: a node sends its data to another node along the edge.
**Nodes.** We define the nodes in our computational graph as processing units that aggregate and process the data from input edges and distribute the processed data to other nodes along the output edges. As illustrated in Figure 2(c), the data tensors arriving along the input edges are first aggregated through a weighted sum with learnable weights. The combined tensor is then processed by an operation unit consisting of a ReLU activation, a \(3\times 3\) core-periphery convolution (discussed in detail in Section 3.4 below), and batch normalization. The unit's output is distributed as identical copies to other nodes along the output edges.
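One node could be implemented as follows (a PyTorch sketch; the paper only states that the aggregation weights are learnable, so the sigmoid used here to keep them positive is our assumption, and a plain `Conv2d` stands in for the core-periphery constrained convolution of Section 3.4):

```python
import torch
import torch.nn as nn

class NodeOp(nn.Module):
    """One node of a CP-Block: learnable weighted-sum aggregation of the
    tensors arriving on the input edges, followed by the
    ReLU -> 3x3 conv -> BatchNorm operation unit."""

    def __init__(self, n_inputs, channels):
        super().__init__()
        self.agg_weights = nn.Parameter(torch.zeros(n_inputs))
        self.unit = nn.Sequential(
            nn.ReLU(),
            # Stand-in for the core-periphery constrained convolution.
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
        )

    def forward(self, inputs):
        # inputs: list of feature maps, one per input edge.
        w = torch.sigmoid(self.agg_weights)  # keep weights positive (our choice)
        x = sum(wi * t for wi, t in zip(w, inputs))
        return self.unit(x)
```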
Using the defined nodes and edges, we obtain an intermediate computational graph. However, this graph may have several input nodes (those without input edges) and output nodes (those without output edges), while each block is expected to have exactly one input and one output. To address this, we introduce an additional input node that performs a convolution with a stride of 2 on the output of the previous block (or of the convolution stem) and sends identical feature maps to all original input nodes. Similarly, we introduce an output node that aggregates the feature maps from all original output nodes through a learnable weighted sum, without performing any convolution. This yields the CP-Block, which can be stacked in the CP-CNN as previously discussed.
### _Core-periphery Constrained Convolution_
The CP-Block can also be constructed using conventional convolution in the nodes of the computational graph. However, conventional convolution is dense, whereas incorporating sparsity into the neural network can significantly lower its complexity and enhance its performance, especially in scenarios with limited training samples such as medical imaging.
Inspired by this, we propose a novel Core-Periphery Constrained Convolution that utilizes a core-periphery graph as a constraint to sparsify the convolution operation. Specifically, we divide the input and output channels of the convolution into \(n\) groups and represent the relationship between them as a bipartite graph (Figure 2(c)). In conventional convolution, the bipartite graph is densely connected, with all input channels in a filter contributing to the production of all output channels. For example, output channels in node \(\#1\) integrate information from all input channels. In contrast, a sparse bipartite graph means that only a portion of the input channels is used to generate the output channels. As shown in Figure 2(c), the output channels in node \(\#1\) only integrate information from input channels in node \(\#1\) and node \(\#2\). By sparsifying the convolution operation with a predefined bipartite graph, the convolution is constrained by the graph.

Fig. 2: Illustration of the proposed CP-CNN framework. (a) The architecture of the CP-CNN with one convolution stem, four consecutive CP-Blocks, followed by one \(1\times 1\) convolution, one pooling and one fully-connected layer. (b) The construction of CP-Block and the illustration of the node in CP-Block. The core-periphery graph is mapped as a computational graph for CP-Block based on the node operation. (c) Utilizing core-periphery graph to constrain the convolution operation.
We use the core-periphery graph as a constraint by converting the generated graph into a bipartite graph. The core-periphery graph is first represented as the relational graph proposed in [25] which represents the message passing between nodes. The relational graph is then transformed into a bipartite graph, where the nodes in two sets correspond to the divided sets of input and output channels, respectively. The edges in the bipartite graph represent message passing in the relational graph. We apply the resulting bipartite graph as a constraint to the convolution operation to obtain the core-periphery constrained convolution. It is worth noting that we apply the same core-periphery graph across the whole network, while the constrained convolution may vary among different nodes and blocks due to the varying number of channels.
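One way to realise such a graph-constrained convolution is to mask the weight tensor of a standard convolution with the channel-level expansion of the bipartite adjacency matrix. The sketch below is our own illustration, not the authors' code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CPConv2d(nn.Conv2d):
    """3x3 convolution whose channel connectivity is constrained by a
    bipartite graph. `bipartite` is an (n_groups, n_groups) 0/1 tensor:
    entry [a, b] == 1 allows input-channel group b to contribute to
    output-channel group a."""

    def __init__(self, in_ch, out_ch, bipartite, **kw):
        super().__init__(in_ch, out_ch, kernel_size=3, padding=1, **kw)
        n = bipartite.shape[0]
        assert in_ch % n == 0 and out_ch % n == 0
        # Expand the group-level adjacency to a channel-level weight mask.
        mask = (bipartite.repeat_interleave(out_ch // n, dim=0)
                         .repeat_interleave(in_ch // n, dim=1))
        self.register_buffer("mask", mask[:, :, None, None].float())

    def forward(self, x):
        # Masked weights: forbidden input/output channel pairs stay zero.
        return F.conv2d(x, self.weight * self.mask, self.bias,
                        self.stride, self.padding, self.dilation, self.groups)
```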
## 4 Experiments
**Datasets.** We evaluate the proposed framework on three datasets: one of natural images and two of medical images. **CIFAR-10** [40] consists of 60,000 \(32\times 32\) images in 10 classes, with 50,000 images in the training set and 10,000 images in the test set. In our experiments, we upsample all original images in CIFAR-10 to \(224\times 224\). **NCT-CRC** [41] contains 100,000 non-overlapping training image patches extracted from hematoxylin and eosin (H&E) stained histological images of human colorectal cancer (CRC) and normal tissue [41]. An additional 7,180 image patches from 50 patients with no overlap with the patients in the training set are used as a validation set. Both training and validation sets have 9 classes and a patch size of \(224\times 224\). The **INbreast** dataset [42] includes 410 full-field digital mammography images collected during low-dose X-ray irradiation of the breast. These images are classified into normal (302 cases), benign (37 cases), and malignant (71 cases) classes. We randomly split the patients into 80% training and 20% testing. To balance the training dataset, we perform several random crops of size \(1024\times 1024\) as well as contrast-related augmentations for each image, resulting in 482 normal samples, 512 benign mass samples, and 472 malignant mass samples. The images in both sets are downsized to \(224\times 224\).
**Implementation Details.** In our experiments, we set the number of nodes in the core-periphery graph to 16 and vary the number of core nodes. The three probabilities are set as \(p_{cc}=0.9\), \(p_{cp}=0.5\), \(p_{pp}=0.1\). The proposed model and all compared baselines are trained for 50 epochs with a batch size of 512. We use the AdamW optimizer [43] with \(\beta_{1}=0.9\) and \(\beta_{2}=0.999\) and a cosine annealing learning rate scheduler with an initial learning rate of \(10^{-4}\) and 5 warm-up epochs. The framework is implemented with the PyTorch ([https://pytorch.org/](https://pytorch.org/)) deep learning library, and the model is trained on 4 NVIDIA A5000 GPUs.
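In PyTorch, this schedule could be assembled as follows (a sketch; the linear warm-up shape is our assumption, as the paper only specifies the number of warm-up epochs):

```python
import torch
from torch.optim.lr_scheduler import CosineAnnealingLR, LinearLR, SequentialLR

# `model` is assumed to be an instantiated CP-CNN.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, betas=(0.9, 0.999))
# 5 linear warm-up epochs, then cosine annealing over the remaining 45.
scheduler = SequentialLR(
    optimizer,
    schedulers=[LinearLR(optimizer, start_factor=0.01, total_iters=5),
                CosineAnnealingLR(optimizer, T_max=45)],
    milestones=[5],
)
```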
### _Comparison with Baselines_
To validate the proposed CP-CNN, we compare its performance with various state-of-the-art baselines, which can be roughly categorized as CNN-based and ViT-based methods. The CNN-based category contains ResNet [9], EfficientNet [44], RegNet [20], and ConvNeXt [12]. The ViT-based class contains the vanilla ViT [45], CaiT [46], and the Swin Transformer [13]. Considering the amount of data, we set the number of nodes in the CP graph to 16, resulting in a CP-CNN model with around 22 million parameters. For the compared methods, we re-implement them and only report the tiny- or small-scale settings with parameter counts comparable to CP-CNN.
Table I demonstrates a comprehensive comparison of the Top-1 classification accuracy (%) achieved by different models on three datasets, as well as the number of parameters and flops. It is observed that CNNs generally exhibit superior performance compared to ViTs. This can be attributed to the inductive biases in CNNs, which are essential in scenarios with a limited number of training samples. This is also suggested by the observation that SwinV2-T, which incorporates inductive biases, outperforms other ViT models.
Our proposed CP-CNN model achieves state-of-the-art performance compared to other CNN-based methods, demonstrating its superiority in terms of accuracy. Specifically, our CP-CNN outperforms the baseline models in all settings on the CIFAR-10 dataset. For the NCT-CRC dataset, our CP-CNN model achieves higher accuracy than both CNNs and ViTs, except for the sparse settings with only 2 or 4 core nodes. Furthermore, on the INbreast dataset, our sparse CP-CNN model with 2 core nodes achieves state-of-the-art performance. Importantly, this superior performance is achieved with a number of parameters and FLOPs comparable to the other models. Thus, the proposed CP-CNN can be a promising solution for image classification tasks, offering both high accuracy and efficiency.
It is also noteworthy that our CP-CNN model outperforms the RegNet model, which is likewise based on an exploration of the neural architecture design space. This indicates that the brain-inspired core-periphery design principle may generalize better than empirical design principles such as those in RegNet.
### _Sparsity of CP Graph_
The number of core nodes controls the sparsity of the generated CP graph: more core nodes yield denser connections. In this subsection, we investigate the effects of CP graph sparsity on classification performance.
As illustrated in Table I, we fix the total number of nodes to 16 and vary the number of core nodes from 2 to 14 in steps of 2, resulting in graph sparsity ranging from 0.125 to 0.875 with an interval of 0.125. For the CIFAR-10 dataset, we observed an increase in classification accuracy with the number of core nodes, reaching a peak with 14 core nodes (sparsity = 0.875). This is probably because a dense graph increases the capacity of the CP-CNN model, so it can represent more complex relationships. In contrast, for the INbreast dataset, the sparsest CP-CNN model (2 core nodes, sparsity = 0.125) yields the best performance. This may be because the dataset has only thousands of training samples: a large and dense model may suffer from overfitting, which reduces performance. For the NCT-CRC dataset, the performance increased with sparsity up to the highest accuracy at a sparsity of 0.5 and slightly decreased with a denser graph. This may be because a sparse model with low capacity cannot represent the complex relationships in the dataset, while a dense model may overfit; at a sparsity of 0.5, the right balance between model capacity and dataset complexity is achieved. Overall, the sparsity of the CP graph affects the capacity of the CP-CNN model and, thus, the performance on different datasets. Despite this, the CP-CNN model still shows comparable or superior performance compared with the baseline models.
### _Comparison with Random Graphs_
To validate the effectiveness of the CP graph, we replace the CP graph in the CP-CNN model with two random graphs: the Erdos-Renyi (ER) graph [48] and the Watts-Strogatz (WS) graph [49]. The ER graph is parameterized by \(P\), the probability that two nodes are connected by an edge; the WS graph is considered to have the small-world property. In our experiment, we randomly generate 10 samples each for the ER, WS, and CP graphs with the same sparsity. In Figure 3, we report the average classification accuracy across the 10 samples for the different graphs and sparsity levels.
It is observed that the CP graph with a sparsity of 0.125 outperforms all other settings and graphs on the INbreast dataset, whereas for other sparsity settings, different graphs achieve the best accuracy. For the NCT-CRC dataset, the CP graph outperforms the ER and WS graphs at sparsity values of 0.375, 0.5, and 0.625, and achieves the highest accuracy among all settings and graphs at a sparsity of 0.5.
| Category | Models | CIFAR-10 [40] | NCT-CRC [41] | INbreast [42] | Param (M) | Flops (G) |
| --- | --- | --- | --- | --- | --- | --- |
| CNNs | ResNet-18 [9] | 90.35 | 95.96 | 83.56 | 11.18 | 1.8 |
| CNNs | ResNet-50 [9] | 90.55 | 95.11 | 82.19 | 23.53 | 4.1 |
| CNNs | EfficientNet-B3 [44] | 82.52 | 95.25 | / | 10.71 | 1.8 |
| CNNs | EfficientNet-B4 [44] | 81.73 | 95.21 | / | 17.56 | 4.4 |
| CNNs | RegNetY-016 [20] | 88.03 | 95.91 | / | 10.32 | 1.6 |
| CNNs | RegNetX-032 [20] | 88.78 | 95.91 | / | 14.30 | 3.2 |
| CNNs | ConvNeXt-Nano [12] | 86.88 | 95.10 | / | 14.96 | 2.5 |
| CNNs | ConvNeXt-Tiny [12] | 86.32 | 94.64 | / | 27.83 | 4.5 |
| ViTs | ViT-Tiny [45] | 76.10 | 90.63 | / | 5.50 | 1.3 |
| ViTs | ViT-Small [45] | 69.37 | 89.79 | / | 21.67 | 4.6 |
| ViTs | CaiT-XXS-24 [46] | 73.99 | 92.06 | / | 11.77 | 2.5 |
| ViTs | CaiT-XXS-36 [46] | 74.36 | 92.41 | / | 17.11 | 3.8 |
| ViTs | SwinV2-T [47] | 81.76 | 95.61 | / | 27.57 | 5.9 |
| CP-CNN | N=16, C=2 | 91.22 | 95.28 | **85.75** | 22.21 | 3.4 |
| CP-CNN | N=16, C=4 | 91.71 | 95.43 | 82.19 | 22.21 | 3.4 |
| CP-CNN | N=16, C=6 | 91.99 | 96.34 | 83.01 | 22.21 | 3.4 |
| CP-CNN | N=16, C=8 | 92.41 | 96.78 | 83.01 | 22.21 | 3.4 |
| CP-CNN | N=16, C=10 | 94.43 | 96.65 | 83.28 | 22.21 | 3.4 |
| CP-CNN | N=16, C=12 | 92.54 | 96.29 | 83.56 | 22.21 | 3.4 |
| CP-CNN | N=16, C=14 | **92.65** | 96.60 | 84.11 | 22.21 | 3.4 |

TABLE I: Top-1 classification accuracy (%) of proposed and compared models on the CIFAR-10, NCT-CRC, and INbreast datasets, along with the number of parameters (M) and flops (G). The models with the highest accuracy are highlighted in **bold**. For some settings, the models did not converge, indicated by a slash (/).
Fig. 3: The comparison of ER, WS, and CP graph with varying sparsity based on the CP-CNN model, in terms of accuracy, using the INBreast and NCT-CRC datasets.
These results suggest that the graph structure and sparsity can significantly affect the performance of the CP-CNN model on different datasets. However, with specific sparsity settings, the CP graph provides superior performance compared to the ER and WS graphs, i.e., the CP graph has a higher performance upper bound than the ER and WS graphs.
In addition, the CP-CNN models based on ER and WS graphs also show performance competitive with the CNNs and ViTs in Table I, highlighting the potential of incorporating graph structures in CNNs for improving their performance and generalization ability.
## 5 Discussion
**Brain-inspired AI.** The brain is a highly complex network of interconnected neurons that communicate with each other to process and transmit information. Core-periphery property is a representative signature of the brain network. The results reported in the study suggest that incorporating the properties/principles of brain networks can effectively improve the performance of CNNs. Our study provides a promising solution and contributes to brain-inspired AI by leveraging the prior knowledge of the human brain to inspire the design of ANNs.
**Limitations.** The sparsity of the CP graph can affect the capacity of the CP-CNN model. The experiments demonstrated that the optimal capacity of the CP-CNN model may vary depending on the dataset and the specific problem being solved. Line and grid search may help determine the optimal sparsity for different datasets, but how to search for the optimal sparsity efficiently remains an open question. In addition, the proposed CP-CNN model is evaluated at a scale of 22 million parameters, which is suitable for relatively small datasets, especially those in medical imaging scenarios. The performance of a larger-scale CP-CNN model on a larger dataset, such as ImageNet-1K, will be investigated in the future.
## 6 Conclusion
In this study, we explored a novel brain-inspired core-periphery design principle to guide the design of CNNs. The core-periphery principle was implemented in both the design of network wiring patterns and the sparsification of the convolution operation. The experiments demonstrate the effectiveness and superiority of the CP principle-guided CNNs compared to CNNs and ViT-based methods. In general, our study advances the growing field of brain-inspired artificial intelligence by integrating prior knowledge from the human brain to inspire the design of artificial neural networks.
|
2302.09321 | Implicit Solvent Approach Based on Generalised Born and Transferable
Graph Neural Networks for Molecular Dynamics Simulations | Molecular dynamics (MD) simulations enable the study of the motion of small
and large (bio)molecules and the estimation of their conformational ensembles.
The description of the environment (solvent) has thereby a large impact.
Implicit solvent representations are efficient but in many cases not accurate
enough (especially for polar solvents such as water). More accurate but also
computationally more expensive is the explicit treatment of the solvent
molecules. Recently, machine learning (ML) has been proposed to bridge the gap
and simulate in an implicit manner explicit solvation effects. However, the
current approaches rely on prior knowledge of the entire conformational space,
limiting their application in practice. Here, we introduce a graph neural
network (GNN) based implicit solvent that is capable of describing explicit
solvent effects for peptides with different composition than contained in the
training set. | Paul Katzberger, Sereina Riniker | 2023-02-18T12:47:23Z | http://arxiv.org/abs/2302.09321v1 | Implicit Solvent Approach Based on Generalised Born and Transferable Graph Neural Networks for Molecular Dynamics Simulations
###### Abstract
Molecular dynamics (MD) simulations enable the study of the motion of small and large (bio)molecules and the estimation of their conformational ensembles. The description of the environment (solvent) has thereby a large impact. Implicit solvent representations are efficient but in many cases not accurate enough (especially for polar solvents such as water). More accurate but also computationally more expensive is the explicit treatment of the solvent molecules. Recently, machine learning (ML) has been proposed to bridge the gap and simulate, in an implicit manner, explicit solvation effects. However, the current approaches rely on prior knowledge of the entire conformational space, limiting their application in practice. Here, we introduce a graph neural network (GNN) based implicit solvent that is capable of describing explicit solvent effects for peptides with compositions different from those contained in the training set.
## 1 Introduction
Molecular dynamics (MD) simulations employ Newton's equations of motion to study the dynamics of (bio)molecular systems [1]. In recent years, MD has not only become a pivotal tool for the investigation of biomolecular processes such as membrane permeation of drug molecules or protein folding [2], but has also accelerated and supported drug discovery [3]. The conformational ensembles of molecules are strongly influenced by the surrounding medium. While intramolecular hydrogen bonds (H-bonds) are favoured in vacuum and apolar solvents (e.g., chloroform), they are generally disfavoured in polar solvents (e.g., water) [4]. Such effects of the (local) environment can be incorporated by explicitly simulating solvent molecules. This approach provides good accuracy since it includes both short-range and long-range interactions, both of which are needed to describe an accurate conformational ensemble [5]. However, explicit-solvent simulations come at the cost of substantially increasing the number of degrees of freedom (DOF) in the system, which results in substantially higher computational costs as well as slower effective sampling [6]. In addition, the potential energy of a single solute conformation is no longer an instantaneous property in explicit-solvent simulations, as an infinite number of solvent configurations (i.e., arrangements of the solvent molecules around the solute) exist for a single solute conformation. Thus, the prediction of the potential energy of a single conformer requires integrating out the contributions of individual solvent configurations [5].
To simultaneously retain the instantaneously of the potential energy and to reduce the number of DOF in the system, implicit-solvent methods have been developed [7]. These approaches aim at modelling the solute-solvent interactions in an implicit manner by predicting the mean solvation energy and forces for a given solute conformation [6]. The most common implicit-solvent approach replaces the solvent by a continuum electrostatic. Examples are Poisson-Boltzmann (PB) [8], generalised Born (GB) [9], or fast analytical continuum treatments of solvation (FACTS) [10]. Note that GB and FACTS models are approximations of the PB model. However, current implicit-solvent models do not accurately reproduce the short-range effects of solvent molecules, and thus often do not reproduce the secondary
structures of peptides correctly [11]. Only very recently has the use of machine learning (ML) approaches started to be explored for implicit-solvent models. Chen _et al._ [12] used graph neural networks (GNN) to reproduce the potential energies and forces of two small peptides. This method is, however, not yet practical because the full conformational ensemble needs to be generated first via explicit-solvent simulations before the GNN model can be trained to reproduce it. The GB models therefore remain the most commonly used implicit-solvent approach to date.
The GB equation was first introduced by Still _et al._ [9] and defines the solvation free energy \(\Delta G\) in terms of the atomic partial charges \(q_{i}\) and \(q_{j}\), the distance \(r_{ij}\), and the effective Born radii \(R_{i}\) and \(R_{j}\) of two atoms \(i\) and \(j\), with \(\epsilon_{in}\) and \(\epsilon_{out}\) denoting the dielectric constants of the solute interior and the solvent, respectively.
\[\Delta G=-\frac{1}{2}\left(\frac{1}{\epsilon_{in}}-\frac{1}{\epsilon_{out}}\right)\sum_{i,j}\frac{q_{i}q_{j}}{\sqrt{r_{ij}^{2}+R_{i}R_{j}\exp\left(\frac{-r_{ij}^{2}}{4R_{i}R_{j}}\right)}} \tag{1}\]
The Born radii are further calculated using the Coulomb integral \(I_{i}\) and the intrinsic radius \(\rho_{i}\),
\[R_{i}=(\rho_{i}^{-1}-I_{i})^{-1}. \tag{2}\]
The Coulomb integral can be derived from the Coulomb field approximation (CFA) and the intrinsic radius,
\[I_{i}=\frac{1}{4\pi}\int_{\Omega,r>\rho_{i}}\frac{1}{r^{4}}d^{3}r. \tag{3}\]
Typically, the integral is solved analytically using a pairwise de-screening approximation [13]. While the functional form of the different GB models is the same, the manner in which this integral is calculated distinguishes them: GB-HCT [13], GB-OBC [14], or GB-Neck [15]. In addition, ML has also been proposed to directly approximate reference Born radii obtained from PB calculations [16, 17].
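Given precomputed Coulomb integrals, Eqs. 1 and 2 translate directly into code. The NumPy sketch below is illustrative only (the water dielectric constant of 78.5 is our assumption; units follow those of the charges and distances):

```python
import numpy as np

def gb_solvation_energy(q, r, rho, I, eps_in=1.0, eps_out=78.5):
    """Generalised Born solvation free energy, Eqs. (1)-(2).

    q:   (N,) partial charges        r: (N, N) pairwise distances
    rho: (N,) intrinsic radii        I: (N,) Coulomb integrals
    The double sum includes the i = j self-energy terms
    (r_ii = 0 reduces the denominator to R_i)."""
    R = 1.0 / (1.0 / rho - I)                     # effective Born radii, Eq. (2)
    RiRj = np.outer(R, R)
    f_gb = np.sqrt(r**2 + RiRj * np.exp(-(r**2) / (4.0 * RiRj)))
    return -0.5 * (1.0 / eps_in - 1.0 / eps_out) * np.sum(np.outer(q, q) / f_gb)
```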
The calculation of the effective Born radii in standard GB models can be thought of as a one-pass message-passing algorithm, where information is sent from each node to all other nodes within a cutoff. GNNs are therefore a natural choice for developing GB-based models further. When multiple passes are used, GNNs can aggregate information, intrinsically encode the geometric environment, and thereby go beyond the pairwise de-screening approximation. In general, it is mainly the local environment, dominated by short-range interactions, that is expected to benefit from this description, as GB models describe long-range interactions well. Therefore, the robustness and quality of a GNN-based implicit solvent could be enhanced by using a \(\Delta\)-learning [18] approach rather than predicting the solvation forces directly with the GNN. This means that rather than replacing the GB model entirely, an ML correction is added to a base model (i.e., GB-Neck). In related fields such as ML for QM/MM simulations of condensed-phase systems, a similar approach has been demonstrated to lead to stable simulations [19]. In addition, the \(\Delta\)-learning scheme could allow smaller cutoffs for the GNN, as the long-range interactions are already well described by the GB model, reducing the computational cost.
Here, we use this idea to develop an ML-based implicit-solvent approach that can be used to simulate molecules without the need for a complete conformational ensemble from explicit-solvent simulations for training. The training set for our ML approach consists of subsets of conformers extracted from explicit-solvent simulations (here with the TIP5P water model [20]), for which the mean solvation forces are calculated by keeping the solute position constrained and averaging over a multitude of solvent configurations. Note that this procedure to generate the solvation forces differs from the one proposed by Chen _et al._ [12], as here the averaging is performed over multiple solvent configurations. The resulting mean solvation forces are then used to train the GNN.
To assess the performance of our approach, test systems were chosen that (a) are interesting for the application of implicit solvents, (b) are challenging for current implicit solvents, (c) have fast kinetics, and (d) can be analysed directly without dimensionality reduction or Markov state modelling. These criteria are met by a class of small peptides that are able to form a salt bridge. The importance of salt bridges for the stability of proteins [21] makes them an interesting target for implicit-solvent models, while previous studies by Nguyen _et al._ [22] have shown that GB-based implicit solvents could not accurately describe them. Although the parameters of the GB-Neck2 model [22] have been manually adjusted such that the height of the energy barrier matched a TIP3P solvent [23] simulation,
this implicit solvent failed to reproduce other key characteristics of the system, such as the position of the free-energy minimum of the salt bridge or effects attributed to a single explicit water molecule between the partners of the opened salt bridge.
Here, we investigate similar small peptides featuring a salt bridge between lysine (K, LYS) and glutamic acid (E, GLU) connected via two alanine (A, ALA) residues and one variable residue, forming the peptides KAXAE, with X being valine (V, VAL), leucine (L, LEU), isoleucine (I, ILE), phenylalanine (F, PHE), serine (S, SER), threonine (T, THR), tyrosine (Y, TYR), or proline (P, PRO). In addition, we also test the approach on the same peptide KAAE as in Ref. [22], together with the longer variants KAAAE and KAAAAE. The performance is compared to the state-of-the-art implicit solvent GB-Neck2 as well as to explicit-solvent simulations with TIP3P and TIP5P. To explore the generalizability and transferability of the GNN, the model is challenged to simulate peptides outside of the training set.
## 2 Methods
### Molecular Dynamics Simulations
Starting coordinates and topologies were generated using the AmberTools21 [24] software package. The amino acids and capping groups were parametrised using the AMBER force field ff99SB-ILDN [25]. All simulations were performed using OpenMM (version 7.7.0) [26]. For the explicit-solvent simulations, the peptides were solvated in a box of water (TIP3P or TIP5P) with a padding of 1 nm. For all systems, an energy minimisation using the L-BFGS algorithm [27] was performed with a tolerance of 10 kJ mol\({}^{-1}\) nm\({}^{-1}\). All bonds involving hydrogens were constrained using the SETTLE [28] and CCMA [29] algorithms for water and peptide bonds, respectively. For all simulations, Langevin dynamics were used with the LFMiddle discretization scheme [30]. For the explicit-solvent simulations, a cutoff of 1 nm for the long-range electrostatic interactions was used together with the particle mesh Ewald (PME) correction [31]. The simulation temperature was set to 300 K and a time step of 2 fs was applied. The simulations with explicit solvent or GB-Neck2 were carried out for 1000 ns. Simulations with the GNN describing the solvent were performed using the OpenMM-Torch package (version 0.6, url: [https://github.com/openmm/openmm-torch](https://github.com/openmm/openmm-torch)) by introducing the GNN as an additional force on top of the vacuum force field. These simulations were carried out for 30 ns. Note that explicit-solvent simulations with TIP3P were only performed for the peptides KAPAE, KASAE, KATAE, and KAYAE.
### Generation of the Training Set
From the explicit-solvent simulations with TIP5P, a conformer was extracted every 200 ps for the peptides with X being apolar (i.e., KAVAE, KALAE, KAIAE, KAFAE, and KAPAE) and every 100 ps for the peptides with X being polar (i.e., KASAE, KATAE, and KAYAE). To calculate the mean solvation forces for each conformer, the solute atoms were positionally constrained and an explicit-solvent simulation was performed for 200 ps. The solvent-solute forces were evaluated every 200 fs by calculating the forces for each solute atom within the explicit simulation and subtracting the forces of the same solute atom in vacuum.
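Sketched with the OpenMM API, the force averaging could look as follows (illustrative only; `explicit_sim` and `vacuum_sim` are assumed to be pre-built `openmm.app.Simulation` objects for the solvated system, with the solute positionally constrained, and for the solute in vacuum, sharing the same solute coordinates):

```python
import numpy as np
import openmm.unit as unit

KJ_NM = unit.kilojoule_per_mole / unit.nanometer

def mean_solvation_forces(explicit_sim, vacuum_sim, solute_idx,
                          n_samples=1000, steps_between=100):
    # Vacuum forces are constant because the solute is held fixed,
    # so they are evaluated once.
    f_vac = (vacuum_sim.context.getState(getForces=True)
             .getForces(asNumpy=True).value_in_unit(KJ_NM))
    acc = np.zeros_like(f_vac)
    # 1000 samples every 100 steps (0.2 ps at a 2 fs time step)
    # reproduce the 200 ps / 200 fs protocol described above.
    for _ in range(n_samples):
        explicit_sim.step(steps_between)
        f = (explicit_sim.context.getState(getForces=True)
             .getForces(asNumpy=True).value_in_unit(KJ_NM))
        acc += f[solute_idx] - f_vac
    return acc / n_samples  # mean solvation force per solute atom
```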
### Graph Neural Networks
Two GNN architectures were explored, sharing a three-pass neural network as the core architecture. Both GNNs were applied in the subsequent simulations via a \(\Delta\)-learning scheme [18]. In the first case (abbreviated as GNN+), the energies of the base model (i.e., GB-Neck2) and the GNN were summed following a traditional \(\Delta\)-learning approach (the forces were then obtained via standard differentiation). In the second case (abbreviated as GNN*), the functional form of the base model (GB-Neck2) was adjusted such that the Born radii are scaled by the GNN within a predefined range according to Eq. 4,
\[R_{i}^{\prime}=R_{i}\cdot\left(p+S(\phi(R,q,r_{a},r,R_{\text{cutoff}}))\cdot(1-p)\cdot 2\right) \tag{4}\]
where \(R_{i}\) is the Born radius calculated based on the Neck integral, \(p\) the scaling parameter to adjust the strength of the scaling of the Born radii (i.e., \(1=\text{no scaling applied}\); \(0=\text{maximum scaling applied}\)),
\(S\) the sigmoid function, and \(\phi\) the function approximated by the GNN based on the Born radii \(R\), the charges \(q\), the atomic radii \(r_{a}\) of all atoms, the distances \(r\) between all atoms, and a cutoff radius \(R_{\text{cutoff}}\).
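Eq. 4 reduces to a one-line function (a PyTorch sketch; `phi_out` denotes the raw per-atom GNN output before the sigmoid, and the argument names are ours):

```python
import torch

def scale_born_radii(R, phi_out, p=0.1):
    """Eq. (4): confine the scaling factor to (p, 2 - p), so that p = 1
    applies no scaling and smaller p allows stronger scaling; the factor
    equals 1 when the sigmoid output is 0.5."""
    return R * (p + torch.sigmoid(phi_out) * (1.0 - p) * 2.0)
```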
The GNN* and GNN+ networks share key architectural elements, as both employ three passes through interaction networks that only differ in the shape of the input and output, followed by SiLU activations [32]. One interaction-network pass is characterised by a concatenation of the node features of the sending and receiving nodes together with the distance encoded by a radial Bessel [33] function of length 20, followed by a multi-layer perceptron (MLP) with two layers and SiLU activation functions. As the aggregation method, the sum of all messages is taken, and the node-wise computation again employs a two-layer MLP with SiLU activation functions. All hidden layers have a size of 128. As an embedding for the atoms, the GNN+ model uses the partial charges and atomic radii from the force field and GB-Neck2, respectively, while the GNN* additionally incorporates the Born radius calculated from the Neck integral. A schematic representation of the GNN architectures is shown in Figure 1.
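A single interaction-network pass along these lines could look as follows (a plain-PyTorch sketch; the exact Bessel normalisation and layer shapes are our assumptions):

```python
import torch
import torch.nn as nn

class InteractionPass(nn.Module):
    """One message pass: messages are built from concatenated
    sender/receiver features and a 20-term radial Bessel encoding of the
    distance, summed at the receiver, and refined by a node-wise
    two-layer MLP with SiLU activations."""

    def __init__(self, in_dim, out_dim, hidden=128, n_rbf=20, cutoff=0.6):
        super().__init__()
        self.register_buffer(
            "freqs", torch.arange(1, n_rbf + 1).float() * torch.pi)
        self.cutoff = cutoff  # nm, matching the chosen model below
        self.msg_mlp = nn.Sequential(
            nn.Linear(2 * in_dim + n_rbf, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU())
        self.node_mlp = nn.Sequential(
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, out_dim), nn.SiLU())

    def forward(self, h, edge_index, dist):
        # h: (N, in_dim) node features; edge_index: (2, E); dist: (E,) in nm.
        src, dst = edge_index
        rbf = torch.sin(self.freqs * dist[:, None] / self.cutoff) / dist[:, None]
        m = self.msg_mlp(torch.cat([h[src], h[dst], rbf], dim=-1))
        agg = torch.zeros(h.size(0), m.size(-1), device=h.device)
        agg.index_add_(0, dst, m)  # sum aggregation at the receiving node
        return self.node_mlp(agg)
```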
The GNNs were trained on a randomly selected 80% of the conformations, using the Adam optimiser [34] for 100 epochs with a batch size of 32 and an exponentially decaying learning rate starting at 0.001 and decaying by two orders of magnitude to 0.00001. The mean squared error (MSE) was chosen as the loss function, and the samples were randomly shuffled after each epoch.
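In PyTorch terms, a decay by two orders of magnitude over 100 epochs corresponds to a per-epoch factor of \((10^{-5}/10^{-3})^{1/100}\approx 0.955\). A sketch of the training loop (here `gnn`, `loader`, and the `batch.forces` attribute are our placeholders):

```python
import torch
import torch.nn.functional as F

optimizer = torch.optim.Adam(gnn.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.ExponentialLR(
    optimizer, gamma=(1e-5 / 1e-3) ** (1.0 / 100.0))

for epoch in range(100):
    for batch in loader:                       # batch size 32, reshuffled each epoch
        optimizer.zero_grad()
        pred = gnn(batch)                      # predicted mean solvation forces
        loss = F.mse_loss(pred, batch.forces)  # MSE loss on the reference forces
        loss.backward()
        optimizer.step()
    scheduler.step()                           # exponential learning-rate decay
```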
### Training-Test Splits
To assess the generalisability and transferability of the GNN, we composed different training and test splits (Table 1). Splits 1-4 include all KAXAE peptides for training but one. In split 5, all KAXAE peptides with X = polar are used for training, while the ones with X = apolar are simulated prospectively. Split 6 is the other way around (i.e., training on X = apolar, testing on X = polar). Finally, in split 7, all peptides with five residues (except X = A) were used for training, while prospective simulations were performed for KAAE, KAAAE, and KAAAAE.

Figure 1: Schematic representation of the GNN architectures. Node-wise computations are shown in blue, while message operations are shown in orange. (**A**): Schematic representation of the computation through the GNN. (**B**): Message computations start by concatenating the atom features of the sending and receiving node with the distance encoded by a Bessel function (RBF), followed by a two-layer MLP with SiLU activation functions. (**C**): Node-wise computations of aggregation by summation followed by a two-layer MLP with SiLU activation functions.
### Data Analysis
We used MDTraj (version 1.9.7) [35] for the analysis of the trajectories. To estimate the statistical uncertainty of the different approaches, the explicit-solvent simulations were divided into five 200 ns blocks and analysed separately. In addition, the training and simulation of the GNN were repeated three times with different random seeds. All free-energy profiles were calculated with a Jacobian correction factor of \(4\pi r^{2}\) [36]. A key feature that the ML-based implicit solvent should reproduce is the correct representation of the salt bridge between LYS N\({}_{\zeta}\) and GLU C\({}_{\delta}\). We have identified three main characteristics concerning the salt bridge for all test systems (Figure 2): (i) a double well in the free-energy minimum at 0.33 nm and 0.37 nm with different weights, corresponding to two different closed salt-bridge geometries, (ii) the height of the energy barrier for the opening of the salt bridge at 0.43 nm, and (iii) a dip in the free-energy profile at 0.55 nm, corresponding to conformations with one water molecule between LYS N\({}_{\zeta}\) and GLU C\({}_{\delta}\) in a hydrogen-bond network. In addition to the salt bridge, the backbone dihedral angles \(\phi\) and \(\psi\) of the central amino acid are monitored. For polar central residues, the distances between the oxygen of the hydroxy group of the polar side chain and LYS N\({}_{\zeta}\) and GLU C\({}_{\delta}\) are monitored as well (Figure 2).
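The free-energy profiles follow from the sampled distance distributions as \(F(r)=-k_{B}T\,\ln\left[P(r)/(4\pi r^{2})\right]\). A NumPy sketch (the bin count and the value \(k_{B}T=2.494\) kJ mol\({}^{-1}\) at 300 K are our choices):

```python
import numpy as np

def free_energy_profile(dists, bins=100, kT=2.494):
    """F(r) = -kT * ln[ P(r) / (4*pi*r^2) ] from sampled distances.
    kT = 2.494 kJ/mol corresponds to 300 K."""
    p, edges = np.histogram(dists, bins=bins, density=True)
    r = 0.5 * (edges[:-1] + edges[1:])          # bin centres
    with np.errstate(divide="ignore"):          # empty bins give +inf
        F = -kT * np.log(p / (4.0 * np.pi * r**2))
    return r, F - np.nanmin(F)                  # shift the minimum to zero
```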
| Split | V | L | I | F | P | S | T | Y | '' | A | AA |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | ✓ | ✓ | ✓ | ✓ | ✓ | ✗ | ✓ | ✓ |  |  |  |
| 2 | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✗ | ✓ |  |  |  |
| 3 | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✗ |  |  |  |
| 4 | ✓ | ✓ | ✓ | ✓ | ✗ | ✓ | ✓ | ✓ |  |  |  |
| 5 | ✗ | ✗ | ✗ | ✗ | ✗ | ✓ | ✓ | ✓ |  |  |  |
| 6 | ✓ | ✓ | ✓ | ✓ | ✓ | ✗ | ✗ | ✗ |  |  |  |
| 7 | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✗ | ✗ | ✗ |

Table 1: Training and test splits of peptides KAXAE with X being V, L, I, F, P, S, T, Y, '', A, AA (note that KAXAE with X = '' corresponds to KAAE, X = A to KAAAE, and X = AA to KAAAAE). Peptides used for training are marked with ✓, while peptides simulated prospectively (testing) are marked with ✗.
Figure 2: Structural characteristics of KASAE as example (light blue) with highlighted salt bridge, the distance SER \(O_{\gamma}\) – LYS N\({}_{\zeta}\) and SER \(O_{\gamma}\) – GLU C\({}_{\delta}\), and backbone torsional angles \(\phi\) and \(\psi\) of the central residue.
## 3 Results and Discussion
### Comparison of GNN Architectures
We explored two GNN architectures with \(\Delta\)-learning schemes for the ML-based implicit solvent: GNN+ and GNN*. To compare the architectures and evaluate their hyperparameters, we calculated the solvation forces of simulation snapshots in a retrospective manner. We considered three peptides (KAVAE, KALAE, and KASAE) for training (using a random 20% subset as validation set during the GNN training process), and two peptides (KATAE and KAFAE) as external test set. The peptides KAIAE, KAPAE, and KAYAE were not included in this benchmark in order not to bias the choice of model architecture for the subsequent simulation studies. The investigated hyperparameters were the cutoff radius \(R_{\text{cutoff}}\) within which the fully connected graph is constructed, as well as the scaling parameter \(p\) of the GNN*, which regulates how much the Born radii of the GB-Neck2 model are allowed to change. The results of the benchmarking are summarised in Figure 3.
Interestingly, the GNN* model was found to perform significantly better than the GNN+ model in predicting the forces of the external test set over the entire range of tested cutoff radii, reaching an RMSE of \(16.2(3)\,\mathrm{kJ\,mol^{-1}\,nm^{-1}}\). In addition, smaller deviations between the different random seeds were observed with the GNN* (indicated by lower standard deviations), which is a desirable feature. The effect of the scaling parameter on the GNN* models is more subtle. Values of 0.1, 0.2, and 0.4 gave essentially the same results, while the error increases slightly for a scaling parameter of 0.8. The impact of the cutoff radius \(R_{\text{cutoff}}\) is also small, although \(R_{\text{cutoff}}\) = 0.4 nm is likely too short. As longer radii (i.e., 0.7 nm) did not improve the performance significantly but increase the computational cost, we decided to focus in the following on the GNN* architecture with cutoff radii of 0.5 nm and 0.6 nm together with scaling parameters of 0.1 and 0.2.
### Prospective Molecular Dynamics Simulations
To investigate the ability of the ML-based implicit solvent to simulate novel peptides, we composed different training and test splits (Table 1). First, we assessed the simulation performance of the GNN* models with radii of 0.5 nm or 0.6 nm and scaling parameters of 0.1 or 0.2 on the training/test splits 1, 2, and 3 (i.e., training on all peptides except KASAE, KATAE, or KAYAE, respectively, and prospective simulation of the left-out peptide). The GNN* model with a radius of 0.6 nm and a scaling parameter of 0.1 yielded the most stable simulation results, as indicated by the smallest deviations between the different random seeds for the closed salt-bridge conformations (see Figures S1-S4 in the Supporting Information). The observation that models with similar performance on a retrospective test set show markedly different behaviour in prospective simulations is in line with findings by Fu _et al._ [37] and Stocker _et al._ [38]. Based on these results, the following analyses were performed using only the GNN* with a radius of 0.6 nm and a scaling parameter of 0.1.

Figure 3: Comparison of the root-mean-square error (RMSE) of the forces predicted by the GNN* and GNN+ models with \(\Delta\)-learning for the external test set. The GNN* models with scaling parameter \(p\) of 0.1, 0.2, 0.4, or 0.8 are shown as colored dots (orange, purple, light blue, and navy, respectively), while the GNN+ is shown as black squares. Statistical uncertainty is denoted by error bars.
#### 3.2.1 KASAE, KATAE, or KAYAE as Test Peptide
The training/test splits 1, 2, and 3 (Table 1) are particularly interesting as they challenge the model the most. Among the three, the simulations of KASAE (split 1) and KATAE (split 2) are expected to be easier for the model, as a similar residue is present in the training set (THR versus SER). Free-energy profiles of the salt bridge, the O\({}_{\gamma}\) - LYS N\({}_{\zeta}\) distance, and the O\({}_{\gamma}\) - GLU C\({}_{\delta}\) distance are shown in Figure 4.
The GNN* implicit solvent was able to correctly reproduce all desired properties of the salt bridge in TIP5P explicit water, featuring the correct double well in the free-energy profile at short distances, the correct energy barrier for the opening of the salt bridge, and a local minimum at 0.55 nm (Figure 4A,D). The GB-Neck2 model, on the other hand, failed to describe any of these features. Note that it has been shown by Nguyen _et al._ [22] that the GB-Neck2 model can be tweaked to reproduce smaller barrier heights for salt-bridge opening, but the other features have not been reported for that model. Larger deviations can be observed for the SER/THR O\({}_{\gamma}\) - LYS N\({}_{\zeta}\) distance (Figure 4B,E). Again, the GB-Neck2 model is not able to capture the key characteristics of the TIP5P free-energy surface, while the GNN* reproduces the minima correctly and shows good agreement with the local minimum of the direct hydrogen bond between O\({}_{\gamma}\) and N\({}_{\zeta}\) at 0.29 nm. Interestingly, the TIP3P model shows quite different characteristics compared to TIP5P. The minimum of the direct hydrogen bond is lower than the second minimum at 0.43 nm. The latter minimum contains conformations where an explicit water molecule forms a hydrogen-bond network between the salt-bridge partners. Analysing similar conformations in the TIP5P simulation, we could identify examples that benefit from the specific geometry of the TIP5P water representation (Figure 5). We hypothesise that the 'triangle' between the SER O\({}_{\gamma}\), the LYS N\({}_{\zeta}\), and a carbonyl of the peptide backbone provides a trident 'binding' site for the explicit TIP5P water molecules, thus stabilising these conformations, which is not possible with the simplified TIP3P model.
A more challenging case is KAYAE (split 3), as none of the amino acids in the training set is very similar to TYR. Thus, the model needs to learn about the solvation of the TYR side chain from different amino acids, i.e., the learning task becomes to generalise from PHE and SER/THR to a combination of the two. If the model achieves a good accuracy for split 3, it demonstrates its transferability to peptides with increasing structural differences to the training set. The free-energy profile of the KAYAE salt bridge and the distances TYR \(O_{\eta}\) - LYS \(\mathrm{N_{\zeta}}\) and TYR \(O_{\eta}\) - GLU \(\mathrm{C_{\delta}}\) are shown in Figure 6.

Figure 4: Comparison of the GNN* implicit solvent model (orange) with explicit TIP5P (navy blue) and TIP3P (light blue) as well as the GB-Neck2 implicit solvent (purple). Results for KASAE (split 1) are shown in the top row, results for KATAE (split 2) in the bottom row. (**A, D**): Free-energy profile of the salt bridge. (**B, E**): Distance \(O_{\gamma}\) - LYS \(N_{\zeta}\). (**C, F**): Distance \(O_{\gamma}\) - GLU \(C_{\delta}\). The shaded area indicates the statistical uncertainty of the corresponding solvent model (not shown for GB-Neck2 for clarity).

Figure 5: Example conformation of KASAE (light blue) with a SER O\({}_{\gamma}\) - LYS N\({}_{\zeta}\) distance of 0.43 nm. The TIP5P water molecule interacting with the SER O\({}_{\gamma}\), the LYS N\({}_{\zeta}\), and a carbonyl of the backbone is shown with its off-site charges displayed in dark red.
Again, the GNN* model reproduces the salt-bridge free-energy profile of TIP5P very well (Figure 6A). Interestingly, the TIP3P solvent shows in this case a different behaviour than TIP5P at short distances. The double well of the salt bridge is significantly different, and the barrier height (at 0.43 nm) is lower for TIP3P. For the distance TYR \(O_{\eta}\) - LYS \(\mathrm{N_{\zeta}}\), TIP5P and GNN* show the same behaviour, while the energy barrier with TIP3P is approximately 3 kJ mol\({}^{-1}\) lower (Figure 6B). For the distance TYR \(O_{\eta}\) - GLU \(\mathrm{C_{\delta}}\), the minimum with TIP3P and GNN* is at the direct H-bond distance of 0.37 nm, while the minimum with TIP5P is at a distance of 0.51 nm, where one explicit water molecule participates in a H-bond network between the salt-bridge partners (Figure 6C).
While the free-energy profiles of the salt bridge were similar for TIP5P and TIP3P for KASAE and KATAE, they differed for KAYAE. Therefore, we investigated this observation further. The difference in barrier height for opening the salt bridge (Figure 6A) could be due to different preferences for the distance TYR \(O_{\eta}\) - LYS \(\mathrm{N_{\zeta}}\). In Figure 7, the free-energy profile of the salt bridge for KASAE, KATAE, and KAYAE was calculated once for conformations with the distance between \(O\) of the hydroxy group and LYS \(\mathrm{N_{\zeta}}<0.6\) nm and once for those with this distance \(>0.6\) nm. Intriguingly, the energy barrier to open the salt bridge of KAYAE is higher when TYR \(O_{\eta}\) interacts with the LYS \(\mathrm{N_{\zeta}}\) (e.g., either via a direct hydrogen bond or mediated by a water molecule) in both TIP5P and TIP3P water. For KASAE and KATAE, this is not the case. From these findings, it follows that because TIP5P and GNN* favour conformations of KAYAE with a distance TYR \(O_{\eta}\) - LYS \(\mathrm{N_{\zeta}}<0.6\) nm more than TIP3P (Figure 6B), the energy barrier of the salt bridge is higher than in TIP3P.
Figure 6: Comparison of the GNN* implicit solvent model (orange) with explicit TIP5P (navy blue) and TIP3P (light blue) as well as the GB-Neck2 implicit solvent (purple). Results for KAYAE (split 3) are shown. (**A**): Free-energy profile of the salt bridge. (**B**): Distance TYR \(O_{\eta}\) - LYS \(N_{\zeta}\). (**C**): Distance TYR \(O_{\eta}\) - GLU \(C_{\delta}\). The shaded area indicates the statistical uncertainty of the corresponding solvent model (not shown for GB-Neck2 for clarity).
#### 3.2.2 Proline as Special Case: KAPAE
An interesting case is PRO as the central residue (split 4 in Table 1). Proline has a different Ramachandran plot than the other amino acids and is therefore potentially challenging for the GNN* approach to describe correctly. As can be seen in Figure 8A, the free-energy profile of the salt bridge is reproduced well, although deviations from TIP5P occur at long distances. Interestingly, at these long distances TIP3P and TIP5P also disagree with each other. While TIP3P yields lower free energies for the open state compared to TIP5P, the GNN* predicts even higher free energies. As PRO is in the middle of the peptide, the sampling of its backbone dihedral angles will influence the long salt-bridge distances. The Ramachandran plots for PRO are shown in Figure 8B-D for TIP3P, GNN*, and TIP5P. The main difference is in the population of the polyproline II state, which is overestimated with TIP3P and underestimated with GNN* compared to TIP5P (see also Figure S5 in the Supporting Information). It is therefore important to include PRO in the training set to ensure proper sampling of its backbone conformations.
Figure 7: Free-energy profile of the salt-bridge distance for KASAE (top), KATAE (middle), and KAYAE (bottom) in the explicit-solvent simulations with TIP5P (left (**A**, **C**, **E**) navy blue) and TIP3P (right (**B**, **D**, **F**) light blue). The dotted line corresponds to conformations with the distance between \(O\) of the hydroxy group and LYS \(N_{\zeta}<0.60\,\mathrm{nm}\) and the solid line to conformations with the distance between \(O\) of the hydroxy group and LYS \(N_{\zeta}>0.60\,\mathrm{nm}\).
#### 3.2.3 Apolar Versus Polar Residues
The results above demonstrated that the GNN* model is capable of reproducing key characteristics of explicit-water simulations of peptides different from those in the training set. Next, we investigated how different training-set compositions influence the simulation performance of the model. We therefore created two training sets (splits 5 and 6 in Table 1) by dividing all peptides into two disjoint subsets: (1) the central residue has a polar side chain (i.e., KASAE, KATAE, and KAYAE), and (2) the central residue has an apolar side chain (i.e., KAVAE, KAIAE, KALAE, KAFAE, and KAPAE). In split 5, training was carried out with the first group and prospective simulations were performed for the second group. As can be seen in Figure 9 for KAIAE, the generalization from the peptides with a polar central residue to those with an apolar one is good. For all tested peptides, the GNN* model is able to reproduce the free-energy profile of the salt bridge, the double-well minima, the height of the energy barrier for opening, and the first dip of the reference TIP5P simulation. The corresponding results for KAVAE, KALAE, KAFAE, and KAPAE are shown in Figures S6-S9 in the Supporting Information.
To probe the conformational sampling of the central residue in more detail, we compared its Ramachandran plot for the different solvent descriptions. With the exception of the L\({}_{\alpha}\) state, the Ramachandran plots with GNN* and TIP5P agree well. Note that the transition into the L\({}_{\alpha}\) state is a rare event. In the TIP5P reference simulations of KAIAE, this state is sampled in only one of the five 200 ns blocks (see Figures S10-S14 in the Supporting Information). The differences in the population of the L\({}_{\alpha}\) state between GNN* and TIP5P may therefore stem from finite sampling effects.

Figure 8: Comparison of the GNN* implicit solvent model (orange) with explicit TIP5P (navy blue) and TIP3P (light blue) as well as the GB-Neck2 implicit solvent (purple). Results for KAPAE (split 4) are shown. (**A**): Free-energy profile of the salt bridge. The shaded area indicates the statistical uncertainty of the corresponding solvent model (not shown for the GB-Neck2 implicit solvent for clarity). (**B**): Ramachandran plot of PRO with GB-Neck2. (**C**): Ramachandran plot of PRO with GNN*. (**D**): Ramachandran plot of PRO with TIP5P. The polyproline II state is highlighted in the Ramachandran plots by a dashed black line.

The inverse, i.e., training on peptides with an apolar central residue and testing on peptides with a polar central residue (split 6), is more challenging. The GNN* model was still superior to the GB-Neck2 implicit solvent in reproducing key characteristics of the TIP5P reference simulations; however, deviations were observed for the interactions of the polar central residue with the salt-bridge partners (i.e., the distance between \(O\) of the hydroxy group and LYS N\({}_{\zeta}\)/GLU C\({}_{\delta}\)) (see Figures S15-S17 in the Supporting Information). These results indicate that the extent to which the GNN* model can generalise from the training set is limited to similar functional groups. For instance, if no hydroxy group is present in the training set, the ability of the model to represent its interactions is limited. On the other hand, the TYR case demonstrates that the model is able to generalise from a hydroxy group in SER/THR to one in a different local environment. Taken together, these findings suggest that it is important for a generally applicable GNN* implicit-solvent approach to include all functional groups in the training set, but that the model does not have to have seen the complete molecule for good performance.
The inverse, i.e., training on peptides with an apolar central residue and testing on peptides with a polar central residue (split 6), is more challenging. The GNN* model was still superior to the GB-Neck2 implicit solvent in reproducing key characteristics of the TIP5P reference simulations, however, deviations were observed for the interactions of the polar central residue with the salt-bridge partners (i.e., distance between \(O\) of the hydroxy group and LYS N\({}_{\zeta}\) /GLU C\({}_{\delta}\) ) (see Figures S15-S17 in the Supporting Information). These results indicate that the extent to which the GNN* model can generalise from the training set is limited to similar functional groups. For instance, if no hydroxy group is present in the training set, the ability of the model to represent its interactions is limited. On the other hand, the TYR case demonstrates that the model is able to generalise from a hydroxy group in SER/THR to one in a different local environment. Taken together, these findings suggest that it is important for a generally applicable GNN* implicit-solvent approach to include all functional groups in the training set, but that the model does not have to have seen the complete molecule for good performance.
#### 3.2.4 Varying the Length of the Peptide
Finally, we investigated whether the GNN* model is able to generalise to larger or smaller peptides by removing the middle amino acid or inserting an extra ALA residue, i.e., peptides KAAE and KAAAAE (split 7 in Table 1). In addition, we included KAAAE (same size) in the test set for comparison. The resulting free-energy profiles and Ramachandran plots for KAAE, KAAAE, and KAAAAE are shown in Figure 10. For all three peptide lengths, the GNN* is able to reproduce the free-energy profile of the salt bridge of the TIP5P reference simulation for short distances (i.e., \(<1\,\mathrm{nm}\)), including the double well, the height of the energy barrier, and the first dip. For KAAE and KAAAE, the long-range behaviour also matches the TIP5P simulation to a high degree. Only for KAAAAE is a deviation between GNN* and TIP5P observed at longer distances (i.e., \(>1\,\mathrm{nm}\)), which could highlight a potential weak point of the GNN. While generalisation to shorter peptides works well, longer peptides require either inclusion in the training set or the introduction of a long-range correction in order to describe the elongated conformations accurately. As discussed above, differences in the population of the \(\mathrm{L}_{\alpha}\) state are likely finite-sampling effects.

Figure 9: Comparison of the GNN* implicit solvent model (orange) with explicit TIP5P (navy blue) and TIP3P (light blue) as well as the GB-Neck2 implicit solvent (purple). Results for KAIAE (split 5) are shown. (**A**): Free-energy profile of the salt bridge. The shaded area indicates the statistical uncertainty of the corresponding solvent model (not shown for the GB-Neck2 implicit solvent for clarity). (**B**): Ramachandran plot of ILE with GB-Neck2. (**C**): Ramachandran plot of ILE with GNN*. (**D**): Ramachandran plot of ILE with TIP5P. The L\({}_{\alpha}\) state is highlighted in the Ramachandran plots by a dashed black line.
#### 3.2.5 Timings
One major advantage of standard implicit-solvent models is that they are much faster to compute than explicit solvent molecules. When employing GNNs for this task, the computational costs are currently still too high. Using a desktop PC with an Intel(R) Xeon(R) W-1270P CPU with a clock rate of 3.80 GHz and an NVIDIA(R) Quadro(R) P2200 GPU, approximately 46 ns d\({}^{-1}\) of the peptide KASAE could be obtained with our proof-of-concept implementation of the GNN implicit solvent, whereas approximately 200 ns d\({}^{-1}\) were reached with explicit TIP5P simulations. Similar observations were made in Ref. [39] for classical force-field terms. The slower speed of GNNs represents a major challenge for their application as a replacement for explicit-solvent simulations. However, this is primarily a technical issue and not a fundamental
Figure 10: Comparison of the GNN* implicit solvent model (orange) with explicit TIP5P (navy blue) as well as the GB-Neck2 implicit solvent (purple). Results for KAAE (top), KAAAE (middle), and KAAAAE (bottom) are shown (split 7). (**A**, **D**, **G**): Free-energy profile of the salt bridge. The shaded area indicates the statistical uncertainty of the corresponding solvent model (not shown for the GB-Neck2 implicit solvent for clarity). (**B**, **E**, **H**): Combined Ramachandran plot of all ALA residues with TIP5P. (**C**, **F**, **I**): Combined Ramachandran plot of all ALA residues with GNN*. The \(\mathrm{L}_{\alpha}\) state is highlighted by a dashed black line.
limitation. While the TIP5P explicit simulation is highly optimised, our GNN implementation is not yet. Currently, the GNN is evaluated on the GPU while the classical forces are evaluated on the CPU, leading to high communication costs and low utilisation of the GPU. Recently, two approaches have been reported to dramatically increase the speed of NN potentials. The first is the optimisation of the operations of the GNN to better suit the application to MD simulations [40]. The second involves batching multiple simulations that run on one GPU in parallel [12]. Both approaches have been shown to bring the speed of NN potentials on par with their classical counterparts. In this work, we focused on providing a conceptual proof that developing an ML-based transferable implicit solvent is possible. Improving the computational performance of the implementation is part of future work to develop a practically usable GNN implicit solvent.
## 4 Conclusion
In this work, we have developed a GNN-based implicit solvent that can be trained on a set of peptides and used to prospectively simulate different ones. The GNN* model is based on the GB-Neck2 implicit solvent with a \(\Delta\)-learning scheme. To validate our approach, we have chosen a traditionally hard problem for implicit-solvent models where the local effects of explicit water molecules play a key role: the free-energy profile of a salt bridge. Here, the salt bridge is formed by peptides with the composition KAXAE, where X can be varied. We could demonstrate that the GNN* implicit solvent was able to reproduce the key characteristics of the reference explicit-solvent simulations with TIP5P, matching or surpassing the accuracy of explicit-solvent simulations with the simpler TIP3P water model. With different training/test splits, we assessed the ability of the GNN* model to generalise to unseen amino acids and varying peptide length. Overall, we found that the model has a high transferability as long as all functional groups are represented in the training set. For instance, if an aliphatic hydroxy group (SER or THR) is in the training set, it is sufficient for the model to correctly describe the aromatic hydroxy group of TYR. These findings are encouraging as they suggest that the training set for a globally applicable ML-based implicit solvent model may not need to be extremely large but "only" contain all necessary functional groups. The results of this work present an important step towards the development of such a model, capable of replacing explicit-solvent simulations to an increasing degree.
## Data and Software Availability
The code used to generate the results of this study is available at [https://github.com/rinikerlab/GNNImplicitSolvent](https://github.com/rinikerlab/GNNImplicitSolvent). Topologies, starting structures, examples for the performed analysis, and the training data are provided at the ETH Research Collection ([https://doi.org/10.3929/ethz-b-000599309](https://doi.org/10.3929/ethz-b-000599309)). Complete trajectories are available from the corresponding author upon reasonable request.
## Acknowledgments
The authors gratefully acknowledge financial support by ETH Zurich (Research Grant no. ETH-50 21-1). The authors thank Moritz Thurlemann for helpful discussions.
|
2308.05226 | Training neural networks with end-to-end optical backpropagation | Optics is an exciting route for the next generation of computing hardware for
machine learning, promising several orders of magnitude enhancement in both
computational speed and energy efficiency. However, to reach the full capacity
of an optical neural network it is necessary that the computing not only for
the inference, but also for the training be implemented optically. The primary
algorithm for training a neural network is backpropagation, in which the
calculation is performed in the order opposite to the information flow for
inference. While straightforward in a digital computer, optical implementation
of backpropagation has so far remained elusive, particularly because of the
conflicting requirements for the optical element that implements the nonlinear
activation function. In this work, we address this challenge for the first time
with a surprisingly simple and generic scheme. Saturable absorbers are employed
for the role of the activation units, and the required properties are achieved
through a pump-probe process, in which the forward propagating signal acts as
the pump and backward as the probe. Our approach is adaptable to various analog
platforms, materials, and network structures, and it demonstrates the
possibility of constructing neural networks entirely reliant on analog optical
processes for both training and inference tasks. | James Spall, Xianxin Guo, A. I. Lvovsky | 2023-08-09T21:11:26Z | http://arxiv.org/abs/2308.05226v1 | # Training neural networks with end-to-end optical backpropagation
###### Abstract
Optics is an exciting route for the next generation of computing hardware for machine learning, promising several orders of magnitude enhancement in both computational speed and energy efficiency. However, to reach the full capacity of an optical neural network it is necessary that the computing not only for the inference, but also for the training be implemented optically. The primary algorithm for training a neural network is backpropagation, in which the calculation is performed in the order opposite to the information flow for inference. While straightforward in a digital computer, optical implementation of backpropagation has so far remained elusive, particularly because of the conflicting requirements for the optical element that implements the nonlinear activation function. In this work, we address this challenge for the first time with a surprisingly simple and generic scheme. Saturable absorbers are employed for the role of the activation units, and the required properties are achieved through a pump-probe process, in which the forward propagating signal acts as the pump and backward as the probe. Our approach is adaptable to various analog platforms, materials, and network structures, and it demonstrates the possibility of constructing neural networks entirely reliant on analog optical processes for both training and inference tasks.
## I Introduction
Machine learning, one of the most revolutionary scientific breakthroughs in the past decades, has completely transformed the technology landscape, enabling innovative applications in fields ranging from natural language processing to drug discovery. As the demand for increasingly sophisticated machine learning models continues to escalate, there is a pressing need for faster and more energy-efficient computing solutions. In this context, analog computing has emerged as a promising alternative to traditional digital electronics [1; 2; 3; 4; 5; 6; 7]. A particularly exciting platform for analog neural networks (NNs) is optics, in which the interference and diffraction of light during propagation implements the linear part of every computational layer [8; 9].
Most of the current analog computing research and development is aimed at using the NN for inference [8; 10]. Training such NNs, on the other hand, is a challenge. This is because the backpropagation algorithm [11], the workhorse of training in digital NNs, requires the calculation to be performed in the order opposite to the information flow for inference, which is difficult to implement on an analog physical platform. Hence analog models are typically trained offline (_in silico_), on a separate digital simulator, after which the parameters are transferred to the analog hardware. In addition to being slow and inefficient, this approach can lead to errors arising from imperfect simulation and systematic errors ('reality gap'). In optics, for example, these effects may result from dust, aberrations, spurious reflections and inaccurate calibration [12].
To enable learning in analog NNs, different approaches have been proposed and realized [13]. Several groups explored various 'hardware-in-the-loop' schemes, in which, while the backpropagation was done _in silico_, the signal acquired from the analog NN operating in the inference regime was incorporated into the calculation of the feedback for optimizing the NN parameters [14; 15; 16; 17; 18; 19; 20]. This has partially reduced the training error, but has not addressed the low speed and inefficiency of _in silico_ training.
Recently, several optical neural networks (ONNs) were reported that were trained online (_in situ_) using methods alternative to backpropagation. Bandyopadhyay _et al._ trained an ONN based on integrated photonic circuits using simultaneous perturbation stochastic approximation, i.e. randomly perturbing all ONN parameters and using the observed change of the loss function to approximate its gradient [21]. Filipovich _et al._ applied direct feedback alignment, wherein the error calculated at the output of the ONN is used to update the parameters of all layers [22]. However, both these methods are inferior to backpropagation, as they take much longer to converge, especially for sufficiently deep ONNs [23].
An optical implementation of the backpropagation algorithm was proposed by Hughes _et al._[24], and recently demonstrated experimentally [25], showing that the training methods of current digital NNs can be applied to analog hardware. However, their scheme omitted a crucial step for optical implementation: backpropagation through nonlinear activation layers. Their method requires digital nonlinear activation and multiple optoelectronic inter-conversions inside the network, complicating the training process. Furthermore, the method applies only to a specific type of ONN that uses interferometer meshes for the linear layer, and does not generalise to other ONN architectures. Complete implementation of the backpropagation algorithm in optics, through
all the linear and nonlinear layers, in a way that generalises to many ONN systems, remains a highly challenging goal.
In this work, we address this long-standing challenge and present the first complete optical implementation of the backpropagation algorithm in a two-layer ONN. The gradients of the loss function with respect to the NN parameters are calculated by light travelling through the system in the reverse direction. The main difficulty of all-optical training lies in the requirement that the nonlinear optical element used for the activation function needs to exhibit different properties for the forward and backward propagating signals. Fortunately, as demonstrated in our earlier theoretical work [26] and explained below, there does exist a group of nonlinear phenomena, which exhibits the required set of properties with sufficient precision.
We optically train our ONNs to perform classification tasks, and our results surpass those obtained with conventional _in silico_ training. Our optical training scheme can be further generalized to other platforms using different linear layers and analog activation functions, making it an ideal tool for exploring the vast potential of analog computing for training neural networks.
## II Optical training algorithm
We consider a multilayer perceptron -- a common type of NN which consists of multiple linear layers that establish weighted connections between neurons, interleaved with activation functions that enable the network to learn complex nonlinear functions. To train the NN, one presents it with a training set of labeled examples and iteratively adjusts the NN parameters (weights and biases) to find the correct mapping between the inputs and outputs.
The training steps are summarised in Fig. 1(d). The weight matrices, denoted \(W^{(i)}\) for the \(i\)-th layer, are first initialized with random values. Each iteration of training starts by entering the input examples from the training set as input vectors \(x=a^{(0)}\) into the NN, and forward propagating through all of its layers. In every layer \(i\), one performs a matrix-vector multiplication (MVM) of the weight matrix and the activation vector,
\[z^{(i)}=W^{(i)}\times a^{(i-1)}, \tag{1}\]
followed by element-wise application of the activation function \(g(\cdot)\) to the resulting vector:
\[a^{(i)}=g\left(z^{(i)}\right). \tag{2}\]
The output \(y=a^{(L)}\) of an \(L\)-layer NN allows one to compute the loss function \(\mathcal{L}(y,t)\) that determines the difference between the network predictions \(y\) and ground truth labels \(t\) from the training set. The backpropagation algorithm calculates the gradient of this loss function with respect to all the parameters in the network through what is essentially an application of the chain rule of calculus. The network parameters are then updated using these gradients and optimization algorithms such as stochastic gradient descent. The training process is repeated until convergence.
The gradients we require are given by [11] as
\[\frac{\partial\mathcal{L}}{\partial W^{(i)}}=\delta^{(i)}\otimes a^{(i-1)}, \tag{3}\]
where \(\delta^{(i)}\) is referred to as the "error vector" at the \(i\)th layer and \(\otimes\) denotes the outer product. The error vector is calculated as
\[\delta^{(i-1)}=\left({W^{(i)}}^{T}\times\delta^{(i)}\right)\odot g^{\prime}\left(z^{(i-1)}\right), \tag{4}\]
going through the layers in reverse sequence, where \(\odot\) denotes the element-wise product. The expression for the error vector \(\delta^{(L)}\) in the last layer depends on the choice of the loss function, but for the common loss functions of mean-squared error and cross-entropy (with an appropriate choice of activation function) it is simply the difference between the NN output and the label: \(\delta^{(L)}=y-t\).
Therefore, to calculate the gradients at each layer we need one vector from the forward pass through the network (the activations) and one vector from the backward pass (the errors).
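As a sanity check, Eqs. (1)-(4) can be mirrored by a compact digital reference implementation. The following NumPy sketch is purely illustrative: the layer sizes are arbitrary and tanh stands in for the actual SA activation used in the experiment.

```python
import numpy as np

def g(z):                       # element-wise activation, Eq. (2); tanh is a stand-in
    return np.tanh(z)

def g_prime(z):                 # its derivative, needed in Eq. (4)
    return 1.0 - np.tanh(z) ** 2

rng = np.random.default_rng(0)
W = [rng.normal(size=(5, 3)), rng.normal(size=(2, 5))]    # two-layer example

def forward(x):
    zs, activations, a = [], [x], x
    for Wi in W:
        z = Wi @ a              # MVM, Eq. (1)
        a = g(z)
        zs.append(z)
        activations.append(a)
    return zs, activations

def backward(zs, activations, t):
    delta = activations[-1] - t                            # delta^(L) = y - t
    grads = [None] * len(W)
    for i in reversed(range(len(W))):
        grads[i] = np.outer(delta, activations[i])         # Eq. (3)
        if i > 0:
            delta = (W[i].T @ delta) * g_prime(zs[i - 1])  # Eq. (4)
    return grads

x, t = rng.normal(size=3), np.array([1.0, 0.0])
gradients = backward(*forward(x), t)
```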
We see from Eq. (4) that the error backpropagation consists of two operations. First, we must perform an MVM, mirroring the feedforward linear operation (1). In an ONN, this can be done by light that propagates backwards through the same linear optical arrangement [27]. The second operation consists of modulating the MVM output by the derivative of the activation function, and it poses a significant challenge for optical implementation. This is because most optical media exhibit similar properties for forward and backward propagation. On the other hand, our application requires an optical element that is (1) nonlinear in the forward direction, (2) linear in the backward direction, and (3) modulates the backward light amplitude by the derivative of the forward activation function.
We have solved this challenge with our optical backpropagation protocol, which calculates the right-hand side of Eq. (4) entirely optically, with no opto-electronic conversion or digital processing. The first component of our solution is the observation that many optical media exhibit nonlinear properties for strong optical fields, but are approximately linear for weak fields. Hence, we can satisfy conditions (1) and (2) by maintaining the back-injected beam at a much lower intensity level than the forward one. Furthermore, there exists a set of nonlinear phenomena that also addresses requirement (3). An example is saturable absorption (SA). The transmissivity of an SA medium in the backward direction turns out to approximate the derivative of its intensity-dependent transmission in the forward direction \(g^{\prime}\left(z^{(i-1)}\right)\). This approximation is valid up to a certain numerical factor and only for small values of \(z^{(i-1)}\); however, as shown in
our prior work [26], this is sufficient for successful training.
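To make requirement (3) concrete, consider the steady-state two-level SA model, for which the transmitted intensity obeys the implicit relation \(\ln(I_{\mathrm{out}}/I_{\mathrm{in}})+(I_{\mathrm{out}}-I_{\mathrm{in}})/I_{\mathrm{sat}}=-\alpha_{0}\). The sketch below solves this relation numerically and differentiates it; it works with intensities rather than the field amplitudes used in the experiment, takes \(\alpha_{0}=7.3\) from the fit reported in Sec. IV, and sets \(I_{\mathrm{sat}}=1\) as an arbitrary unit.

```python
import numpy as np
from scipy.optimize import brentq

ALPHA0, I_SAT = 7.3, 1.0   # optical depth from the fit below; I_sat in arbitrary units

def sa_transmit(i_in):
    """Transmitted intensity of a saturable absorber (steady-state two-level model)."""
    f = lambda i_out: np.log(i_out / i_in) + (i_out - i_in) / I_SAT + ALPHA0
    # The root is bracketed by the unsaturated (Beer-Lambert) and fully saturated limits.
    return brentq(f, i_in * np.exp(-ALPHA0), i_in)

def sa_gradient(i_in, eps=1e-6):
    """Numerical derivative of the forward response; a weak backward probe is
    expected to be transmitted in proportion to this, up to a scale factor."""
    return (sa_transmit(i_in + eps) - sa_transmit(i_in - eps)) / (2 * eps)
```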
## III Multilayer ONN
Our ONN, shown in Fig. 1(a,b), is implemented in a free-space tabletop setting. The neuron values are encoded in the transverse spatial structure of the propagating light field amplitude. Spatial light modulators are used to encode the input vectors and weight matrices. The NN consists of two fully-connected linear layers implemented with optical MVM following our previously demonstrated experimental design [28].
This design has a few characteristics that make it suitable for use in a deep neural network. First, it is re
Figure 1: **Illustration of optical training.****(a)** Network architecture of the ONN used in this work, which consists of two fully-connected linear layers and a hidden layer. **(b)** Simplified experimental schematic of the ONN. Each linear layer performs optical MVM with a cylindrical lens and a spatial light modulator (SLM) that encodes the weight matrix. Hidden layer activations are computed using SA in an atomic vapour cell. Light propagates in both directions during optical training. **(c)** Working principle of SA activation. The forward beam (pump) is shown by solid red arrows, backward (probe) by purple wavy arrows. The probe transmission depends on the strength of the pump and approximates the gradient of the SA function. For high forward intensity (top panel), a large portion of the atoms are excited to the upper level. Stimulated emission produced by these atoms largely compensates the absorption due to the atoms in the ground level. For weak pump (bottom panel), the excited level population is low and the absorption is significant. **(d)** Neural network training procedure. **(e)** Optical training procedure. Both signal and error propagation in the two directions are fully implemented optically. Loss function calculation and parameter update are left for electronics without interrupting the optical information flow.
configurable, so that both neuron values and network weights can be arbitrarily changed. Second, multiple MVM blocks can be cascaded to form a multilayer network, as the output of one MVM naturally forms the input of the next MVM. Using a coherent beam also allows us to encode both positive- and negative-valued weights. Finally, the MVM works in both directions, meaning the inputs and outputs are reversible, which is critical for the implementation of our optical backpropagation algorithm. The hidden layer activation between the two layers is implemented optically by means of SA in a rubidium atomic vapour cell [Fig. 1(c)].
## Results
### Linear layers
We first set up the linear layers that serve as the backbone of our ONN, and we make sure that they work accurately and simultaneously in both directions -- a highly challenging task that, to the best of our knowledge, has never been achieved before.
This involves three MVMs: first layer in the forward direction (MVM-1), second layer in both forward (MVM-2a) and backward (MVM-2b) directions. To characterise these MVMs, we apply random vectors and matrices and simultaneously measure the output of all three: the results for 300 random MVMs are presented in Fig. 2(a). To quantify the MVM performance, we define the signal-to-noise ratio (SNR, see Methods for details). As illustrated by the histograms, MVM-1 has the greatest SNR of 14.9, and MVM-2a has a lower SNR of 7.1, as a result of noise accumulation from both layers and the reduced signal range. MVM-2b has a slightly lower SNR of 6.7, because the optical system is optimized for the forward direction. Comparing these experimental results with a simple numerical model, we estimate 1.3% multiplicative noise in our MVMs, which is small enough not to degrade the ONN performance [12].
### Nonlinearity
With the linear layers fully characterized, we now measure the response of the activation units in both directions. With the vapor cell placed in the setup and the laser tuned to resonance with the atomic transition, we pass the output of MVM-1 through the vapor cell in the forward direction. The response as presented in Fig. 2(b) shows strong nonlinearity. We fit the data with the theoretically expected SA transmissivity (see Supplementary for details), thereby finding the optical depth to be \(\alpha_{0}=7.3\), which is sufficient to achieve high accuracy in ONNs [26]. The optical depth and the associated nonlinearity can be easily tuned to fit different network requirements by controlling the temperature of the vapor cell. In the backward direction, we pass weak probe beams through the vapor cell and measure the output. Both the forward and backward beams are simultaneously present in the vapor cell during the measurement.
In Fig. 2(c) we measure the effect of the forward amplitude \(z^{(1)}\) on the transmission of the backward beam through the SA. The theoretical fit for these data -- the expected backward transmissivity calculated from the physical properties of SA -- is shown by the red curve. For comparison, the orange curve shows the rescaled exact derivative \(g^{\prime}\left(z^{(1)}\right)\) of the SA function, which is the dependence required for the calculation (4) of the training signal. Although the two curves are not identical, they both match the experimental data for a broad range of neuron values generated from the random MVM, hence the setting is appropriate for training.
### All-optical classification
After setting up the two-layer ONN, we perform end-to-end optical training and inference on classification tasks: distinguishing two classes of data points on a two-dimensional plane (Fig. 3). We implement a fully-connected feed-forward architecture, with three input neurons, five hidden layer neurons and two output neurons (Fig. 1). Two input neurons are used to encode the input data point coordinates \((x_{1},x_{2})\), and the third input neuron of constant value is used to set the first layer bias. The class label is encoded by a 'one-hot' vector \((0,1)\) or \((1,0)\), and we use categorical cross-entropy as the loss function.
We optically train the ONN on three 400-element datasets with different nonlinear boundary shapes, which we refer to as 'rings', 'XOR' and 'arches' [Fig. 3(a)]. Another 200 similar elements of each set are used for validation, i.e. to measure the loss and accuracy after each epoch of training. The test set consists of a uniform grid of equally-spaced \((x_{1},x_{2})\) values. The optical inference results for the test set are displayed in Fig. 3(a) by light purple and orange backgrounds, whereas the blue circles and orange triangles show the training set elements.
For all three datasets, each epoch consists of 20 mini-batches, with a mini-batch size of 20, and we use the Adam optimizer to update the weights and biases from the gradients. We tune hyperparameters such as the learning rate and number of epochs to maximise network performance. Table 1 summarises the network architecture and hyperparameters used for each dataset.
Figure 3(b) shows the optical training performance on the 'rings' dataset. We perform five repeated training runs, and plot the loss and accuracy for the validation set after each epoch of training. To visualise how the network is learning the boundary between the two classes, we also run a test dataset after each epoch. Examples of the network output after 1, 3, 6 and 10 epochs are shown. We see that the ONN quickly learns the nonlinear boundary and gradually improves the accuracy to 100%. This indicates a strong optical nonlinearity in the system and
a good gradient approximation in optical backpropagation. Details of the training procedure are provided in the following section.
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|c|} \hline Dataset & Input neurons & Hidden neurons & Output neurons & Learning rate & Epochs & Batches per epoch & Batch size \\ \hline Rings & 2 & 5 & 2 & 0.01 & 16 & 20 & 20 \\ \hline XOR & 2 & 5 & 2 & 0.005 & 30 & 20 & 20 \\ \hline Arches & 2 & 5 & 2 & 0.01 & 25 & 20 & 20 \\ \hline \end{tabular}
\end{table}
Table 1: **Summary of network architecture and hyperparameters used in optical and digital training.** All three datasets share the same architecture and batching; only the learning rate and number of epochs differ.
Figure 2: **Multi-layer ONN characterisation.****(a)** Scatter plots of measured against theory results for MVM-1 (first layer forwards), MVM-2a (second layer forwards) and MVM-2b (second layer backwards). All three MVM results are taken simultaneously. Histograms of the signal and noise error for each MVM are displayed underneath. **(b)** First-layer activations \(a_{\text{meas}}^{(1)}\) measured after the vapor cell, plotted against the theoretically expected linear MVM-1 output \(z_{\text{theory}}^{(1)}\) before the cell. The green line is a best fit curve of the theoretical SA nonlinear function. **(c)** The amplitude of a weak constant probe passed backwards through the vapor cell as a function of the pump \(z_{\text{theory}}^{(1)}\), with constant input probe. Measurements for both forward and backward beams are taken simultaneously.
Methods section, and results for the other two datasets in Supplementary Note 3.
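The parameter update itself is the only digital step in the loop. A minimal sketch of the Adam rule applied to the optically measured gradients of Eq. (3) is given below; the learning rate follows Table 1, while the \(\beta_{1}\), \(\beta_{2}\) and \(\epsilon\) values are the common defaults and are our assumption, as they are not specified here.

```python
import numpy as np

class Adam:
    """Minimal Adam optimizer for one weight matrix."""
    def __init__(self, lr=0.01, b1=0.9, b2=0.999, eps=1e-8):
        self.lr, self.b1, self.b2, self.eps = lr, b1, b2, eps
        self.m, self.v, self.t = None, None, 0

    def step(self, w, grad):
        if self.m is None:
            self.m, self.v = np.zeros_like(w), np.zeros_like(w)
        self.t += 1
        self.m = self.b1 * self.m + (1 - self.b1) * grad        # first moment
        self.v = self.b2 * self.v + (1 - self.b2) * grad ** 2   # second moment
        m_hat = self.m / (1 - self.b1 ** self.t)                # bias correction
        v_hat = self.v / (1 - self.b2 ** self.t)
        return w - self.lr * m_hat / (np.sqrt(v_hat) + self.eps)
```

One such optimizer instance would be kept per weight matrix, with `grad` being the mini-batch average of the outer products \(\delta^{(i)}\otimes a^{(i-1)}\).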
To better understand the optical training process, we explore the evolution of the output neuron and error vector values in Fig. 3(c). First, we plot the mini-batch mean value of each output neuron, \(\bar{z}_{j}^{(2)}\), for the inputs from the two classes separately in the upper and lower panels, over the course of the training iterations. We see the output neuron values diverge in opposite ways for the two classes, such that the inputs can be distinguished and correctly classified.
Second, we similarly plot the evolution of the mini-batch mean output error, \(\bar{\delta}^{(2)}\), for each neuron. This is calculated as the difference between the network output vector \(a^{(2)}\) and the ground truth label \(y\), averaged over each mini-batch. As expected, we see the output errors converge towards zero as the system learns the correct boundary.
### Optical training vs _in-silico_ training
To demonstrate the optical training advantage, we perform _in-silico_ training of our ONN as a comparison. We digitally model our system with a neural network of the equivalent architecture, including identical learning rate, number of epochs, and all other hyperparameters. The hidden layer nonlinearity and the associated gradient are given by the best fit curve and theoretical probe response of Fig. 2(b). The trained weights are subsequently used for inference with our ONN. The top and bottom rows in Fig. 3(a) plot the network output of the test boundary set, after the system has been trained optically and digitally, respectively, for all three datasets. In all cases, the optically trained network achieves almost perfect accuracy, whilst the digitally trained network is clearly not optimised, with the network prediction not matching the data. This is further evidence for the already well-
Figure 3: **Optical training performance.****(a)** Decision boundary charts of the ONN inference output for three different classification tasks, after the ONN has been trained optically (top) or _in-silico_ (bottom). **(b)** Learning curves of the ONN for classification of the ‘rings’ dataset, showing mean and standard deviation of the validation loss and accuracy averaged over 5 repeated training runs. Shown above are decision boundary charts of the ONN output for the test set, after different epochs. **(c)** Evolution of output neuron values, and of output errors, for the training set inputs of the two classes.
documented advantages of hardware-in-the-loop training schemes.
## Discussion
Our optical training scheme is surprisingly simple and effective. It adds minimal computational overhead to the network, since it requires neither _a priori_ simulation nor intricate mapping of network parameters to physical device settings. It also imposes minimal hardware complexity on the system, as it requires only a few additional beam splitters and detectors to measure the activation and error values for parameter updates.
Our scheme can be generalised and applied to many other analog neural networks with different physical implementations of the linear and nonlinear layers. We list a few examples in Table 2. Common optical linear operations include MVM, diffraction and convolution. Compatible optical MVM examples include our free-space multiplier and photonic crossbar array [29], as they are both bidirectional, in the sense that an optical field propagating backwards through these arrangements gets multiplied by the transpose of the weight matrix. Diffraction naturally works in both directions, hence diffractive neural networks constructed using different programmable amplitude and phase masks also satisfy the requirements [30]. Optical convolution, achieved with the Fourier transform by means of a lens, and mean pooling, achieved through an optical low pass filter, also work in both directions. Therefore, a convolutional neural network can be optically trained as well. Detailed analysis on the generalization to these linear layers can be found in Supplementary Note 4.
Regarding the generalization to other nonlinearity choices, the critical requirement is the ability to acquire gradients during backpropagation. Our pump-probe method is compatible with multiple types of optical nonlinearities: saturable absorption, saturable gain and intensity-dependent phase modulation [31]. Using saturable gain as the nonlinearity offers the added advantage of loss compensation in a deep network, and using intensity-dependent phase modulation nonlinearity, such as self-lensing, allows one to build complex-valued optical neural networks with potentially stronger learning capabilities [32; 26; 33].
In our ONN training implementation, some computational operations remain digital, specifically the calculation of the last layer error \(\delta^{(2)}\) and the outer product (3) between the activation and error vectors. Both these operations can be done optically if needed [26]. The error vector can in many cases be obtained by subtracting the ONN output from the label vector by way of destructive interference [12]. Interference can also be utilized to compute the outer product by fanning out the two vectors and overlapping them in a criss-cross fashion onto a pixelated photosensor.
Our optically trained ONN can be scaled up to improve computing performance. Previously, using a similar experimental setup, we have demonstrated an ONN with 100 neurons per layer and high multiplier accuracy [12], and 1000 neurons can be supported by commercial SLMs or liquid crystal displays. Optical encoders and detectors can work at speeds up to 100 GHz using off-the-shelf components, enabling ultra-fast end-to-end optical computing at low latency. Therefore computational speeds up to \(10^{17}\) operations per second are within reach, and our optical training method is compatible with this productivity rate.
## Methods
### Multilayer ONN
To construct the multi-layer ONN, we connect two optical multipliers in series. For the first layer (MVM-1), the input neuron vector \(x\) is encoded into the field amplitude of a coherent beam using a digital micromirror device (DMD), DMD-1. This is a binary amplitude modulator, and every physical pixel is a micromirror that can reflect at two angles representing 0 or 1. By grouping 128 physical pixels as a block, we are able to represent 7-bit positive-valued inputs on DMD-1, with the input value proportional to the number of binary pixels 'turned on' in each block.
Since MVM requires performing dot products of the input vector with every row of the matrix, we create multiple copies of the input vector on DMD-1, and image them onto the \(W^{(1)}\) matrix mask -- a phase-only liquid-crystal spatial light modulator (LC-SLM), SLM-1 -- for element-wise multiplication. The MVM-1 result \(z^{(1)}\) is obtained by summing the element-wise products using a cylindrical lens (first optical 'fan-in'), and passing the beam through a narrow adjustable slit to select the zero spatial frequency component. The weights in our ONN are real-valued, encoded by LC-SLMs with 8-bit resolution using a phase grating modulation method that enables arbitrary and accurate field control [34].
The beam next passes through a rubidium vapor cell to apply the activation function, such that immediately after the cell the beam encodes the hidden layer activation vector, \(a^{(1)}\). The beam continues to propagate and becomes the input for the second linear layer. Another cylindrical lens is used to expand the beam (first optical 'fan-out'), before modulation by the second weight matrix mask SLM-2. Finally, summation by a third cylindrical lens (second optical 'fan-in') completes the second MVM in the forward direction (MVM-2a), and the final beam profile encodes \(z^{(2)}\).
To read out the activation vectors required for the optical training, we insert beam splitters at the output of each MVM to tap-off a small portion of the beam. The real-valued vectors are measured by high-speed cameras, using coherent detection techniques detailed in Supplementary Note 2.
At the output layer of the ONN we use a digital softmax function to convert the output values into probabilities, and calculate the loss function and output error vector, which initiates the optical backpropagation.
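A sketch of this digital output stage, producing the loss and the error vector \(\delta^{(2)}=y-t\) that seeds the optical backpropagation:

```python
import numpy as np

def output_stage(z2, t):
    """Softmax + categorical cross-entropy on the measured output vector z^(2)."""
    y = np.exp(z2 - z2.max())      # subtract max for numerical stability
    y /= y.sum()
    loss = -np.sum(t * np.log(y + 1e-12))
    return y - t, loss             # (delta^(2), loss)
```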
### Optical backpropagation
The output error vector \(\delta^{(2)}\) is encoded in the backward beam by using DMD-2 to modulate a beam obtained from the same laser as the forward propagating beam. The backward beam is introduced to the system through one of the arms of the beam splitter placed at the output of MVM-2a, and carefully aligned so as to overlap with the forward beam. SLM-2 performs element-wise multiplication by the transpose of the second weight matrix.
The cylindrical lens that performs 'fan-out' for the forward beam performs 'fan-in' for the backward beam into a slit, completing the second-layer backward MVM (MVM-2b). Passing through the vapor cell modulates the beam by the derivative of the activation function, after which the beam encodes the hidden layer error vector \(\delta^{(1)}\). Another beam splitter and camera are used to tap off the backward beam and measure the result.
In practice, two halves of the same DMD act as DMD-1 and DMD-2, and a portion of SLM-1 is used to encode the sign of the error vector. A full experiment diagram is provided in Supplementary Note 1.
Each training iteration consists of optically measuring all of \(a^{(1)}\), \(z^{(2)}\) and \(\delta^{(1)}\). These vectors are used, along with the inputs \(x=a^{(0)}\), to calculate the weight gradients according to Eq. (3) and weight updates, which are then applied to the LC-SLMs. This process is repeated for all the mini-batches until the network converges.
### SA activation
The cell with atomic rubidium vapor is heated to 70 °C by a simple heating jacket and temperature controller. The laser wavelength is locked to the \(D_{2}\) transition at 780 nm.
The power of the forward propagating beam is adjusted to ensure the beam at the vapor cell is intense enough to saturate the absorption, whilst the maximum power of the backward propagating beam is attenuated to approximately 2% of the maximum forward beam power, to ensure a linear response when passing through the cell in the presence of a uniform pump.
In the experiment, the backward probe response does not perfectly match the simple two-level atomic model, due to two factors.
First, the probe does not undergo 100% absorption even with the pump turned off. Second, a strong pump beam causes the atoms to fluoresce in all directions, including along the backward probe path. Therefore, the backward signal has a background offset proportional to the forward signal. To compensate for these issues, three measurements are taken to determine the probe response \(\delta^{(1)}\) for each training iteration: pump only; probe only; and both pump and probe. In this way, the background terms due to pump fluorescence and unabsorbed probe can be negated.
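A sketch of this three-frame correction (the function and argument names are hypothetical; each argument stands for a backward-camera frame, and the exact combination used in the experiment may differ):

```python
def probe_response(pump_and_probe, pump_only, probe_only):
    """Remove the pump-fluorescence background (pump-only frame) and the
    pump-independent unabsorbed-probe offset (probe-only frame) from the
    measured backward signal."""
    return (pump_and_probe - pump_only) - probe_only
```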
**Acknowledgements** This work is supported by Innovate UK Smart Grant 10043476. X.G. acknowledges support from the Royal Commission for the Exhibition of 1851 Research Fellowship.
**Author contributions** X.G. and A.L. conceived the experiment. J.S. carried out the experiment and performed the data analysis. All the authors jointly prepared the manuscript. This work was done under the supervision of A.L.
**Competing interests** The authors declare no competing interests in this work.
|
2306.15599 | Coupling a Recurrent Neural Network to SPAD TCSPC Systems for Real-time
Fluorescence Lifetime Imaging | Fluorescence lifetime imaging (FLI) has been receiving increased attention in
recent years as a powerful diagnostic technique in biological and medical
research. However, existing FLI systems often suffer from a tradeoff between
processing speed, accuracy, and robustness. In this paper, we propose a robust
approach that enables fast FLI with no degradation of accuracy. The approach is
based on a SPAD TCSPC system coupled to a recurrent neural network (RNN) that
accurately estimates the fluorescence lifetime directly from raw timestamps
without building histograms, thereby drastically reducing transfer data volumes
and hardware resource utilization, thus enabling FLI acquisition at video rate.
We train two variants of the RNN on a synthetic dataset and compare the results
to those obtained using center-of-mass method (CMM) and least squares fitting
(LS fitting). Results demonstrate that two RNN variants, gated recurrent unit
(GRU) and long short-term memory (LSTM), are comparable to CMM and LS fitting
in terms of accuracy, while outperforming them in background noise by a large
margin. To explore the ultimate limits of the approach, we derived the
Cramer-Rao lower bound of the measurement, showing that RNN yields lifetime
estimations with near-optimal precision. Moreover, our FLI model, which is
purely trained on synthetic datasets, works well with never-seen-before,
real-world data. To demonstrate real-time operation, we have built a FLI
microscope based on Piccolo, a 32x32 SPAD sensor developed in our lab. Four
quantized GRU cores, capable of processing up to 4 million photons per second,
are deployed on a Xilinx Kintex-7 FPGA. Powered by the GRU, the FLI setup can
retrieve real-time fluorescence lifetime images at up to 10 frames per second.
The proposed FLI system is promising and ideally suited for biomedical
applications. | Yang Lin, Paul Mos, Andrei Ardelean, Claudio Bruschini, Edoardo Charbon | 2023-06-27T16:37:37Z | http://arxiv.org/abs/2306.15599v2 | Coupling a Recurrent Neural Network to SPAD TCSPC Systems for Real-time Fluorescence Lifetime Imaging
###### Abstract
Fluorescence lifetime imaging (FLI) has been receiving increased attention in recent years as a powerful diagnostic technique in biological and medical research. However, existing FLI systems often suffer from a tradeoff between processing speed, accuracy, and robustness. In this paper, we propose a robust approach that enables fast FLI with no degradation of accuracy. The approach is based on a SPAD TCSPC system coupled to a recurrent neural network (RNN) that accurately estimates the fluorescence lifetime directly from raw timestamps without building histograms, thereby drastically reducing transfer data volumes and hardware resource utilization, thus enabling FLI acquisition at video rate. We train two variants of the RNN on a synthetic dataset and compare the results to those obtained using center-of-mass method (CMM) and least squares fitting (LS fitting). Results demonstrate that two RNN variants, gated recurrent unit (GRU) and long short-term memory (LSTM), are comparable to CMM and LS fitting in terms of accuracy, while outperforming them in background noise by a large margin. To explore the ultimate limits of the approach, we derived the Cramer-Rao lower bound of the measurement, showing that RNN yields lifetime estimations with near-optimal precision. Moreover, our FLI model, which is purely trained on synthetic datasets, works well with never-seen-before, real-world data. To demonstrate real-time operation, we have built a FLI microscope based on Piccolo, a 32x32 SPAD sensor developed in our lab. Four quantized GRU cores, capable of processing up to 4 million photons per second, are deployed on a Xilinx Kintex-7 FPGA. Powered by the GRU, the FLI setup can retrieve real-time fluorescence lifetime images at up to 10 frames per second. The proposed FLI system is promising and ideally suited for biomedical applications, including biological imaging, biomedical diagnostics, and fluorescence-assisted surgery, etc.
FLIM SPAD Neural network
## Introduction
Fluorescence lifetime imaging (FLI) is an imaging technique for the characterization of molecules based on the time they take to decay from an excited state to the ground state [1]. Compared with fluorescence intensity imaging, FLI is insensitive to excitation intensity fluctuations and variable probe concentration, and suffers only limited photobleaching. Besides, through the appropriate use of targeted fluorophores, FLI is able to quantitatively measure the parameters of the microenvironment around fluorescent molecules, such as pH, viscosity, and ion concentrations[2, 3]. With these advantages, FLI has wide applications in the biological sciences, for example to monitor protein-protein interactions[4], and plays an increasing role in medical and clinical settings such as visualization of tumor margins[5], cancerous tissue detection[1, 6], and computer-assisted robotic surgery[7, 8].
Time-correlated single-photon counting (TCSPC) is popular among FLI systems due to its superiority over other techniques in terms of time resolution, dynamic range, and robustness. In TCSPC, one records the arrival times of individual photons emitted by molecules upon photoexcitation [9, 10, 11]. After repeated measurements, one can construct a histogram of photon arrivals, which closely matches the true response of the molecules, thus enabling the extraction of the lifetime, as shown in Figure 1. The instrumentation of a typical TCSPC FLI system features a confocal setup, including a single-photon detector, a dedicated TCSPC module for time tagging, and a PC for lifetime estimation[12, 9]. Such systems are mostly unsuitable for emerging clinical applications such as non-invasive monitoring, where a miniaturized and fast TCSPC system is desired [13]. Besides, the large amount of data generated by TCSPC places a heavy burden on data transfer, data storage, and data processing. A powerful PC, sometimes equipped with dedicated GPUs, is required to acquire and process TCSPC data. TCSPC requires photodetectors with picosecond time resolution and single-photon detection capability. In the last decade, single-photon avalanche diodes (SPADs) have been used successfully in TCSPC systems and, with the advent of CMOS SPADs, the expansion of these detectors into high-resolution image sensors for widefield imaging has been accomplished [14]. Several reviews of the use of SPADs in biophotonics have recently appeared [15, 16, 17].
Least-square (LS) fitting and maximum likelihood estimation (MLE) are widely used for fluorescence lifetime estimation[18, 19, 20]. These two methods rely on iterations to achieve high accuracy, but they are time-consuming because they are computationally expensive. Various non-fitting methods have been proposed to tackle these problems but often compromise on other specifications; the Center-of-Mass method (CMM) is a typical one. CMM is a simple, fast, and photon-efficient alternative, which has already been applied in some real-time FLI systems[21, 22, 23]. However, it is very sensitive to background noise, and the estimation is biased due to the use of truncated histograms[24].
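For a mono-exponential decay with negligible background and a repetition period much longer than the lifetime, CMM reduces to a shifted sample mean, since \(E[t]=t_{0}+\tau\). A minimal sketch (with the IRF peak position \(t_{0}\) assumed known):

```python
import numpy as np

def cmm_lifetime(timestamps, t0=0.0):
    """Center-of-mass lifetime estimate; uniform background photons pull the
    mean towards half the measurement window, hence CMM's noise sensitivity."""
    return np.mean(timestamps) - t0
```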
Neural networks provide a new path to fast fluorescence lifetime extraction[25]. The first neural network-based model for fluorescence lifetime estimation was proposed in 2016, reporting higher accuracy and faster processing than LS fitting[26]. Since then, several neural network architectures, including fully connected neural network (FCNN), convolutional neural network (CNN), and generative adversarial network (GAN) solutions, have been explored to this end[27, 28, 29, 30, 31]. These techniques showed the ability to resolve multi-exponential decays and achieve accurate and fast estimation even in low photon-count scenarios. Apart from fluorescence lifetime determination, these neural networks can extract high-level features and can be integrated into a large-scale neural network for end-to-end lifetime image analysis such as cancerous tissue margin detection[32] and microglia detection[33].
In this work, we propose to adopt the paradigm of edge artificial intelligence (Edge AI), constructing a recurrent neural network (RNN)-coupled SPAD TCSPC system for real-time FLI. We train and test variants of RNNs for lifetime estimation and deploy them on an FPGA to realize event-driven and near-sensor processing. The working principle is illustrated in Figure 1. Upon the arrival of a photon, its timestamp is processed by the RNN directly, without histogramming. From photon detection to lifetime estimation, the whole system is integrated into a miniaturized device, which achieves reduced data transfer rates. With the flexibility to retrain neural networks, the same system can be easily reused for other, very different applications, such as classification.
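As an illustration of the histogram-free principle, a PyTorch sketch of a GRU that consumes one timestamp per step, mirroring Figure 1, is given below; the hidden size matches GRU-32, while the single linear read-out head is our simplification, not necessarily the deployed architecture.

```python
import torch
import torch.nn as nn

class TimestampGRU(nn.Module):
    """Event-driven lifetime estimator: no histogram is ever built."""
    def __init__(self, hidden=32):
        super().__init__()
        self.cell = nn.GRUCell(1, hidden)   # one scalar timestamp per step
        self.head = nn.Linear(hidden, 1)    # lifetime read-out

    def forward(self, timestamps):          # timestamps: 1-D tensor, in ns
        h = torch.zeros(1, self.cell.hidden_size)
        for t in timestamps:                # hidden state updated per photon
            h = self.cell(t.view(1, 1), h)
        return self.head(h).squeeze()       # lifetime estimate

model = TimestampGRU()
tau_hat = model(torch.rand(1024) * 50.0)    # e.g. 1024 photons in a 50 ns window
```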
## Results
The proposed system comprises a SPAD image sensor with timestamping capability coupled to an FPGA for implementation of neural networks _in situ_. In this section, we describe the utilized RNN, its training, and the achieved results. We also describe the upper bounds on accuracy that were derived to contextualize the results obtained with the RNNs.
### RNNs trained on Synthetic Datasets and Performance
We train and evaluate RNNs on synthetic datasets. Three RNN variants, namely simple RNN, gated recurrent unit (GRU)[34], and long short-term memory (LSTM)[35], are adopted. These RNNs are constructed with 8, 16, and 32 hidden units, respectively. LS fitting and CMM are also benchmarked. The metrics used for evaluation are the root mean squared error (RMSE), mean absolute error (MAE), and mean absolute percentage error (MAPE). A common lifetime range, from 0.2 to 5 ns, is covered here, and we assume a laser repetition frequency of 20 MHz.
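For reference, the three metrics over a batch of lifetime predictions (a minimal sketch; lifetimes in ns):

```python
import numpy as np

def metrics(pred, true):
    err = pred - true
    rmse = np.sqrt(np.mean(err ** 2))       # root mean squared error
    mae = np.mean(np.abs(err))              # mean absolute error
    mape = np.mean(np.abs(err) / true)      # mean absolute percentage error
    return rmse, mae, mape
```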
We start with a simple situation in which background noise is absent. All the tests are run on a PC with 32-bit floating point (32FP) precision. The results are presented in Table 1. One can observe that CMM achieves the lowest error in MAE and MAPE, GRU-32 achieves the lowest RMSE error, and GRU-32 and LSTM-32 have very similar performance to CMM. The performance of CMM itself is understandable. In this case, background noise is not considered, and the repetition period is 10 times the longest lifetime. Under these conditions, CMM is very close to the maximum likelihood estimator of the lifetime. When comparing Simple RNN, GRU, and LSTM, one can observe that GRU
outperforms LSTM by a small margin, and both of them perform much better than Simple RNN. As the model size decreases, errors increase accordingly.
Background noise is often inevitable during fluorescence lifetime imaging, especially in diagnostic and clinical setups where the interruption to existing workflows is supposed to be minimized[13]. In our FLI system, it is estimated that at least 1% of the collected timestamps are from background noise. Therefore, we study the performance of each method under varying background noise levels. For simplicity, only LSTM-32 is used to compare with benchmarks. LSTM-32
\begin{table}
\begin{tabular}{c c c c} \hline \hline Model & RMSE & MAE & MAPE \\ \hline LS Fitting & 0.1642 & 0.1201 & 0.0553 \\ CMM & **0.0915** & **0.0642** & **0.0250** \\ Simple RNN-8 & 0.2516 & 0.1979 & 0.0969 \\ Simple RNN-16 & 0.2396 & 0.1798 & 0.0771 \\ Simple RNN-32 & 0.1877 & 0.1415 & 0.0659 \\ GRU-8 & 0.0957 & 0.0695 & 0.0297 \\ GRU-16 & 0.0928 & 0.0666 & 0.0274 \\ GRU-32 & **0.0908** & **0.0647** & **0.0261** \\ LSTM-8 & 0.0981 & 0.0720 & 0.0423 \\ LSTM-16 & 0.0928 & 0.0669 & 0.0277 \\ LSTM-32 & 0.0916 & 0.0656 & 0.0267 \\ \hline \hline \end{tabular}
\end{table}
Table 1: RNN models are trained and tested on a synthetic dataset, where the fluorescence decay model is mono-exponential, lifetime ranges from 0.2 and 5 ns, laser repetition frequency is 20 MHz, and background noise is not considered. Their performance is benchmarked against Least-square (LS) fitting and Center-of-Mass method (CMM). RMSE: root mean squared error, MAE: mean absolute error, MAPE: mean absolute percentage error.
Figure 1: In a traditional TCSPC FLI system, the sample is excited by a laser repeatedly, and the emission photons are detected and time-tagged. A histogram is gradually built from these timestamps, from which the lifetime can be extracted after the acquisition is completed. In our proposed system, upon the arrival of a photon, the timestamp is fed into the RNN immediately. The RNN updates the hidden state accordingly and idles until the next photon. The schematic and formula of the simple RNN are shown here. At timestep \(n\), the RNN takes the current information \(x_{n}\) and the past information \(h_{n-1}\) as input, then updates the memory to the current information \(h_{n}\) and outputs a prediction \(y_{n}\).
is trained on a synthetic dataset, where 0 to 10% uniform background noise is added to the samples randomly. Here we also illustrate the result of CMM with background noise subtraction, assuming that the number of photons from background noise is known, though it is often not the case in real-time FLI systems. Two synthetic datasets are built for evaluation, where the background noise ratios are 1% (SNR=20dB) and 5% (SNR=12.8dB), respectively. The results are presented in Table 2. We can see that LSTM-32 outperforms other methods in all metrics and scenarios. Combined with Table 1, one can observe that errors increase when the background noise increases for all the methods. However, LSTM and LS fitting are more robust to background noise, while CMM is extremely sensitive to it. This finding is in agreement with previous studies[36, 1].
### Cramer-Rao Lower Bound Analysis
To compare the performance with the theoretical optimum, the Cramer-Rao lower bound (CRLB) on the accuracy of the lifetime estimate is calculated with an open-source software package [36], given the setting parameters. The variance of the lifetime estimation methods is calculated from Monte Carlo experiments. For CMM and RNN, 3000 samples are used; for the least-squares method, 1000 samples are used to reduce running time.
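A sketch of such a Monte Carlo run for the background-free mono-exponential case; for example, `relative_std(np.mean)` evaluates CMM with \(t_{0}=0\):

```python
import numpy as np

def relative_std(estimator, tau=2.5, n_photons=1024, n_trials=3000, seed=0):
    """Relative standard deviation of a lifetime estimator, to be compared
    against sqrt(CRLB)/tau."""
    rng = np.random.default_rng(seed)
    estimates = [estimator(rng.exponential(tau, n_photons))
                 for _ in range(n_trials)]
    return np.std(estimates) / tau
```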
The relationship between lifetime and the relative standard deviation of the different estimators is shown in Figure 2a, where the photon count is 1024. One can observe that the variance of CMM and LSTM-32 almost reaches the CRLB, which suggests that CMM and LSTM-32 are near-optimal estimators. Considering that the laser repetition period is much longer than the lifetime and that background noise is not included, it is understandable that CMM reaches the CRLB, since it is approximately a maximum likelihood estimator. LS fitting performs worse than CMM and LSTM-32, which is likely due to the underlying assumption of Gaussian errors.
The relationship between the number of photons and the relative standard deviation of the different estimators is shown in Figure 2b, where the lifetime is set at 2.5 ns. Similar to Figure 2a, the relative standard deviations of CMM and LSTM-32 almost reach the CRLB, while the least-squares fitting performs worse. This result suggests that CMM and LSTM-32 are efficient estimators over different photon inputs, achieving excellent photon efficiency. They need less than half of the data to obtain results similar to LS fitting.
We also analyze the CRLB with background noise. The results are shown in Figure 2. Comparing Figure 2c and Figure 2e with Figure 2a, we can see that the CRLB is raised slightly in the presence of background noise. The relative standard deviation of LS fitting stays almost unchanged, and that of LSTM-32 increases slightly but is still much better than LS fitting. As for CMM, one can see that the relative standard deviation increases dramatically at shorter lifetimes, which suggests that CMM is very sensitive to background noise for short lifetimes. By comparing Figure 2d and Figure 2f with Figure 2b, we find that the relative standard deviation does not vary with 1% background noise. With 5% background noise, however, CMM shows a clear degradation of performance, its relative standard deviation approaching that of LS fitting.
### Performance on Experimental Dataset
To verify that the RNNs, which are purely trained on synthetic datasets, perform well on real-world data, the RNNs are tested on experimental data along with CMM and LS fitting as benchmarks. We prepare a fluorescence lifetime-encoded microbeads sample and acquire the TCSPC data with a commercial confocal FLIM setup. It is estimated that the background noise is below 1%. The LSTM-32 trained on the dataset with 0% to 10% background noise is used. The corresponding results are shown in Figure 3.
The histograms of the three samples share a similar shape. As for LS fitting, an instrument response function (IRF) is estimated from histograms of all pixels and then shared among them, which accounts for its good performance
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline & \multicolumn{3}{c}{1\% Background Noise} & \multicolumn{3}{c}{5\% Background Noise} \\ & RMSE & MAE & MAPE & RMSE & MAE & MAPE \\ \hline LS fitting & 0.1678 & 0.1226 & 0.0562 & 0.1883 & 0.1368 & 0.0609 \\ CMM & 0.2367 & 0.2168 & 0.1577 & 1.0742 & 1.0635 & 0.7799 \\ CMM* & 0.1099 & 0.0839 & 0.0456 & 0.2476 & 0.2128 & 0.1444 \\ LSTM-32 & **0.1019** & **0.0733** & **0.0304** & **0.1097** & **0.0784** & **0.0323** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Performance of LS fitting, CMM, CMM with background subtraction, and LSTM-32 in the presence of 1% and 5% background noise. *CMM with background noise subtraction. LSTM-32 is trained on a dataset including 0% to 10% random background noise for generalization in different scenarios.
Figure 2: Cramer-Rao lower bound analysis when including 0%, 1%, and 5% background noise levels.
here. The result of CMM has a 2 ns bias, which is corrected by the estimated IRF. It is worth noting that for the first peak in the histogram, LSTM shows a sharper Gaussian shape, which confirms LSTM's good performance under low fluorescence intensity and short lifetime.
### Real-time FLIM Setup with on-FPGA RNN
We further built a real-time FLIM system by utilizing a SPAD array sensor with on-chip time-to-digital converters (TDCs) and deploying the aforementioned RNNs on an FPGA for near-sensor processing. The schematic of our setup is shown in Figure 4. The 32\(\times\)32 Piccolo SPAD sensor developed at EPFL [37, 38] is utilized, on which 128 TDCs offer 50 ps temporal resolution. The sensor is controlled by a Kintex-7 FPGA, where four GRU cores are implemented for lifetime estimation. The four GRU cores are able to process up to 4 million photons per second. While the data transfer rate to the PC is 20 Mb/s for histogram mode and 80 Mb/s for raw mode, it reduces to only 240 kb/s when applying the proposed RNN-based lifetime estimation method.
We prepare a sample containing fluorescent beads with a lifetime of 5.5 ns. The sample is measured by our system in real-time at 5 frames per second. During the imaging, we move the sample plate forward to observe the movement of beads in the images. The result is shown in Figure 5. The lifetime images are also displayed in rainbow scale. The average photon count for the beads is around 500 per pixel. This illustrates that our system can capture the movement of beads and provide an accurate lifetime estimation. One can also observe that there are some outliers, e.g. dark blue dots and red dots among the green beads. Apart from statistical fluctuations, RNN-based lifetimes tend to be lower when there are not enough photons, which explains why the blue dots are mostly darker than the surrounding pixels.
## Discussion
The proposed on-FPGA RNN removes the need for histogramming altogether by taking raw timestamps directly as input, which frees hardware resources on the FPGA or PC and significantly reduces the burden on data transfer and data processing. The analysis of synthetic data and the CRLB shows that the RNN, as a data-driven method, reaches excellent accuracy and robustness compared to its competitors, while retaining higher photon efficiency.
The performance of the system can be further improved by using a larger SPAD sensor and by accommodating more RNN cores on the FPGA. A more powerful FPGA or even dedicated neural network accelerators can be used to accommodate more RNN cores. More efficient quantization and approximate schemes can also be explored to reduce resource utilization and latency. In addition, these GRU cores can be further optimized by VHDL implementation. In
Figure 3: Comparison of LSTM, CMM, and LS Fitting on experimental data. The sample contains a mixture of fluorescent beads with three different lifetimes (1.7, 2.7, and 5.5 ns). The fluorescence lifetime images are displayed using a rainbow scale, where the brightness represents photon counts and the hue represents lifetimes. The lifetime histograms among all pixels are shown below. Most pixels are assumed to contain mono-exponential fluorophores. Two or three lifetimes might be mixed at the edge of the beads.
Figure 4: Real-time FLIM system based on the Piccolo 32\(\times\)32 SPAD sensor and on-FPGA RNNs. The main body of the microscope is from a single-channel Cerna® Confocal Microscope System (ThorLabs, Newton, New Jersey, United States). On the top is the Piccolo system, composed of the SPAD sensor itself, motherboard, breakout board, and FPGA. The SPAD sensor has 32\(\times\)32 SPADs and 128 on-chip TDCs, offering 50 ps temporal resolution. The FPGA is programmed to control the SPAD sensor and communicate with PC through USB 3. The RNN is also deployed on the same FPGA.
Figure 5: Real-time lifetime image sequence from our FLIM system. The sample contains fluorescent beads with a 5.5 ns reference lifetime. (See the full video in the Supplementary Material)
the future, the RNN cores could be implemented on ASIC and stacked together with SPAD arrays by means of 3-D stacking technology, realizing in-sensor processing[39].
Though the proposed system has only been used for FLI to date, it can be easily adapted for other applications by retraining the RNN. It can be further combined with other large-scale neural networks for high-level applications, where the output of the RNN consists of high-level features learned automatically by the network and serves as input for other neural networks. Existing FLI-based high-level applications such as margin assessment[5, 32] could also be directly incorporated into our system.
## Methods
### Dataset
#### Synthetic Dataset
A simulation that faithfully captures the features of the real scene is key to constructing synthetic datasets. To accurately model a real FLI system, we take fluorescence decay, instrument response, background noise, and dark counts into account. The latter two are often neglected in previous studies; however, several scenarios, such as fluorescence-assisted surgery, exhibit strong background noise that cannot simply be ignored. Different from existing NN-based methods, which take histograms as input, we generate synthetic datasets at the timestamp level. Assuming that at most one photon reaches the detector in every repetition period (i.e. the pile-up effect is not considered), the timestamps \(t\), namely the arrival times of photons, are modeled as:
\[t=\sum_{i=1}^{N-1}\mathbf{1}_{k=i}(t_{fluo_{i}}+t_{irf})+\mathbf{1}_{k=N}t_{bg}, \tag{1}\]
where \(\mathbf{1}\) is the indicator function, \(k\) is the component indicator, \(t_{fluo}\) is the fluorescence time delay, \(t_{irf}\) is the instrument response time delay, and \(t_{bg}\) is the arrival time of background noise or dark counts.
The component indicator \(k\) is a random variable with a categorical distribution, representing the source of the incoming photon, which can be either a component of the fluorescence decay or background noise. The probability density function (PDF) of \(k\) is
\[f(k|\mathbf{p})=\prod_{i=1}^{N}p_{i}^{\mathbf{1}_{k=i}}, \tag{2}\]
where \(p_{i}\) represents the normalized intensity of fluorescence or background noise.
The fluorescence time delay \(t_{fluo_{i}}\) is subject to an exponential distribution. Its PDF is:
\[f(t_{fluo_{i}}|\tau_{i})=\frac{1}{\tau_{i}}e^{-\frac{t_{fluo_{i}}}{\tau_{i}}}, \tag{3}\]
where \(\tau_{i}\) is the lifetime of the fluorescence decay.
The instrument response time delay \(t_{irf}\) is subject to a Gaussian distribution. Its PDF is:
\[f(t_{irf}|t_{0},\sigma)=\frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{1}{2}\left(\frac {t_{irf}-t_{0}}{\sigma}\right)^{2}}, \tag{4}\]
where \(t_{0}\) is the peak position, and \(\sigma\) can be expressed in terms of the full width at half maximum (FWHM):
\[\sigma=\frac{FWHM}{2\sqrt{2\ln 2}}. \tag{5}\]
The time of arrival of background noise \(t_{bg}\) is subject to a uniform distribution. Its PDF is:
\[f(t_{bg}|T)=\frac{1}{T}, \tag{6}\]
where \(T\) is the repetition period.
Given a set of the above parameters, synthetic datasets can be generated with different lifetime ranges, background noise ratios, and components of fluorescence. In this work, the FWHM is assumed to be 167.3 ps, in accordance with previous studies[29, 30]; \(t_{0}\) for each sample is generated from a uniform distribution from 0 to 5 ns. To train models in the presence of background noise, \(p_{N}\) for each sample is generated from a uniform distribution from 0 to \(10\%\). Each dataset contains 500,000 samples, and each sample contains 1024 timestamps.
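As a concrete illustration, the sketch below draws synthetic timestamps following Eqs. (1)-(6) with NumPy. The function and parameter names are illustrative rather than taken from our code; component indices are 0-based, and photons whose IRF-shifted arrival falls outside the repetition period are not wrapped, which is a simplification of the full simulation.

```python
import numpy as np

def synth_timestamps(taus, p, t0, fwhm, T, n_photons, rng=None):
    """Draw photon arrival times following Eqs. (1)-(6).

    taus: component lifetimes (ns); p: normalised intensities, with p[-1]
    the background fraction; t0, fwhm: IRF peak and width (ns); T: period (ns).
    """
    rng = np.random.default_rng() if rng is None else rng
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))   # Eq. (5)
    k = rng.choice(len(p), size=n_photons, p=p)         # Eq. (2), 0-based
    t = np.empty(n_photons)
    bg = k == len(p) - 1                                # background photons
    t[bg] = rng.uniform(0.0, T, bg.sum())               # Eq. (6)
    for i, tau in enumerate(taus):                      # fluorescence photons
        m = k == i
        t[m] = (rng.exponential(tau, m.sum())           # Eq. (3)
                + rng.normal(t0, sigma, m.sum()))       # Eq. (4), summed per Eq. (1)
    return t

# Example close to the settings above: one 2.7 ns component, 10% background
ts = synth_timestamps([2.7], [0.9, 0.1], t0=2.0, fwhm=0.1673, T=50.0, n_photons=1024)
```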
#### Experimental Dataset
Testing the model, which is trained purely on synthetic data, on experimental data is essential to ensure its applicability in real-world scenarios; thus an experimental dataset is curated for evaluation. Fluorescent beads from PolyAn with reference lifetimes of 1.7, 2.7, and 5.5 ns are adopted as samples. The beads are made of 3D-carboxy, with a diameter of 6.5 \(\mu\)m. The excitation wavelength is around 488 nm and the emission spectra are 602-800 nm, 545-800 nm, and 559-718 nm, respectively. The fluorescence intensity ratio of these three beads is around 1:2:5. Fluorescent beads with different lifetimes are mixed together in all possible combinations and put in a 384-well plate for imaging.
A commercial FLIM system, available at the Bioimaging and Optics Platform (BIOP) of EPFL, is utilized to measure the sample and acquire the experimental data. A confocal microscope (Leica SP 8, w/ HyD SMD detector) is used for imaging, a super-continuum laser (NKT Photonics, Superk Extreme EXW-45) is used for illumination, and a TCSPC module (PicoHarp 300 TCSPC) is used for time-tagging. The sample is excited under a 20 MHz laser, corresponding to a repetition period of 50 ns. The excitation wavelength is 486 nm and the spectrum of the emission filter ranges from 600 to 700 nm. The temporal resolution of time-tagging is 16 ps.
### Neural Network
The neural network is first built, trained, and evaluated on the PC with PyTorch[40]. Then its weights are quantized and the activation functions are approximated. After that, the neural network is written in C/C++, loading the quantized weights and approximated activation functions, and is further translated into hardware description language (HDL) by Vitis High-level Synthesis (HLS).
### Model
Three RNN variants are adopted here: simple RNN, GRU, and LSTM. The default models in PyTorch are used, with the input size set to 1 to accommodate the timestamps. Considering the hardware limitations, only single-layer RNNs are considered. The hidden sizes range from 8 to 64. Since the timestamps are processed in real time and are not stored, bidirectional RNNs cannot be used. An FCNN with one hidden layer takes the hidden state as input to predict the lifetime.
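A minimal PyTorch sketch of this architecture, assuming a GRU core, is given below; the class name, hidden size, and head width are illustrative rather than the deployed configuration. The model emits a lifetime estimate at every timestep, which the weighted loss in the next subsection consumes.

```python
import torch
import torch.nn as nn

class LifetimeRNN(nn.Module):
    """Single-layer GRU over raw timestamps with an FCNN head (sketch)."""
    def __init__(self, hidden_size=16):
        super().__init__()
        self.rnn = nn.GRU(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Sequential(nn.Linear(hidden_size, hidden_size), nn.ReLU(),
                                  nn.Linear(hidden_size, 1))

    def forward(self, t):                    # t: (batch, n_timestamps)
        out, _ = self.rnn(t.unsqueeze(-1))   # one timestamp per RNN step
        return self.head(out).squeeze(-1)    # lifetime estimate at every step

model = LifetimeRNN()
tau_hat = model(torch.rand(32, 1024) * 50.0)  # (32, 1024) per-step predictions
```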
### Training
Normally, the loss function for RNNs is built on the output of the last timestep or the average output of all timesteps. In fluorescence lifetime estimation, the performance of estimators is expected to improve as more photons arrive. Under this principle, we design a weighted mean square percentage error (MSPE) loss, assigning more importance to later timesteps:
\[L(\mathbf{y},\mathbf{\hat{y}})=\sum_{i=1}^{N}w_{i}\left(\frac{y_{i}-\hat{y}_{ i}}{y_{i}}\right)^{2}, \tag{7}\]
where \(N\) is the number of timesteps, \(\mathbf{y}\) is the ground truth, \(\mathbf{\hat{y}}\) the prediction, and \(w_{i}\) the weight at timestep \(i\):
\[w_{i}=\frac{1}{1+e^{-\left(\frac{i-N/4}{N/4}\right)}}. \tag{8}\]
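In PyTorch, the weighted MSPE of Eqs. (7)-(8) can be written compactly as below; this is a sketch assuming per-timestep predictions of shape (batch, N) and one scalar ground-truth lifetime per sample.

```python
import torch

def weighted_mspe(y, y_hat):
    """Weighted MSPE, Eqs. (7)-(8): later timesteps, which have seen more
    photons, receive larger weights. y: (batch,); y_hat: (batch, N)."""
    N = y_hat.shape[1]
    i = torch.arange(1, N + 1, dtype=y_hat.dtype, device=y_hat.device)
    w = torch.sigmoid((i - N / 4) / (N / 4))            # Eq. (8)
    rel_err = (y.unsqueeze(1) - y_hat) / y.unsqueeze(1)
    return (w * rel_err ** 2).sum(dim=1).mean()         # Eq. (7), batch-averaged

loss = weighted_mspe(torch.full((32,), 2.7), torch.rand(32, 1024) * 5.0)
```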
The weights for hidden states are initialized by an orthogonal matrix. All biases are initialized with 0s. For LSTM, the weights for cell states are initialized by Xavier initialization [41], and the bias for forget gates is initialized with 1s.
The dataset is randomly split into training, evaluation, and test set, with the ratio of sizes being 8:1:1. The batch size is 32. Adam optimizer is used with an initial learning rate of 0.001[42]. The learning rate decays every 5 epochs at the rate of 0.9. The whole training process takes 100 epochs.
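A sketch of this initialisation and training configuration for the GRU case follows; the hidden size is illustrative, the input-weight initialisation is left at the PyTorch default since it is not specified above, and the scheduler is assumed to be stepped once per epoch.

```python
import torch
from torch import nn, optim

rnn = nn.GRU(input_size=1, hidden_size=16, batch_first=True)
for name, p in rnn.named_parameters():
    if "weight_hh" in name:
        nn.init.orthogonal_(p)      # hidden-state weights: orthogonal
    elif "bias" in name:
        nn.init.zeros_(p)           # all biases start at zero
# For LSTM, the cell-state weights would be Xavier-initialised and the
# forget-gate bias slice set to 1, as described above.

opt = optim.Adam(rnn.parameters(), lr=1e-3)                     # initial lr 0.001
sched = optim.lr_scheduler.StepLR(opt, step_size=5, gamma=0.9)  # decay every 5 epochs
```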
### Evaluation
Three metrics are used to evaluate the performance of RNNs and benchmarks on synthetic data, which are:
\[\mathrm{RMSE}=\sqrt{\frac{\sum_{i=1}^{N}\left(y_{i}-\hat{y}_{i}\right)^{2}}{N}}, \tag{9}\]
\[\mathrm{MAE}=\frac{\sum_{i=1}^{N}\left|y_{i}-\hat{y}_{i}\right|}{N}, \tag{10}\]
\[\mathrm{MAPE}=\frac{\sum_{i=1}^{N}\left|\frac{y_{i}-\hat{y}_{i}}{y_{i}}\right|}{N}. \tag{11}\]
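These metrics map directly to NumPy; a small self-contained sketch:

```python
import numpy as np

def metrics(y, y_hat):
    """RMSE, MAE, and MAPE of Eqs. (9)-(11)."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    rmse = np.sqrt(np.mean((y - y_hat) ** 2))
    mae = np.mean(np.abs(y - y_hat))
    mape = np.mean(np.abs((y - y_hat) / y))
    return rmse, mae, mape

print(metrics([1.7, 2.7, 5.5], [1.8, 2.6, 5.0]))
```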
#### Cramer-Rao Lower Bound
The Cramer-Rao lower bound (CRLB) gives the best precision that can be achieved in the estimation of fluorescence lifetime[43, 44, 36]. Mathematically, the CRLB expresses a lower bound on the variance of unbiased estimators, proportional to the inverse of the Fisher information:

\[Var(\hat{\theta})\geq\frac{1}{\mathcal{J}(\theta)}, \tag{12}\]
where \(f(x;\theta)\) is the PDF and \(\mathcal{J}\) is the Fisher information, which is defined as:
\[\mathcal{J}(\theta)=nE_{\theta}\left[\left(\frac{\partial}{\partial\theta} \ln f(x;\theta)\right)^{2}\right]. \tag{13}\]
The CRLB is calculated with open-source software[36].
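As a rough numerical illustration of Eqs. (12)-(13), the sketch below estimates the CRLB for a mono-exponential decay truncated to the repetition period, using Monte-Carlo samples and a finite-difference score; background noise and the IRF are ignored here, so this is a simplification of the bound computed by the open-source tool.

```python
import numpy as np

def crlb_tau(tau, T, n_photons, n_mc=200_000, eps=1e-4, rng=None):
    """Monte-Carlo CRLB for tau of a truncated mono-exponential decay."""
    rng = np.random.default_rng(0) if rng is None else rng
    u = rng.uniform(0.0, 1.0, n_mc)                       # inverse-CDF sampling
    t = -tau * np.log(1.0 - u * (1.0 - np.exp(-T / tau)))

    def logf(t, tau):                                     # log-PDF on [0, T]
        return -np.log(tau) - t / tau - np.log1p(-np.exp(-T / tau))

    score = (logf(t, tau + eps) - logf(t, tau - eps)) / (2.0 * eps)
    J = n_photons * np.mean(score ** 2)                   # Eq. (13)
    return 1.0 / J                                        # Eq. (12) bound

print(crlb_tau(tau=2.7, T=50.0, n_photons=1024))
```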
### FPGA Implementation
Quantization is an effective way to reduce resource utilization and latency on hardware. In common deep learning frameworks, such as PyTorch or Tensorflow, model weights and activations are represented by 32-bit floating point numbers. However, it would be inefficient to perform operations for floating point numbers with such bitwidth. We aim to quantize the 32-bit floating point numbers with fixed-point numbers and to reduce the bitwidth as much as possible, while maintaining the same model behavior.
Both PyTorch and TensorFlow provide quantization tools for edge devices, namely PyTorch Quantization and TensorFlow Lite. However, the quantized models rely on their own libraries to run, and the quantized weights cannot be exported. Therefore, we use Python and an open-source fixed-point number library to realize a quantized GRU for evaluation. We compare 8-bit, 16-bit, and 32-bit fixed-point numbers to quantize weights and activations separately. The results show that the weights can be quantized to 16-bit fixed-point numbers without a significant accuracy drop, and to 8-bit fixed-point numbers with an acceptable accuracy drop. Activations can be quantized to 16-bit fixed-point numbers without a significant accuracy drop, but 8-bit fixed-point quantization causes the model to collapse. Besides the fixed-point precision, we find that the rounding method has a great impact on performance. Truncation, often the default rounding method, introduces larger errors, whereas fixed-point numbers with convergent rounding behave almost identically to floating-point numbers.
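The sketch below illustrates such a fixed-point scheme with convergent (round-half-to-even) rounding, which NumPy's `round` implements; the Q-format (4 integer, 12 fractional bits) is an assumed example rather than the deployed one.

```python
import numpy as np

def to_fixed(x, int_bits=4, frac_bits=12):
    """Quantise to a signed fixed-point grid with convergent rounding,
    saturating (rather than wrapping) at the representable range."""
    scale = 2.0 ** frac_bits
    q = np.round(x * scale) / scale          # np.round is round-half-to-even
    lo = -(2.0 ** (int_bits - 1))
    hi = 2.0 ** (int_bits - 1) - 1.0 / scale
    return np.clip(q, lo, hi)

w = np.random.default_rng(0).standard_normal((64, 16))
print(np.max(np.abs(w - to_fixed(w))))       # worst-case quantisation error
```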
The quantized GRU model is then implemented on FPGA. For convenience, the GRU is written in C++ and compiled to Vivado IP with Vitis HLS. The whole model is divided into two parts: a GRU core and an FCNN. The GRU core is designed to be shared among a group of pixels, and the FCNN will be run sequentially for each pixel after integration. Upon receiving a timestamp, GRU core loads hidden states from block RAMs (BRAMs), updates the hidden states, and sends them back to BRAM. After the integration of each repetition period, the FCNN loads the hidden state from BRAM, and streams the estimated lifetime to a FIFO.
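The dataflow of one computation unit can be mimicked in a few lines of Python: a single GRU core services a stream of (timestamp, pixel) events, loading and writing back per-pixel hidden states held in a BRAM-like array. The weights and the gate formulation below are illustrative, not the synthesized design.

```python
import numpy as np

H, PIXELS = 16, 32 * 8                           # one unit serves 32x8 pixels
rng = np.random.default_rng(0)
Wz, Wr, Wh = (rng.standard_normal((H, 1 + H)) * 0.1 for _ in range(3))
bram = np.zeros((PIXELS, H))                     # per-pixel hidden states

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(h, t):                              # one common GRU formulation
    x = np.concatenate(([t], h))
    z = sigmoid(Wz @ x)                          # update gate
    r = sigmoid(Wr @ x)                          # reset gate
    h_cand = np.tanh(Wh @ np.concatenate(([t], r * h)))
    return (1.0 - z) * h + z * h_cand

for ts, pix in [(1.8, 5), (2.3, 5), (0.9, 40)]:  # (timestamp, SPAD id) stream
    bram[pix] = gru_step(bram[pix], ts)          # load, update, write back
```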
### Experimental Setup
A real-time FLI microscopy (FLIM) system with our SPAD sensor and on-FPGA RNN is built, as shown in Figure 4. The microscope is adapted from a single-channel Cerna® Confocal Microscope System, though it is only used for widefield imaging in this work. The same fluorescent bead samples are measured, hence a 480 nm pulsed laser (PicoQuant) is utilized. A set of filters is adopted for fluorescence imaging. The excitation filter (Thorlabs FITC Excitation Filter) has a central wavelength of 475 nm with a bandwidth of 35 nm. The emission filter is a long-pass filter (Thorlabs Ø25.0 mm Premium Longpass Filter) with a cut-on wavelength of 600 nm. The dichroic filter (Thorlabs GFP Dichroic Filter) has a reflection band from 452 to 490 nm and a transmission band from 505 nm to 800 nm.
The Piccolo system is used for single-photon detection and time tagging[38]. The complete system, along with its components and a micrograph of the Piccolo chip is shown in Figure 4. Piccolo provides 50-ps temporal resolution and 47.8% peak photon detection probability (PDP). Versions with microlenses are available as well, to improve the light
collection efficiency. The median dark count rate (DCR) is 113 cps (per pixel at room temperature). A Xilinx FPGA is used to communicate with the PC and control the sensor. To keep the system compact and reduce latency, the RNNs are deployed on the same FPGA.
The schematic of the FPGA design is shown in Figure 6. Four computation units are realized, each of which is in charge of a quarter of the sensor (32 by 8 pixels). The timestamps, sent to the FPGA in parallel, are serialized and distributed to the four computation units based on their SPAD IDs. Each computation unit is composed of one GRU core, one two-layer fully connected neural network (FCNN) core, and one BRAM. The computation speed is mainly limited by the latency of the GRU core, which is 1.05 \(\mu\)s when employing a 160 MHz clock. Photons that arrive while the computation units are busy are simply discarded. The four computation units together are capable of processing up to 4 million photons per second.
|
2306.16090 | Empirical Loss Landscape Analysis of Neural Network Activation Functions | Activation functions play a significant role in neural network design by
enabling non-linearity. The choice of activation function was previously shown
to influence the properties of the resulting loss landscape. Understanding the
relationship between activation functions and loss landscape properties is
important for neural architecture and training algorithm design. This study
empirically investigates neural network loss landscapes associated with
hyperbolic tangent, rectified linear unit, and exponential linear unit
activation functions. Rectified linear unit is shown to yield the most convex
loss landscape, and exponential linear unit is shown to yield the least flat
loss landscape and to exhibit superior generalisation performance. The
presence of wide and narrow valleys in the loss landscape is established for
all activation functions, and the narrow valleys are shown to correlate with
saturated neurons and implicitly regularised network configurations. | Anna Sergeevna Bosman, Andries Engelbrecht, Marde Helbig | 2023-06-28T10:46:14Z | http://arxiv.org/abs/2306.16090v1 | # Empirical Loss Landscape Analysis of Neural Network Activation Functions
###### Abstract.
Activation functions play a significant role in neural network design by enabling non-linearity. The choice of activation function was previously shown to influence the properties of the resulting loss landscape. Understanding the relationship between activation functions and loss landscape properties is important for neural architecture and training algorithm design. This study empirically investigates neural network loss landscapes associated with hyperbolic tangent, rectified linear unit, and exponential linear unit activation functions. Rectified linear unit is shown to yield the most convex loss landscape, and exponential linear unit is shown to yield the least flat loss landscape, and to exhibit superior generalisation performance. The presence of wide and narrow valleys in the loss landscape is established for all activation functions, and the narrow valleys are shown to correlate with saturated neurons and implicitly regularised network configurations.
neural networks, activation functions, loss landscape, fitness landscape analysis
## 2. Background
This study is concerned with the relationship between the activation functions employed in a NN architecture and the resulting NN loss landscape properties. To that end, the following topics are covered: Section 2.1 discusses the activation functions and their relevance. Section 2.2 provides a literature review of the existing loss landscape studies in the context of activation functions. Section 2.3 discusses the fitness landscape analysis techniques employed in this study.
### Activation functions
An activation function is applied to the _net_ input signal received by a neuron in order to decide whether the neuron should fire or not. Non-linear activation functions introduce non-linearity to NNs, and thus enable the universal function approximation properties (Kraus et al., 2017).
Activation functions fall into two categories: bounded and unbounded. One of the earliest smooth bounded activation functions is the sigmoid (S-shaped) function.
#### 2.3.1. Progressive gradient walks

In a progressive gradient walk (PGW), the direction of each step is determined by using the negative gradient of the loss function. For a step \(\vec{x}_{l}\), the gradient vector \(\vec{g}_{l}\) is calculated. Then, \(\vec{g}_{l}\) is used to generate a binary mask \(\vec{b}_{l}\):
\[b_{lj}=\begin{cases}0&\text{if }g_{lj}>0,\\ 1&\text{otherwise},\end{cases}\]
where \(j\in\{1,\dots,m\}\) for \(\vec{g}_{l}\in\mathbb{R}^{m}\). The next step \(\vec{x}_{l+1}\) is obtained using the progressive random walk (PRW) (Kang et al., 2017). A single step of PRW constitutes randomly generating a step vector \(\Delta\vec{x}_{l}\in\mathbb{R}^{m}\), such that \(\Delta x_{lj}\in[0,\epsilon]\ \forall j\in\{1,\dots,m\}\), and determining the sign of each \(\Delta x_{lj}\) using the corresponding \(b_{lj}\) as
\[\Delta x_{lj}:=\begin{cases}-\Delta x_{lj}&\text{if }b_{lj}=0,\\ \Delta x_{lj}&\text{otherwise}.\end{cases}\]
To summarise, PGW randomises the magnitude of \(\Delta\vec{x}_{l}\) per dimension, and sets the direction according to \(\vec{g}_{l}\). Therefore, gradient information is combined with stochasticity, generating \(X^{\prime}\) that explores the areas of low error.
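As an illustration, a minimal NumPy sketch of one PGW step is given below, with a toy quadratic loss standing in for the NN loss; the function names and the step-size value are illustrative only.

```python
import numpy as np

def pgw_step(x, grad, max_step, rng):
    """One progressive gradient walk step: per-dimension step magnitudes are
    drawn uniformly from [0, max_step]; signs follow the binary mask, i.e.
    each dimension moves against the sign of its gradient component."""
    b = (grad <= 0).astype(float)            # b = 0 where gradient is positive
    delta = rng.uniform(0.0, max_step, size=x.shape)
    return x + np.where(b == 0.0, -delta, delta)

rng = np.random.default_rng(0)
grad_f = lambda x: 2.0 * x                   # gradient of a toy quadratic loss
x = rng.uniform(-1.0, 1.0, size=9)           # e.g. the XOR network has 9 weights
for _ in range(1000):                        # micro walk: 1000 steps
    x = pgw_step(x, grad_f(x), max_step=0.02, rng=rng)
```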
#### 2.3.2. Loss-gradient clouds
Loss-gradient clouds (LGCs) were first introduced in (Kang et al., 2017) as a visualisation tool for the purpose of empirically establishing the presence and characteristics of optima in NN error landscapes. To construct LGCs, the weight space of a NN is sampled. Once an appropriate number of samples has been obtained, LGC is generated by constructing a scatter plot with loss (i.e. error) values on the \(x\)-axis, and gradient magnitude (i.e. norm) on the \(y\)-axis. Points of zero error and zero gradient correspond to global minima. Points of non-zero error and zero gradient correspond to local minima or saddle points, i.e. suboptimal critical points. The local convexity of the points can be further identified using Hessian matrix analysis: positive eigenvalues of the Hessian indicate convex local minima, and a mixture of positive and negative eigenvalues indicate saddle points.
LGCs were successfully used to investigate the minima associated with different loss functions (Kang et al., 2017) as well as NN architectures (Kang et al., 2017). This study is a natural extension of (Kang et al., 2017; Kang et al., 2017), where the focus is shifted to the activation functions and their effect on the loss landscapes.
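A sketch of how a single LGC sample could be assembled is shown below; the Hessian-eigenvalue classification follows the description above, while the toy saddle surface, the function names, and the zero-eigenvalue tolerance are illustrative assumptions.

```python
import numpy as np

def curvature_class(eig, tol=1e-8):
    """Classify local curvature from Hessian eigenvalues (tol is an assumption)."""
    if np.any(np.abs(eig) < tol):
        return "singular"            # flat directions: non-contributing weights
    if np.all(eig > 0):
        return "convex"
    if np.all(eig < 0):
        return "concave"
    return "saddle"

def lgc_point(loss_fn, grad_fn, hess_fn, x):
    """One LGC sample: (error, gradient norm, curvature class) at point x."""
    eig = np.linalg.eigvalsh(hess_fn(x))
    return loss_fn(x), np.linalg.norm(grad_fn(x)), curvature_class(eig)

# Toy example on a 2-D saddle surface f(x) = x0^2 - x1^2
f = lambda x: x[0] ** 2 - x[1] ** 2
g = lambda x: np.array([2.0 * x[0], -2.0 * x[1]])
h = lambda x: np.diag([2.0, -2.0])
print(lgc_point(f, g, h, np.array([0.5, 0.3])))   # -> (..., ..., 'saddle')
```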
## 3. Experimental Procedure
This study aimed to empirically investigate the modality, i.e. local minima and their properties, associated with different activation functions in the hidden layer of a NN. Feed-forward NNs with a single hidden layer and the cross-entropy loss function were used in the experiments. As previously discussed in Section 2.1, TanH, ReLU, and ELU were considered for the hidden layer. For the output layer neurons, the sigmoid activation function was used. Section 3.1 below lists the benchmark problems used, and Section 3.2 outlines the sampling parameters.
### Benchmark problems
Four real world classification problems of varied dimensionality and complexity were used in the experiments. Table 1 summarises the NN architecture parameters and corresponding NN dimensionality per dataset. Sources of each dataset and/or NN architectures are also specified. The following datasets were considered in this study:
1. **XOR:** Exclusive-or (XOR) refers to the XOR logic gate, modelled by a NN with two hidden neurons. The dataset consists of 4 binary patterns only, but is not linearly separable.
2. **Iris:** The Iris data set (Kang et al., 2017) is a collection of 50 samples per three species of irises: _Setosa_, _Versicolor_, and _Virginica_, comprising 150 patterns.
3. **Heart:** This is a binary classification problem comprised of 920 samples, where each sample is a collection of various patient readings pertaining to heart disease prediction (Kang et al., 2017).
4. **MNIST:** The MNIST dataset (Kang et al., 2017) comprises 70000 examples of handwritten digits from 0 to 9. Each digit is stored as a \(28\times 28\) grey scale image. The 2D input is flattened into a 1D vector for the purpose of this study.
For all problems except XOR, the inputs were \(z\)-score standardised. Binary classification problems used binary output encoding, while multinomial problems were one-hot encoded. All code used is available at https://github.com/annabosman/fla-in-tf.
### Sampling parameters
PGW was used to sample the areas of low error. To ensure adequate solution space coverage, the number of independent PGWs was set to \(10\times m\), where \(m\) is the dimensionality of the problem. Since gradient-based methods are sensitive to the starting point (Kang et al., 2017), two initialisation ranges for PGW were considered: \([-1,1]\) and \([-10,10]\). Malan and Engelbrecht (2018) observed that the step size of the walk influences the resulting FLA metrics. As such, two step size settings were adopted in this study: maximum step size = 1% of the initialisation range (micro), and maximum step size = 10% of the initialisation range (macro). Micro walks were executed for 1000 steps, and macro walks were executed for 100 steps.
For all problems with the exception of XOR, the 80/20 training/testing split was used. The gradient (\(G_{t}\)) and the error (\(E_{t}\)) of the current PGW point was calculated using the training set. The generalisation error (\(E_{g}\)) was calculated on the test set for each point on the walk. For MNIST, a batch size of 100 was used. For the remainder of the problems, full batch (entire train/test subset) was used. To identify minima discovered by PGWs, the \(G_{t}\) magnitude and \(E_{t}\) value were recorded per step. Eigenvalues of the Hessian were calculated for all problems, except MNIST (due to computational constraints), to determine if a PGW point is convex, concave, saddle, or singular.
## 4. Empirical Study of Modality
Empirical results of the study are presented in this section. Sections 4.1 to 4.4 provide per-problem analysis of the three hidden neuron activation functions.
| Problem | # Input | # Hidden | # Output | Dimensionality | Source |
| --- | --- | --- | --- | --- | --- |
| XOR | 2 | 2 | 1 | 9 | (Kang et al., 2017) |
| Iris | 4 | 4 | 3 | 35 | (Kang et al., 2017) |
| Heart | 32 | 10 | 1 | 341 | (Kang et al., 2017) |
| MNIST | 784 | 10 | 10 | 7960 | (Kang et al., 2017) |

Table 1. Benchmark Problems and the NN Architectures
### XOR
Fig. 1 and 2 show the LGCs for the XOR problem under the micro and macro settings. The sampled points are separated into panes according to curvature. Fig. 1 shows that the activation functions have all yielded four stationary attractors (points of zero gradient), three of which constituted local minima. However, the loss landscape characteristics around the local minima varied for the three activation functions.
TanH exhibited a landscape with a clear transition from the saddle to the convex curvature, i.e. saddle and convex curvatures did not overlap. PGWs generally descended to a stationary point in the saddle space, and then made the transition to one of the convex minima. The three convex minima discovered were disconnected, i.e. the walks did not make transitions between the convex minima.
For ReLU (Fig. 1b) there was no clear separation between convex, saddle, and indefinite curvatures in the different areas sampled. Even though two convex local minima were discovered, these minima were evidently less attractive to PGWs than the global minimum. This observation indicates that a gradient-based algorithm is less likely to become trapped in a local minimum. The reason for this is probably the presence of saddle and indefinite (flat) points around the local minima, providing pathways through which a gradient-based algorithm can escape. Additionally, vertical clusters are observed around the local minima, indicating that diverse gradient information was available in the neighbourhood of the local minima. The predominance of indefinite curvature is explained by the fact that the ReLU function outputs zero for all negative inputs, which inevitably causes flatness.
Fig. 2b shows that the loss landscape associated with ReLU was rugged and inconsistent when observed at a larger scale. Fewer walks have discovered convex global minima. Most points exhibited flatness. Thus, although ReLU offers a higher chance of escaping local minima, the searchability of the loss landscape suffers, and becomes more dependent on the chosen step size.
ELU (see Fig. 2c) exhibited more evident structure than ReLU, and stronger gradients than TanH. Compared to TanH and ReLU, ELU did not exhibit any flatness, i.e. indefinite Hessians, which indicates that the loss landscape associated with ELU was more searchable.
Fig. 3 shows some of the LGCs obtained for the \([-10,10]\) PGWs. The \(x\)-axis is shown in square-root scale for readability. The same four stationary attractors were sampled for TanH, with two convex local minima. ReLU and ELU both yielded much higher errors than TanH, although convergence to the global minimum still took place. ReLU sampled the most flat curvature points, and ELU sampled the most saddle points. Both ReLU and ELU also exhibited two clusters: points of low error and high gradient, and points of low gradient and high error. These are attributed to the narrow and wide valleys present in the landscape, which could not be observed on the smaller scale. Out of the three activation functions, ELU exhibited the least number of indefinite curvature points. Thus, ELU yielded the most searchable loss landscape for the XOR problem.
### Iris
Fig. 4 shows the LGCs obtained for the Iris problem sampled with the \([-1,1]\) micro walks. The LGCs for the three activation functions exhibited a similar shape, but different curvature properties. Only one major attractor around the global minimum was discovered.
Fig. 4a shows that TanH was dominated by the saddle curvature. The sampled points were split into two clusters around the global minimum, namely points of higher error and lower gradient, and points of higher gradient and low error. The points of high gradient and low error overlapped with the points of indefinite curvature, indicating that this cluster exhibited flatness. Note that the flatness, i.e. lack of curvature in a particular dimension, indicates that the corresponding weight did not contribute to the final prediction of the NN. For example, if a neuron is saturated, i.e. the neuron always outputs a value close to the asymptote, then the contribution of a single weight may become negligible. Therefore, indefinite curvature is likely associated with saturated neurons. However, neuron saturation is not the only explanation for non-contributing weights: if certain weights are unnecessary for the minimal solution to the problem at hand, then techniques such as regularisation can be employed to reduce the unnecessary weights to zero. Therefore, solutions with non-contributing weights can also be associated with regularised models. Smaller (regularised) architectures are "embedded" in the weight space of larger NN architectures, and can therefore be discovered by optimisation algorithms by setting the unnecessary weights to zero [24]. A hypothesis is made that flat areas surrounding the global attraction basin correspond to the non-contributing weights, indicative of saturation. Some of the solutions with saturated neurons correspond to the minima that require fewer weights than available in the architecture. This hypothesis has the implication that the steep gradient attractor associated with indefinite curvature contains points of both poor and good performance, corresponding to unwanted saturation and implicit regularisation, respectively.

Figure 4. LGCs for the micro PGWs for Iris.
For ReLU, the prevalence of convex curvature is evident in Fig. 4b. Again, ReLU exhibited more flatness than the other two activation functions. Such behaviour is attributed to the hard saturation of ReLU, which can easily yield non-contributing weights. ELU (see Fig. 4c) was dominated by the saddle curvature. The least amount of flatness was discovered for ELU, making ELU a good choice for algorithms that rely on the gradient information.
Fig. 5 shows the LGCs obtained for the \([-1,1]\) macro walks. Larger step sizes revealed similar curvature tendencies for the three activation functions, with TanH exhibiting convexity at the global minimum, but being dominated by saddle points otherwise, ReLU exhibiting strong convexity, and ELU exhibiting no convexity and almost no flatness. All activation functions yielded a split into high error, low gradient, and high gradient, low error clusters. The high gradient, low error clusters were associated with indefinite Hessians for the three activation functions. This behaviour is again attributed to the embedded minima that require fewer weights, as well as hidden unit saturation. Larger steps are likely to arrive at larger weights, thus increasing the chances of saturation.
To test the saturation hypothesis, the degree of saturation was measured for TanH and ReLU. For TanH, the \(\zeta_{h}\) saturation measure proposed in [30] for bounded activation functions was used. The value of \(\zeta_{h}\) is in the \([0,1]\) continuous range, where \(0\) corresponds to a normal distribution of hidden neuron activations, \(0.5\) corresponds to a uniform distribution of hidden neuron activations, and \(1\) corresponds to a saturated distribution of hidden neuron activations, where most activations lie on the asymptotic ends. For ReLU, the proportion of zero activations across all hidden neurons was used as an estimate of saturation. Fig. 6 shows the box plots generated for TanH and ReLU for a few selected scenarios, showing that singular curvature (indefinite Hessians) was indeed associated with higher saturation under various scenarios, especially under the \([-1,1]\) macro setting, which yielded a steep gradient cluster of indefinite points.
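Both saturation estimates are straightforward to compute from a matrix of hidden-neuron activations. The sketch below gives the ReLU zero-activation fraction used here, together with a crude asymptote-mass proxy standing in for the \(\zeta_{h}\) measure of [30], whose exact definition is not reproduced; the 0.9 threshold is an assumption.

```python
import numpy as np

def relu_saturation(acts):
    """Proportion of zero hidden activations (samples x hidden units),
    used as the ReLU saturation estimate."""
    return float(np.mean(np.isclose(acts, 0.0)))

def tanh_asymptote_mass(acts, threshold=0.9):
    """Crude proxy for TanH saturation: mass of activations near the
    asymptotes (not the zeta_h measure of [30])."""
    return float(np.mean(np.abs(acts) > threshold))

acts = np.tanh(np.random.default_rng(0).normal(0.0, 3.0, size=(150, 4)))
print(tanh_asymptote_mass(acts))   # large weights push activations to +/-1
```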
Fig. 7 shows the LGCs for the \([-10,10]\) walks under the macro settings. The \(x\)-axis is shown in square-root scaling for readability. According to Fig. 7a, TanH exhibited very few convex points. Large step sizes prevented the walks from converging to a convex basin, indicating that the width of the convex basin must have been smaller than \(2\) (maximum step size calculated as \(10\%\) of the \([-10,10]\) initialisation range). ReLU exhibited the most convexity out of the three activation functions, and the most flatness. Thus, ReLU yielded a wider convex attraction basin than TanH, and was also more likely to saturate. ELU exhibited little to no convexity, and flatness only along the steeper cluster, associated with embedded minima and/or saturation. The split into two clusters was still evident for all activation functions. In general, with the increase in step size, the steep cluster became heavier than the shallow cluster. Thus, the steep cluster is confirmed to be associated with large weights, which cause saturation.

Figure 5. LGCs for the macro PGWs for Iris.

Figure 6. Box plots illustrating the degree of saturation associated with the different curvatures for Iris.
To further study the generalisation behaviour of the three activation functions, Fig. 8 shows the LGCs for the micro walks colourised according to their \(E_{g}\) values. For the \([-1,1]\) setting, ELU exhibited the best generalisation behaviour. For the \([-10,10]\) setting, the low error, high gradient cluster generalised better than the high error, low gradient cluster for both ReLU and ELU. If the cluster of steep gradients corresponds to the embedded minima comprised of fewer contributing weights, then the steeper gradient solutions can be considered regularised, which explains the better generalisation performance.
### Heart
For the heart problem, the split into two clusters was once again evident for all activation functions (figures not shown for brevity). Fig. 9 illustrates that the indefinite points corresponded to points of higher saturation for both TanH and ReLU. Further, Fig. 10 shows the LGCs for TanH and ReLU, colourised according to the estimated degree of saturation. For both activation functions, the steeper gradient cluster was clearly associated with the saturated neurons.
Fig. 11 shows the LGCs colourised according to \(E_{g}\) values. For the \([-1,1]\) walks, all activation functions yielded deteriorating \(E_{g}\) values as \(E_{t}\) approached zero. However, the band of good solutions was noticeably wider for ReLU and ELU as compared to TanH, indicating that it was easier to find a good quality solution on the ReLU and ELU loss landscapes.

Figure 7. LGCs for the macro PGWs for Iris.

Figure 8. LGCs colourised according to the \(E_{g}\) values for Iris, micro PGWs, \(E_{t}<0.05\).

Figure 9. Box plots illustrating the degree of saturation associated with the different curvatures for Heart, \([-10,10]\).

Figure 10. LGCs for the macro PGWs initialised in the \([-10,10]\) range for Heart, colourised according to saturation.

Figure 11. LGCs colourised according to \(E_{g}\) values for Heart, micro PGWs, \([-1,1]\) initialisation, \(E_{t}<0.05\).
### MNIST
Due to the high dimensionality of MNIST, Hessians were not calculated for this experiment. Fig. 12 and 13 show the LGCs colourised according to \(E_{g}\) values. Convergence to a single global attractor is evident for all activation functions. Out of the activation functions considered, ELU once again yielded the most consistent generalisation performance.
Fig. 12 shows for the \([-1,1]\) interval that ReLU and ELU yielded much higher gradients than TanH. The two-cluster split was evident for all activation functions. For ReLU and ELU, the steep gradient cluster generally yielded poorer generalisation performance than the shallow gradient cluster. This behaviour is especially evident for ReLU under the macro setting (see Fig. 12e). The poor generalisation performance of high gradient points is attributed to neuron saturation.
Fig. 13 shows for the \([-10,10]\) range that TanH yielded an LGC without a well-formed structure. Low gradients and a noisy LGC indicate that the TanH error landscape was not very searchable under the \([-10,10]\) setting. ReLU and ELU also deteriorated with the increased step size and initialisation range, but not as drastically as TanH. The relative resilience of ReLU and ELU to the step size is attributed to the high gradients generated by these unbounded activation functions. ReLU and ELU evidently yielded a more searchable landscape for the high-dimensional MNIST problem, which correlates with the current deep learning insights (Bosman et al., 2017).
The split into two clusters became more pronounced for ReLU and ELU under the \([-10,10]\) setting, confirming the presence of narrow and wide valleys in the landscape. For ELU, points of good \(E_{g}\) values were found in the steep cluster, likely due to the embedded minima discovered. ReLU yielded higher \(E_{t}\) and \(E_{g}\) values than ELU, confirming that the ELU error landscape was more resilient to overfitting.
## 5. Conclusions
This paper empirically analysed the effect of using three different activation functions on the resulting NN error landscapes: TanH, ReLU, and ELU, used in the hidden layer. All experiments were conducted under four granularity settings, with different step sizes and initialisation ranges.
The choice of activation function did not have an effect on the total number of unique attractors (local minima) in the search space, but affected the properties of the discovered basins of attraction. ReLU and ELU yielded steeper attraction basins with stronger gradients than TanH. ReLU exhibited the most convexity, and ELU exhibited the least flatness. The stationary points exhibited by ReLU and ELU were generally more connected than the ones exhibited by TanH, indicating that ReLU and ELU yield more searchable landscapes. However, ReLU and ELU exhibited stronger sensitivity to the step size and the initialisation range than TanH.
All activation functions yielded a split into high error, low gradient, and high gradient, low error clusters. The high gradient cluster was associated with indefinite (i.e. flat) curvature, caused by inactive, or non-contributing weights. Sample points in this cluster were often associated with poor generalisation. Thus, high (or steep) gradients were attributed to narrow valleys, associated with saturated neurons. In individual cases, points of high generalisation performance were discovered in the steep gradient clusters. These were attributed to the embedded self-regularised minima.
Out of the three activation functions considered, ELU consistently exhibited superior generalisation performance. Thus, the loss landscape yielded by the ELU activation function was the most resilient to overfitting.
In the future, a more in-depth study of the self-regularised solutions will be conducted. If their unique loss landscape properties can be determined, algorithms may be developed that converge to areas with good generalisation properties that contain smaller NN architectures.
###### Acknowledgements.
This research was supported by the National Research Foundation (South Africa) Thuthuka Grant Number 13819413. The authors acknowledge the Centre for High Performance Computing (CHPC), South Africa, for providing computational resources to this research project.
Figure 12. LGCs for PGWs initialised in the \([-1,1]\) range for MNIST.
Figure 13. LGCs for PGWs initialised in the \([-10,10]\) range for MNIST. |
2302.02506 | Generating Dispatching Rules for the Interrupting Swap-Allowed Blocking
Job Shop Problem Using Graph Neural Network and Reinforcement Learning | The interrupting swap-allowed blocking job shop problem (ISBJSSP) is a
complex scheduling problem that is able to model many manufacturing planning
and logistics applications realistically by addressing both the lack of storage
capacity and unforeseen production interruptions. Subjected to random
disruptions due to machine malfunction or maintenance, industry production
settings often choose to adopt dispatching rules to enable adaptive, real-time
re-scheduling, rather than traditional methods that require costly
re-computation on the new configuration every time the problem condition
changes dynamically. To generate dispatching rules for the ISBJSSP problem, we
introduce a dynamic disjunctive graph formulation characterized by nodes and
edges subjected to continuous deletions and additions. This formulation enables
the training of an adaptive scheduler utilizing graph neural networks and
reinforcement learning. Furthermore, a simulator is developed to simulate
interruption, swapping, and blocking in the ISBJSSP setting. Employing a set of
reported benchmark instances, we conduct a detailed experimental study on
ISBJSSP instances with a range of machine shutdown probabilities to show that
the scheduling policies generated can outperform or are at least as competitive
as existing dispatching rules with predetermined priority. This study shows
that the ISBJSSP, which requires real-time adaptive solutions, can be scheduled
efficiently with the proposed method when production interruptions occur with
random machine shutdowns. | Vivian W. H. Wong, Sang Hun Kim, Junyoung Park, Jinkyoo Park, Kincho H. Law | 2023-02-05T23:35:21Z | http://arxiv.org/abs/2302.02506v2 | Generating Dispatching Rules for the Interrupting Swap-Alllowed Blocking Job Shop Problem Using Graph Neural Network and Reinforcement Learning
###### Abstract
The interrupting swap-allowed blocking job shop problem (ISBJSSP) is a complex scheduling problem that is able to model many manufacturing planning and logistics applications realistically by addressing both the lack of storage capacity and unforeseen production interruptions. Subjected to random disruptions due to machine malfunction or maintenance, industry production settings often choose to adopt dispatching rules to enable adaptive, real-time re-scheduling, rather than traditional methods that require costly re-computation on the new configuration every time the problem condition changes dynamically. To generate dispatching rules for the ISBJSSP problem, a method that uses graph neural networks and reinforcement learning is proposed. ISBJSSP is formulated as a Markov decision process. Using proximal policy optimization, an optimal scheduling policy is learnt from randomly generated instances. Employing a set of reported benchmark instances, we conduct a detailed experimental study on ISBJSSP instances with a range of machine shutdown probabilities to show that the scheduling policies generated can outperform or are at least as competitive as existing dispatching rules with predetermined priority. This study shows that the ISBJSSP, which requires real-time adaptive solutions, can be scheduled efficiently with the proposed machine learning method when production interruptions occur with random machine shutdowns.
Smart Manufacturing, Job Shop Problems, Priority Dispatching Rule, Machine Learning, Reinforcement Learning, Graph Neural Networks
## 1 Introduction
Effective scheduling strategies to various production scheduling problems have been widely studied in academia and industry with the goal to streamline manufacturing systems and hence to improve production efficiency. For example, the classical job shop scheduling problem (JSSP), which aims to find optimal assignment of jobs composed of operations given some prescribed machine sharing and precedence constraints, is an NP-hard combinatorial optimization problem that finds many practical applications. Many manufacturing systems in the real settings, however, have more constraints to consider than the capacity of machines. For example, many components of vehicles and machines are often expensive items that are huge in size. It is therefore not desirable to have to invest in the storage of intermediate components and products [1]. The lack of storage capacity is therefore a constraint in this case. Furthermore, unforeseen interruptions to production, such as machine shutdowns, could occur that changes the list of available machines. To model modern manufacturing systems more realistically by considering both the _lack of storage capacities_ and _production interruptions_, this work studies a new class of job scheduling problem, the interrupting swap-allowed blocking job shop scheduling problem (ISBJSSP). Many methods, such as mathematical optimization [2], branch-and-bound search [3] and meta-heuristic algorithms [4, 5], have been developed to generate optimum or near-optimum solutions to the JSSP problems. However, these solutions are not adaptive, requiring a completely new execution when encountering a new scenario or a new configuration. These non-adaptive solutions are therefore not suitable for the ISBJSSP setting, where the problem condition constantly changes, for example, due to machine
interruptions. To cope with potential dynamic changes, priority dispatching rules (PDRs), which are simple rule-based heuristics, are the most common approach used in modern manufacturing systems, as they can be applied instantaneously to an unknown instance. PDRs, first-in-first-out (FIFO) being an example, simply load jobs based on some predetermined priority [6]. Although PDRs are widely used in real-world situations due to their simplicity and speed, their performance varies widely depending on the problem condition. For example, shortest processing time (SPT) is a common benchmarking PDR that performs well in heavily loaded or congested job shop problem instances, but fails with low load levels [7]. These simple rules, although they can deal with dynamic changes, have poor generalizability and need to be manually selected or combined based on the job shop condition. Furthermore, with the random interruptions in the ISBJSSP formulation, where problem conditions change often, it is not clear a priori whether any of the PDRs can be effective on an ISBJSSP problem.
To improve the generalizability of dispatching rules, researchers have started to leverage artificial intelligence (AI) methods to solve job shop scheduling problems. To consider both the ability to adapt and to generalize, methods that are based on reinforcement learning (RL) are receiving increasing attention in the research community on planning problems. Much like PDRs, these methods output sequential decisions according to a dispatching policy. The difference is that rather than using predetermined priority rules, RL's dispatching policy is learned by observing and accumulating experience from previous simulations.
RL has been used to learn policies in various planning and scheduling problems. Traditional RL algorithms, such as Q-learning and its variants, are often used to learn dispatching rules for small-scale job scheduling problems with discrete state space [8, 9]. In contrast, large-scale job shop scheduling problems are relatively unexplored. For large-scale, continuous state space problems, it is necessary to consider deep RL methods that approximate the value function.
Sparked by increased availability in computational power in recent years, deep RL methods combining deep neural networks with RL have received much attention due to its powerful ability to generate solutions for continuous state space. Examples of deep RL applications to planning and scheduling problems include task scheduling in computing [10], robotic scheduling for manufacturing [11], and semiconductor manufacturing scheduling [12]. However, the problem formulation of deep RL for job shop problems varies widely, for even the classical JSSP. Liu et al. [13] used process time matrices to represent the state space and trained a deep RL model to select from a list of existing PDRs. Park et al. [14] and Zhang et al. [15] use disjunctive graph and graph representation learning to obtain a vectorized state space, and directly learned new dispatching rules. For ISBJSSP, there lacks a formal Markov Decision Process formulation that enables the study of deep RL approach for this new class of job scheduling problem.
This paper introduces a GNN-RL scheduler, combining a graph neural network (GNN) with deep RL for the ISBJSSP, which considers dynamic interruptions to machine availability in job shop scheduling. The formulation of the ISBJSSP as a dynamic disjunctive graph and a Markov Decision Process (MDP) is formally presented. To generate training data sets and experimental test scenarios, we implement an ISBJSSP simulator, building upon a python-based JSSP simulator (pyjssp) previously developed by Park et al. [14]. The simulator is designed to simulate ISBJSSP instances with blocking constraints, swapping conditions, and machine shutdown interruptions to mimic realistic concerns in practical applications. Using the simulator, GNN-RL models are trained with randomly generated ISBJSSP instances. In this study, the performance of the trained models is evaluated on two sets of benchmark scenarios. The first test set is a benchmark with \(10\times 10\) instances that are commonly used in job shop scheduling studies [16]. To demonstrate the scalability and generalization of the GNN-RL models, the second test set includes job shop instances with varying sizes [17]. The experimental results show that GNN-RL schedulers can be used to schedule unknown ISBJSSP instances robustly and efficiently and can potentially be applied in a real manufacturing environment without shutting down the entire job shop when interruption occurs.
The paper is organized as follows: Section 2 introduces the ISBJSSP formulation and how the problem can be modeled as a disjunctive graph and a Markov Decision Process. Section 3 describes the methodology employed to learn ISBJSSP scheduling models. Section 4 describes the experimental results obtained with benchmark ISBJSSP instances of different sizes. Section 5 concludes this paper with a brief summary and discussion.
## 2 Problem Formulation
This section briefly introduces the background for the job shop problem and the various constraints that exist. The modeling of ISBJSSP as a disjunctive graph and a Markov Decision Process (MDP) is then described.
### Job Shop Scheduling
For the classical JSSP of size \(m\times n\), there exists a set of \(n\) jobs, \(O\colon\{O_{1},O_{2},...,O_{n}\}\), to be optimally allocated and executed on a set of \(m\) machines, \(M\colon\{M_{1},M_{2},...,M_{m}\}\). Each job has a series of tasks or operations that must be processed according to the problem's precedence and machine-sharing constraints. In this study, without loss of generality, we assume each job, \(O_{j}\colon\{o_{1j},o_{2j},...,o_{pj}\}\), has the same number \(p\) of operations. Each operation \(o_{ij}\) has a pre-defined processing time for completion. The precedence constraint implies that for all consecutive operations \(o_{ij}\) and \(o_{i+1,j}\) of job \(O_{j}\), \(o_{ij}\) must be completed before starting \(o_{i+1,j}\). Furthermore, the machine-sharing constraint indicates that each operation \(o_{ij}\) must be processed uninterrupted on a dedicated machine. Additionally, each operation of a job is assigned to a different machine, and each machine that is in the process of an operation is only freed when that operation finishes. Therefore, given the above constraints,
the number of machines equals the number of operations for each job, i.e., \(m=p\). The objective of the classical JSSP is to find a schedule that minimizes the makespan, the total time to finish all jobs [18].
The classical job shop problem assumes that there is sufficient storage or buffer space available to store each job in between consecutive operations. However, buffers are undesirable in practical applications. Therefore, many real manufacturing applications are better modeled as the Blocking JSSP (BJSSP) [19]. The BJSSP introduces the blocking constraint in that no buffers are available for storing a job as the job moves between machines; the job must wait on its current machine until it can be processed on the next machine. That is, for any job \(O_{j}\), its operation \(o_{ij}\) is a blocking operation until its succeeding operation, \(o_{i+1,j}\), starts.
In practical job shops without buffers, parts that are blocking idling machines can be swapped to avoid a deadlock situation. A deadlock situation occurs when no unprocessed operation can be processed, because each unprocessed operation is waiting for a blocked machine to become "unblocked" and available. Thus, the BJSSP often allows swapping to avoid deadlocks, referred to as the swap-allowed blocking job shop problem (SBJSSP) [16]. A swap can be done if there exists a set of blocking operations, each one waiting for a machine blocked by another operation in the set. A swap resolves the deadlock and ensures that the manufacturing process can proceed and that there exists at least one solution to a randomly generated SBJSSP instance.
### ISBJSSP
Although the SBJSSP models can be applied to many manufacturing production lines, there is an additional factor that exists in a real production line but is often overlooked - the possibility of production interruption, for example, caused by machine failures. While solution methods such as mathematical programming, branch and bound search, and meta-heuristic methods (such as tabu search, genetic algorithms, simulated annealing, etc.) can generate optimal solutions to static (uninterrupted) job shop scenarios, dynamic scenarios with real-time machine interruptions would require these methods to recompute a new solution for each scenario change. The possibility of such interruption also results in priority dispatching rules [20] being generally favored in practice, as dispatching rules can easily adapt to dynamic changes in the availability of machines in real time. Our work includes this additional constraint, where machine availability can be interrupted, in the formulation: at any given time step, an idling machine in \(M\) has a probability \(P_{interrupt}\) of being unavailable to process any job for a period of \(T_{interrupt}\) time steps. If a job's next operation is waiting on an unavailable machine, the job will block the machine used by the precedent operation due to the lack of buffer. When the shutdown machine becomes available again after \(T_{interrupt}\) time steps, the machine will then process one of the waiting jobs, determined by the job shop scheduler. We refer to the job shop problem with the interruption constraint as the interrupting swap-allowed blocking job shop problem (ISBJSSP).
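To make the interruption mechanism concrete, the following minimal Python sketch (with a hypothetical `Machine` class; the simulator's actual data structures may differ) shows how idle-machine shutdowns with probability \(P_{interrupt}\) and duration \(T_{interrupt}\) could be injected at each time step.

```python
import random

class Machine:
    """A machine that can be idle, busy, or shut down for a period of time."""
    def __init__(self, machine_id):
        self.machine_id = machine_id
        self.busy = False          # currently processing an operation
        self.down_until = -1       # time step when the machine becomes available again

    def is_idle(self, t):
        return not self.busy and t >= self.down_until

def inject_interruptions(machines, t, p_interrupt=0.05, t_interrupt=50, rng=random):
    """At time step t, each idle machine fails with probability p_interrupt
    and stays unavailable for t_interrupt time steps."""
    for m in machines:
        if m.is_idle(t) and rng.random() < p_interrupt:
            m.down_until = t + t_interrupt
```

Calling `inject_interruptions(machines, t)` once per simulated time step reproduces the constraint that only idling machines can be interrupted.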
### Dynamic Disjunctive Graph Formulation
The ISBJSSP can be represented by a dynamic disjunctive graph \(G=(V,C\cup D)\)[21]. Here, \(V\) is the set of nodes, each corresponding to an operation \(o_{ij}\) of job \(O_{j}\). \(C\) is the set of conjunctive edges, where each edge connects two consecutive operations \(o_{ij}\) and \(o_{i+1,j}\) of job \(O_{j}\). The conjunctive edges represent the set of processing order constraints. \(D\) denotes the set of disjunctive edges, which connect any two vertices whose corresponding operations need to be processed on the same machine. The disjunctive edges represent the machine-sharing constraints. Nodes and edges of the disjunctive graph, however, are subject to deletions and additions due to the machine availability constraint, making the disjunctive graph dynamic. More specifically, when a machine is shut down, the nodes (i.e., the operations) that need to be processed on the machine and their connected edges are temporarily removed from the graph, indicating that the machine is no longer observed at that time instance. When the machine becomes available after \(T_{interrupt}\) time steps, the previously removed nodes and edges are added back to the graph.
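As an illustration, a disjunctive graph of this kind could be assembled with networkx roughly as follows; the function and attribute names (`machine_of`, `kind`) are ours, not part of the cited formulation.

```python
import networkx as nx

def build_disjunctive_graph(machine_of, num_jobs, num_ops):
    """Build a disjunctive graph for a num_jobs x num_ops instance.
    machine_of[(i, j)] gives the machine of operation i of job j."""
    g = nx.DiGraph()
    for j in range(num_jobs):
        for i in range(num_ops):
            g.add_node((i, j), machine=machine_of[(i, j)])
    # Conjunctive edges: precedence within each job (directed).
    for j in range(num_jobs):
        for i in range(num_ops - 1):
            g.add_edge((i, j), (i + 1, j), kind="conjunctive")
    # Disjunctive edges: operations sharing a machine (both directions).
    nodes = list(g.nodes)
    for u in nodes:
        for v in nodes:
            if u != v and machine_of[u] == machine_of[v]:
                g.add_edge(u, v, kind="disjunctive")
    return g
```

When a machine shuts down, the affected operation nodes can be removed temporarily with `g.remove_nodes_from(...)` and re-inserted, with their edges, once the machine recovers, matching the dynamic behavior described above.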
Figure 1 shows a disjunctive graph of a small example instance. The instance contains three machines, on which three jobs, each with three operations, are to be processed. As an example, the first job contains the operations labeled with node numbers 0, 1, 2, which must be processed in this specified order due to the existence of precedence constraints. The precedence constraint is shown as directed edges in the disjunctive graph. Similarly, the second job must be processed in the order of 3, 4, 5, and the third job in the order of 6, 7, 8. The bi-directional disjunctive edges specify machine constraints. In our example, operations 1, 5, 6 need to be processed on a dedicated machine. Similarly, operations 0, 3, 8 share a machine, and operations 2, 4, 7 share a machine. At the time where this disjunctive graph was plotted as shown in Figure 1, operations 0, 1, 3, 6 are completed, and operation 7 is being processed. Furthermore, there is swap-allowed blocking to consider. For example, even when operation 7 is completed, it will block its machine,
Figure 1: Example of a disjunctive graph for an instance containing three jobs, each with three operations. Directed conjunctive edges represent precedence constraints. Bidirectional disjunctive edges represent machine constraints, where the nodes in a cycle are operations that require to be processed on the same machine. Nodes with dashed perimeters indicate completed operations. Nodes with solid perimeters indicate operations that have not been started. The double-outlined node indicates the operation currently being processed.
preventing operations 2 and 4 from being processed, until the part moves to the next machine to commence operation 8. At the current time step, all three machines are either blocked by an unstarted operation or busy processing an operation, and are therefore not idle. As defined earlier, the machine shutdown interruption with probability \(P_{interrupt}\) only occurs to idling machines.
To incorporate the time-dependent job shop information into the disjunctive graph, we assign a node feature vector \(x_{v}\) to each node \(v=o_{ij}\in V\). The node features are stacked vectors with the following components:
* Node status: a one-hot index vector of size 3, indicating whether the operation \(v\) is not yet started, being processed, or is completed.
* Processing time: the total time required to finish operation \(v\).
* Degree of completion: the ratio of the accumulated processing time of \(v\)'s job to the total processing time of \(v\)'s job (i.e., the job that contains the operation).
* Number of succeeding operations: the number of operations including both \(v\) and the operations after \(v\) in \(v\)'s job.
* Waiting time: the time for which \(v\) must wait for processing after it is ready to be processed.
* Remaining time: the remaining processing time needed to complete the operation \(v\) once it has started.
Since node features \(x_{v}\) are time-dependent, the resulting graph is now dynamic and will be denoted as \(G_{t}\) hereon to represent the disjunctive graph of an ISBJSSP instance at time \(t\).
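A minimal sketch of how the feature vector \(x_{v}\) listed above could be assembled is given below; the argument names are hypothetical bookkeeping fields of the simulator. Note that the resulting vector is 8-dimensional, consistent with the 8-dimensional MLP inputs reported in Section 4.2.

```python
import numpy as np

def node_features(status, proc_time, job_done_time, job_total_time,
                  n_succ, wait_time, remain_time):
    """Assemble the feature vector x_v for one operation node.
    status is one of 'not_started', 'processing', 'done'."""
    one_hot = {"not_started": [1, 0, 0],
               "processing":  [0, 1, 0],
               "done":        [0, 0, 1]}[status]
    return np.array(one_hot + [proc_time,
                               job_done_time / job_total_time,  # degree of completion
                               n_succ,                          # succeeding operations
                               wait_time,
                               remain_time], dtype=np.float32)
```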
### Markov Decision Process Formulation
The scheduling process of an ISBJSSP instance can be viewed as a sequential decision-making process. Specifically, the ISBJSSP can be formulated as an MDP, denoted as a \((S,A,P,R,\gamma)\) tuple, whose elements represent the set of Markov states \((S)\), the set of actions \((A)\), the transition model \((P)\), the reward function \((R)\) and the discount factor \((\gamma)\), respectively.
* State: A disjunctive graph \(G_{t}\) representing a snapshot of state \(s_{t}\in S\) of the ISBJSSP instance at time \(t\).
* Action: A scheduling action \(a_{t}\in A\) of loading an operation to an available machine at time \(t\).
* Transition model: The transition between states, which, in this study, is handled and generated by the job shop simulator.
* Reward function: A function defined to stipulate the behavior of an action. The reward function used in this study mimics the utilization of a machine and is defined as \[r_{t}=-n_{w_{t}}\] (1) where \(n_{w_{t}}\) is the number of jobs waiting at time \(t\).
* Discount factor: The factor \(\gamma\) that determines how much an action's future rewards are valued relative to its immediate reward.
## 3 Methodology
This section describes a machine learning approach for deriving a policy to solve the ISBJSSP. The method consists of two parts, graph neural network (GNN) and reinforcement learning (RL). Figure 2 depicts the overarching framework of the proposed GNN-RL approach.
Figure 2: Proposed GNN-RL framework
A disjunctive graph \(\ G_{t}\) (Figure 2(e)) is observed from the ISBJSSP simulator environment (Figure 2(d)) and is used as the input to a GNN model (Figure 2(f)) for representation learning. The learned embedded graph \(\ G_{t}^{\prime}\) (Figure 2(a)) is then used as the input to the RL algorithm, learning a parameterized policy, or a probability distribution of feasible actions, using an actor-critic model with proximal policy optimization (Figure 2(b,c)). Finally, an action to process a specific operation is sampled from the parameterized policy \(\pi(\cdot\ |\ G_{t})\) and executed via the ISBJSSP simulator (Figure 2(d)).
### Representation Learning with GNN
The process of obtaining an embedded graph using GNN can be thought of as learning an embedding vector for each node \(\nu\) that represents the necessary neighborhood structure and node feature information around the node. The embedded graph in Figure 2(a) can be learned from a GNN model. A GNN is a neural network that consists of layers of differentiable functions with learnable parameters and computes an embedding vector for each node in the graph. A GNN layer needs to be designed such that for each target node, the embedding of the target node is updated not only using the previous layer's target node embedding, but also node embedding aggregated from the neighboring nodes in order to represent the structure information of the disjunctive graph, \(\ G_{t}\), in the learned embeddings.
As shown in Figure 2(f), the computation process of a GNN layer implemented in this study can be separated into three steps. Firstly, a different multi-layer perceptron (MLP) network [22] with ReLU activation [23] is applied to each of the following three sets of nodes neighboring the target node \(\nu\): the set of all precedent nodes \(N_{p}(\nu)\) connected through the conjunctive (precedence constraint) edges, succeeding nodes \(N_{s}(\nu)\) connected also through the conjunctive edges and disjunctive nodes \(N_{d}(\nu)\) connected through the (bidirectional) disjunctive (machine-sharing constraint) edges. Secondly, the vector outputs of the three MLP networks, an aggregated representation of the overall graph, the node embedding updated in the previous layer, and the initial node feature of the target node are stacked in a vector, which, as the last step, is passed through another MLP network without activation. Mathematically, the operations of the \(k^{th}\) layer of a GNN can be written as:
\[h_{\nu}^{(k)}=f_{n}^{(k)}\Big(ReLU\Big(f_{p}^{(k)}\Big(\sum_{i\in N_{p}(\nu)}h_{i}^{(k-1)}\Big)\Big)\,||\,ReLU\Big(f_{s}^{(k)}\Big(\sum_{i\in N_{s}(\nu)}h_{i}^{(k-1)}\Big)\Big)\,||\,ReLU\Big(f_{d}^{(k)}\Big(\sum_{i\in N_{d}(\nu)}h_{i}^{(k-1)}\Big)\Big)\,||\,ReLU\Big(\sum_{i\in V}h_{i}^{(k-1)}\Big)\,||\,h_{\nu}^{(k-1)}\,||\,h_{\nu}^{(0)}\Big) \tag{2}\]
where \(ReLU(\cdot)=\max(0,\cdot)\) is a non-linear activation function. The MLP networks \(f_{p}\), \(f_{s}\), and \(f_{d}\) each compute a vector from the node embeddings in the corresponding neighborhood of \(\nu\) (i.e., \(N_{p}(\nu)\), \(N_{s}(\nu)\), and \(N_{d}(\nu)\), respectively), and \(f_{n}\) maps the concatenated vector to the updated embedding. \(V\) is the set of all nodes of the graph \(G_{t}=(V,C\cup D)\). \(h_{\nu}^{(0)}\) is the feature vector \(x_{\nu}\) of node \(\nu\). \(||\) is the vector concatenation operator.
After \(K\) GNN layers, we have computed an embedded graph \(\ G_{t}^{(K)}\), as shown in Figure 2(a), whose node features are now the updated embedding vectors \(h_{\nu}^{(K)}\ \forall\ \nu\in V\), from the input disjunctive graph \(\ G_{t}\) in Figure 2(e) with initial node features \(h_{\nu}^{(0)}\).
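For concreteness, one possible PyTorch realization of the GNN layer in Eq. (2), using dense 0/1 (float) adjacency matrices to sum each neighborhood, is sketched below; this is our reading of the equation, not the authors' released code.

```python
import torch
import torch.nn as nn

class GNNLayer(nn.Module):
    """One layer of Eq. (2): aggregate precedent, succeeding, and disjunctive
    neighbors plus a whole-graph summary, then mix everything with MLP f_n."""
    def __init__(self, dim=8, hidden=256):
        super().__init__()
        mlp = lambda d_in, d_out: nn.Sequential(
            nn.Linear(d_in, hidden), nn.ReLU(), nn.Linear(hidden, d_out))
        self.f_p = mlp(dim, dim)
        self.f_s = mlp(dim, dim)
        self.f_d = mlp(dim, dim)
        self.f_n = mlp(6 * dim, dim)   # 48-dimensional input, 8-dimensional output
        self.relu = nn.ReLU()

    def forward(self, h, h0, adj_p, adj_s, adj_d):
        # h: (n, dim) current embeddings; h0: (n, dim) initial features x_v.
        # adj_*: (n, n) float 0/1 matrices; row v selects v's neighbors of each type.
        agg_p = self.relu(self.f_p(adj_p @ h))          # precedent neighbors
        agg_s = self.relu(self.f_s(adj_s @ h))          # succeeding neighbors
        agg_d = self.relu(self.f_d(adj_d @ h))          # disjunctive neighbors
        g = self.relu(h.sum(dim=0, keepdim=True)).expand_as(h)  # graph summary
        return self.f_n(torch.cat([agg_p, agg_s, agg_d, g, h, h0], dim=-1))
```

Stacking \(K=3\) such layers (the configuration used in Section 4.2) yields the embedded graph \(G_{t}^{(K)}\).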
### Dispatching Policy Learning with RL
The graph embedding \(\ G_{t}^{(K)}\) outputted from GNN described in the previous section is used as the input for the RL algorithm. More specifically, an actor-critic method is used [24]. As the name suggests, there are two neural networks in the RL process: an actor and a critic. As shown in Figure 2(b), the actor \(\pi\big{(}a_{t}^{\nu}\big{|}G_{t}^{(K)}\big{)}\) maps the embedded graph \(\ G_{t}^{(K)}\) to the probability distribution over the set of all available actions, or the set of processible nodes. The actor model, used to compute the parameterized policy, is structured like a softmax function computing the probability of performing action \(a_{t}^{\nu}\) for the current state \(\ G_{t}^{(K)}\) as follows:
\[\pi\big(a_{t}^{\nu}\big|G_{t}^{(K)}\big)=\frac{\exp\big(f_{\pi}\big(h_{\nu}^{(K)}\big)\big)}{\sum_{u\in A_{G_{t}}}\exp\big(f_{\pi}\big(h_{u}^{(K)}\big)\big)} \tag{3}\]
where \(a_{t}^{\nu}\) denotes the action of selecting operation (node) \(\nu\) to process and \(\nu\) is a processible node in the disjunctive graph's node set \(V\). Following the same notation as in the previous section, \(h_{\nu}^{(K)}\) is the embedded vector of node \(\nu\), and \(f_{\pi}\) denotes an MLP network. \(A_{G_{t}}\) represents the set of available actions for the disjunctive graph \(G_{t}\).
The critic model, as depicted in Figure 2(c), is another network that learns the value function to reliably optimize the policy. The current study approximates the critic function as
\[V^{\pi}\big(G_{t}^{(K)}\big)\approx f_{v}\big(\sum_{i\in V}h_{i}^{(K)}\big) \tag{4}\]
where \(f_{v}\) is an MLP network, and \(\sum_{i\in V}h_{i}^{(K)}\) returns the sum of all node embeddings.
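A compact sketch of the actor and critic heads of Eqs. (3) and (4) is shown below, where the softmax of Eq. (3) is restricted to processible nodes via masking; the module and argument names are ours.

```python
import torch
import torch.nn as nn

class ActorCriticHeads(nn.Module):
    """Actor (Eq. 3): masked softmax over processible nodes.
    Critic (Eq. 4): value of the whole graph from summed node embeddings."""
    def __init__(self, dim=8, hidden=256):
        super().__init__()
        self.f_pi = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(),
                                  nn.Linear(hidden, 1))
        self.f_v = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, h, action_mask):
        # h: (n, dim) final embeddings h_v^(K); action_mask: (n,) bool tensor,
        # True for processible nodes (the available actions A_{G_t}).
        scores = self.f_pi(h).squeeze(-1)
        scores = scores.masked_fill(~action_mask, float("-inf"))
        policy = torch.softmax(scores, dim=-1)   # pi(a_t^v | G_t^(K))
        value = self.f_v(h.sum(dim=0))           # V^pi(G_t^(K))
        return policy, value
```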
It can be observed that the policy \(\pi\big(a_{t}^{\nu}\big|G_{t}^{(K)}\big)\) learned by the actor is now parameterized by a set of parameters \(\Theta=\{\theta_{p},\theta_{s},\theta_{d},\theta_{n},\theta_{\pi},\theta_{v}\}\), corresponding to the MLP networks \(f_{p},f_{s},f_{d},f_{n},f_{\pi}\) and \(f_{v}\). The parameters can be iteratively updated via gradient ascent. In each training iteration, we use the policy \(\pi_{\Theta_{\mathit{old}}}\) with the current "old" parameters \(\Theta_{\mathit{old}}\) to interact with the job shop simulator (Figure 2(d)) and collect transition samples. The parameters \(\Theta\) are updated to optimize the policy:
\[\Theta=\Theta_{\mathit{old}}+\eta\nabla_{\Theta}L(\Theta) \tag{5}\]
where \(\eta\) is the learning rate and \(L(\Theta)\) denotes an objective function to be optimized for an optimal policy.
In this work, proximal policy optimization (PPO) is employed to optimize the policy. To prevent unstable training due to substantial policy changes and encourage exploration
during training, Schulman et al. [24] propose an objective function \(L(\Theta)\) to be optimized at each time step \(t\) as follows:
\[L_{t}(\Theta)=\mathbb{E}[L_{t}^{CLIP}(\Theta)-\alpha L_{t}^{VF}(\Theta)+\beta E_{t}(\pi_{\Theta})] \tag{6}\]
where \(\alpha\) and \(\beta\) are parameters for the objective function. \(L_{t}^{CLIP}(\Theta),L_{t}^{VF}(\Theta)\), and \(E_{t}(\pi_{\Theta})\) are, respectively, a clipped-surrogate function, a square-error value function loss, and an entropy bonus given as follows [24].
1. The clipped-surrogate function is defined as
\[L_{t}^{CLIP}(\Theta)=\mathbb{E}\left[\min\big(\rho_{t}\sigma_{t},\,clip(\rho_{t},1-\epsilon,1+\epsilon)\sigma_{t}\big)\right] \tag{7}\]
where \(\rho_{t}\) denotes a probability ratio of the current and old policies as
\[\rho_{t}=\frac{\pi_{\Theta}\big{(}a_{t}\big{|}G_{t}^{(K)}\big{)}}{\pi_{\Theta _{old}}\big{(}a_{t}\big{|}G_{t}^{(K)}\big{)}} \tag{8}\]
and the estimator of the advantage function \(\sigma_{t}\) at time step \(t\) is computed as
\[\sigma_{t}=\delta_{t}+(\gamma\lambda)\delta_{t+1}+\cdots+(\gamma\lambda)^{T-t+1}\delta_{T-1} \tag{9}\] \[\text{and }\delta_{t}=r_{t}+\gamma V_{\Theta}\big(G_{t+1}^{(K)}\big)-V_{\Theta}\big(G_{t}^{(K)}\big) \tag{10}\]
The coefficients \(\gamma\) and \(\lambda\) are, respectively, the discount factor and the parameter for the advantage function estimator. The clip operation ensures that \(\rho_{t}\) does not move outside the interval \([1-\epsilon,1+\epsilon]\), thereby preventing substantial changes in policy.
2. The square-error value function loss is given as: \[L_{t}^{VF}(\Theta)=\big{(}V_{\Theta}\big{(}G_{t}^{(K)}\big{)}-V_{t}^{target} \big{)}^{2}\] (11) where \(V_{t}^{target}=\sum_{i=t}^{T}r_{i}\) denotes the sum of rewards.
3. The entropy bonus term for the current policy \(\pi_{\Theta}\) is introduced to ensure sufficient exploration and is defined as \[E_{t}(\pi_{\Theta})=-\sum_{a}\pi_{\Theta}(a)\log(\pi_{\Theta}(a))\] (12) where \(a\) is an action in the current embedded graph \(G_{t}^{(K)}\).
The PPO procedure maximizes the objective function \(L(\Theta)\) by updating the parameters \(\Theta\) following the gradient direction \(\nabla_{\Theta}L(\Theta)\). Further discussion of the PPO algorithms can be found in References [24] and [14].
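A minimal sketch of the combined PPO objective of Eqs. (6)-(12), written as a loss to be minimized with a standard optimizer, might look as follows; shapes and batching are simplified, and the advantage \(\sigma_t\) is assumed precomputed via Eqs. (9)-(10).

```python
import torch

def ppo_loss(new_logp, old_logp, advantage, value, value_target,
             probs, eps=0.2, alpha=0.5, beta=0.01):
    """Combine the clipped surrogate, value loss, and entropy bonus of Eq. (6).
    new_logp/old_logp: log pi(a_t | G_t) under the current/old policies.
    probs: full action distribution of the current policy (for the entropy)."""
    ratio = torch.exp(new_logp - old_logp)                             # Eq. (8)
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps)
    l_clip = torch.min(ratio * advantage, clipped * advantage).mean()  # Eq. (7)
    l_vf = (value - value_target).pow(2).mean()                        # Eq. (11)
    entropy = -(probs * torch.log(probs + 1e-8)).sum(dim=-1).mean()    # Eq. (12)
    # Maximizing L(Theta) is equivalent to minimizing its negation.
    return -(l_clip - alpha * l_vf + beta * entropy)
```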
## 4 Experimental Results
This section describes the results of the experiments on a number of benchmark instances to evaluate the schedulers trained using GNN-RL. We will first describe the benchmark instances and details of the schedulers used in the experiments. We then report the experimental results, including: (1) a performance comparison of the GNN-RL method with other dispatching rules for the standard SBJSSP (a special case of the ISBJSSP with \(P_{interrupt}\) = 0); (2) a performance comparison demonstrating the practicability of the above methods for the ISBJSSP, which is subjected to random machine interruptions with probability \(P_{interrupt}\) > 0; and (3) a demonstration of the GNN-RL method's ability to generalize a model trained with instances of a specific size to handle ISBJSSP instances of different sizes.
### The Baseline Benchmark Problem Instances
As a baseline to evaluate and compare the GNN-RL methodology with the PDR methods, a set of 18 job shop scheduling problem instances, each being \(10\times 10\) in size (consisting of 10 machines and 10 jobs), is employed. Each job involves 10 operations and, depending on the benchmark instance, the operations may have different machine processing times. The 18 instances are commonly used as benchmarks for job shop scheduling [16]. Even though this study focuses on the ISBJSSP where machine interruptions may occur, the benchmark instances serve as a fair metric for evaluating scheduling efficacy between the GNN-RL method and the PDR methods.
### Scheduler Models and Configurations
**Priority Dispatching Rules (PDRs).** As mentioned before, PDRs [20] are the most common approaches employed in practice for generating immediate solutions for scheduling job shops with unseen instances. We therefore compare the makespans obtained by the GNN-RL schedulers with those obtained using the following PDRs for prioritizing the preference for job execution:
* Most Total Work Remaining (MTWR): the job that has the greatest number of remaining operations
* Least Total Work Remaining (LTWR): the job that has the fewest number of remaining operations
* Shortest Processing Time (SPT): the job whose next operation has the shortest processing time
* Longest Processing Time (LPT): the job whose next operation has the longest processing time
* First In First Out (FIFO): the first job that arrives
* Last In First Out (LIFO): the last job that arrives
* Shortest Queue Next Operation (SQNO): the job whose next operation requires a machine that has the fewest number of jobs waiting
* Longest Queue Next Operation (LQNO): the job whose next operation requires a machine that has the most number of jobs waiting
* Shortest Total Processing Time (STPT): the job with the shortest total processing time
* Longest Total Processing Time (LTPT): the job with the longest total processing time
* Random: the job that is randomly selected from the set of all doable jobs
The PDRs can be applied irrespective of whether machine interruptions occur during the job operations.
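Most PDRs reduce to a one-line key function over the set of ready jobs. A sketch of SPT (and, for contrast, LPT) under a hypothetical job-dictionary representation:

```python
def spt_rule(ready_jobs):
    """Shortest Processing Time: among jobs whose next operation can be
    loaded now, pick the one whose next operation is quickest.
    Each job is a dict such as {'id': 3, 'next_op_time': 42}."""
    return min(ready_jobs, key=lambda job: job["next_op_time"])

# Other PDRs differ only in the key function (and min vs. max), e.g.:
lpt_rule = lambda jobs: max(jobs, key=lambda job: job["next_op_time"])
```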
**GNN-RL Schedulers.** Initially targeting the baseline benchmark instances of size \(10\times 10\), a random ISBJSSP instance of size \(m\sim\mathcal{U}(5,9)\times n\sim\mathcal{U}(m,9)\), with operation processing times drawn from \(\mathcal{U}(1,99)\), where \(\mathcal{U}\) denotes a uniform distribution, is generated using the ISBJSSP simulator for training the GNN-RL models. The order of machines that each job visits is randomly permuted. After 20 episodes of training on the instance, a new ISBJSSP instance, once again with size \(m\sim\mathcal{U}(5,9)\times n\sim\mathcal{U}(m,9)\), processing times from \(\mathcal{U}(1,99)\), and a randomly permuted machine order, is generated every 100 iterations. Note that, as discussed in a later section, even though training is conducted on small-size instances to limit computational time and demand, the scheduling strategy that the
model learns can be transferred to solve instances of other sizes effectively. Algorithm 1 outlines the procedure to train a GNN-RL scheduler.
```
Generate a random ISBJSSP instance as starting state G_0;
Initialize parameters Θ and the parameterized policy π_Θ;
Initialize iteration = 0;
repeat
    iteration ← iteration + 1;
    for episode = 1, 2, ..., T do
        Observe and collect transition sample (G_{t-1}, a_{t-1}, r_{t-1}, G_t);
        Execute action a_t ← π_Θ(· | G_t^(K)) to assign operations to available machines;
    end for
    Update parameters Θ with gradient ascent to maximize L_t(Θ),
        calculated from the collected transition samples;
    if iteration = 100 then
        Generate a new random ISBJSSP instance as starting state G_0;
        Reset iteration = 0;
    end if
until validation performance has converged.
```
**Algorithm 1** Training procedure for GNN-RL scheduler
An Adam optimizer [25] with a learning rate (\(\eta\)) of \(2.5\times 10^{-4}\) is used. Two different discount factors (\(\gamma\)) are used in training, which are 0.9 and 1.0 (no discount) respectively. We use a GNN with \(K=3\) layers to obtain the graph embeddings. The MLP networks, namely \(f_{p}\), \(f_{s}\), \(f_{d}\), \(f_{n}\), \(f_{\pi}\) and \(f_{v}\), each consist of two hidden layers with 256 ReLU activation units. \(f_{p}\), \(f_{s}\) and \(f_{d}\) have 8-dimensional inputs and outputs. \(f_{n}\) has a 48-dimensional input and an 8-dimensional output. \(f_{\pi}\) and \(f_{v}\) have 8-dimensional inputs and scalar outputs. For the PPO hyperparameters, we set \(\lambda=0.95,\ \epsilon=0.2,\ \alpha=0.5,\ \text{and}\ \beta=0.01\), which are the same as proposed in [14]. For comparison purposes, models are trained both without and with the possibility of machine interruptions. When trained on the SBJSSP (without interruptions), we set the probability of interruption \(P_{interrupt}=0\). The trained GNN-RL models are used for a baseline comparison with the PDR methods. For models that are trained on ISBJSSP instances, we train a different model for each \(P_{interrupt}\) value, ranging from 1% to 20%. The trained GNN-RL models for the ISBJSSP are then used to compare with the SPT (shortest processing time) priority rule (which achieves the shortest makespan among the PDRs for the non-interrupting SBJSSP) for the cases with machine interruptions. All experiments are conducted on a machine equipped with an Intel Core i7-7820X processor.
### Results on Non-interrupting SBJSSP
The goal of the job shop problem is to minimize the makespan, which is employed here as the evaluation criterion for comparing the performances of the different SBJSSP schedulers. Two GNN-RL schedulers, namely GNN-RL (1) with \(\gamma=0.9\) and GNN-RL (2) with \(\gamma=1.0\), are trained. Figure 3 shows the results of the two GNN-RL schedulers and the makespans obtained using the PDR schedulers for the 18 benchmark instances. Among the PDRs, the best scheduler appears to be problem dependent. As shown in Figure 3, on most of the benchmark instances, at least one of the GNN-RL schedulers is able to outperform or be as competitive as the PDR schedulers, assuming no machine interruptions occur.
Table 1 reports the sum of the makespans over all problem instances. As shown, averaged over all the benchmark instances, the GNN-RL schedulers produce shorter makespans than the PDR schedulers. The GNN-RL (2) scheduler with \(\gamma=1.0\) has the best average performance. Among the PDR schedulers, the SPT strategy, which prioritizes the jobs according to the next operation having the shortest processing time, appears to perform the best on average.
### Real-time Adaptive Scheduling of the ISBJSSP for the Baseline Benchmark
In practice, unforeseen interruptions could occur during production. For example, machines in a production line can misbehave unexpectedly at times that require a shutdown. To
\begin{table}
\begin{tabular}{l|c c} \hline \hline Scheduler name & Discount factor & Total makespan \\ \hline GNN-RL (1) & 0.9 & 27337 \\ GNN-RL (2) & 1.0 & **26856** \\ \hline MTWR & - & 28687 \\ LTWR & - & 27996 \\ SPT & - & 27827 \\ LPT & - & 29009 \\ FIFO & - & 28169 \\ LIFO & - & 28497 \\ SQNO & - & 28563 \\ LQNO & - & 28818 \\ STPT & - & 28439 \\ LTPT & - & 28845 \\ RANDOM & - & 28988 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Total makespan computed by the GNN-RL and PDR schedulers for the 18 benchmark instances, without machine interruptions
Figure 3: Makespans obtained using the two trained GNN-RL schedulers and the PDRs on the 18 benchmark instances without machine interruptions.
assess whether the GNN-RL method can cope with real-time changes, we simulate scenarios where, at any given time step, each idling machine (excluding those in the middle of processing a job) has a certain probability of failing or shutting down, denoted as \(P_{interrupt}\), for a duration of 50 time steps. During a simulated machine failure, no further job can be assigned to the machine. More specifically, a machine being shut down is equivalent to removing the nodes utilizing that machine from the disjunctive graph representation for 50 time steps according to our problem formulation. The schedulers have no prior knowledge of the probability and the down time of the machines.
Three GNN-RL models are trained. The first two are the same models, namely GNN-RL (1) and GNN-RL (2), trained without machine interruptions, as described in Section 4.3. The third model, GNN-RL (3), has the same hyperparameters as GNN-RL (2), but is trained with the same probability of interruption, \(P_{interrupt}\), as assigned to the simulation scenario.
We perform 50 simulations for each of the benchmark instances for a number of \(P_{interrupt}\) values, ranging from 1% to 20%. Among the PDR schedulers, SPT shows the best average performance for almost all the cases and is therefore employed here to compare with the GNN-RL schedulers. Figure 4 shows a comparison of the average results between the GNN-RL and the SPT schedulers. It can be seen that for most instances, the GNN-RL schedulers outperform or are as competitive as the SPT scheduler for \(P_{interrupt}<10\%\). Furthermore, the GNN-RL (2) model and the GNN-RL (3) model trained with interruptions perform consistently better than the GNN-RL (1) model. Moreover, Figure 5 plots the performance, averaged over the 18 benchmark instances, of each scheduler with respect to \(P_{interrupt}\). Also shown in Figure 5 are the means and standard deviations (Std) of the scheduling results for 50 randomly generated instances. Based on the results shown in Figures 4 and 5, the following can be observed:
1. As expected, when the probability of interruption for the machines increases, the makespans produced by the schedulers for completing all the jobs increase.
2. All GNN-RL models produce more efficient makespans than the SPT scheduler when the probability of machine interruption is lower than 5%. It can be seen from Figure 5 that, with interpolation, the GNN-RL models can potentially be effective up to an 8-10% probability of machine interruptions. Beyond \(P_{interrupt}=10\%\), the SPT scheduler produces more efficient makespans in this case study.
3. It is interesting to observe that, for the set of benchmark instances tested, the GNN-RL (3) model trained with the same probability of interruption assigned to the simulator performs quite competitively for almost all cases.
4. As can be seen in Figure 5, when the probability of interruption becomes high (\(P_{interrupt}>10\%\)), the standard
Figure 4: Mean of makespans obtained using the GNN-RL and the SPT schedulers on the 18 baseline benchmark instances.
Figure 5: Total makespans of the GNN-RL and the SPT schedulers for different probabilities of interruption.
deviations for the GNN-RL schedulers are higher than that of the SPT scheduler. The higher standard deviation is probably due to the increased uncertainty in machine interruptions, which affects the predictability of the trained GNN-RL models.
In summary, based on the experimentation on the 18 benchmark instances, the GNN-RL schedulers are shown to be robust for the scenarios where the probability of interruptions for each machine is less than 10%, even when the GNN-RL model is trained based on the scenarios with no machine interruptions.
### Scheduling ISBJSSP Instances of Different Sizes
To assess the scalability and generalization of the GNN-RL models to instances of different sizes, we apply the same GNN-RL models trained previously with job shop instances of size \(m\sim\mathcal{U}(5,9)\times n\sim\mathcal{U}(m,9)\) to the 40 LA benchmark instances, whose sizes range from 10\(\times\)5 to 30\(\times\)10 and 15\(\times\)15 [17]. Makespans are obtained from 50 simulations on each of the benchmark instances. Figure 6 shows the makespans computed with the SPT scheduler, the GNN-RL (1) and (2) models trained without interruptions, and the GNN-RL (3) models trained with interruptions. In general, especially for cases with \(P_{interrupt}<10\%\), the GNN-RL schedulers perform, on average, better than or at a level comparable to the SPT scheduler.
In summary, this experimental study with benchmark instances of different sizes shows that the GNN-RL method remains robust for production scenarios with different job shop sizes, even though the models were originally trained on instances whose sizes differ from those of the test instances.
## 5 Summary and Discussion
The ability to assign jobs to machines under possible changes of operational conditions is important in practice. This study shows that GNN and RL can be a viable approach for solving the ISBJSSP, a complex and computationally demanding problem subjected to unforeseen changes to the problem condition. We implemented a simulator to generate ISBJSSP instances for training and to validate the GNN-RL models for real-time scheduling of the ISBJSSP. For the simulations with no machine interruptions, the dispatching rule generated by the best trained GNN-RL scheduler achieves the best overall makespan, exceeding that of the PDRs. (It should be noted that under perfect job shop conditions, mathematical optimization can produce superior schedules [16].) As the key objective of this study, we simulate scenarios where machines in the job shop can possibly be interrupted and shut down temporarily. The results show that the GNN-RL trained schedulers are robust under interruptions and outperform the PDR approaches when the probability of machine interruption is low (less than 10% in the examples). In practice, it is very unlikely that the job shop would remain operational when the machines are deemed to have a high probability of being shut down. Furthermore, with an emphasis on robustness and practicality, our experimental study shows that the GNN-RL method is able to generalize to different job shop sizes subjected to a range of interruption probabilities. Given the speed of outputting actions and its decent performance, the GNN-RL method represents a viable approach applicable to real manufacturing problems that can be closely modeled as an ISBJSSP. While our research utilizes random simulations, domain-specific knowledge should be strategically incorporated in the real production environment to, for example, build specialized reward functions and fine-tune hyperparameters.
Future work could focus on developing methodologies for fine-tuning the parameters of the GNN and RL models in order to further improve the scheduling results. Empirical studies of other learning and adaptive algorithms could be explored. Additional experiments could be conducted to evaluate the approach under machine interruptions in real-world job shop environments. Finally, further investigation of the GNN-RL method may
Figure 6: Mean of the makespans obtained on the LA benchmarks.
include other domain-specific constraints, such as limited buffer capacity, queue time constraints and multi-line scheduling that are commonly encountered in semiconductor manufacturing.
## Acknowledgements
The research was partially supported by Samsung Electronics Co. Ltd., Agreement Number SPO-168006, and the US National Institute of Standards and Technology (NIST), Grant Number 70NANB22H098, awarded to Stanford University. The research has also been partially supported by the Stanford Center at the Incheon Global Campus (SCIGC), which is sponsored in part by the Ministry of Trade, Industry, and Energy of the Republic of Korea and managed by the Incheon Free Economic Zone Authority. Certain commercial systems are identified in this article. Such identification does not imply recommendation or endorsement by Samsung, NIST or SCIGC; nor does it imply that the products identified are necessarily the best available for the purpose. Further, any opinions, findings, conclusions, or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of Samsung, NIST, SCIGC or any other supporting U.S. Government or corporate organizations.
|
2301.05739 | Eco-PiNN: A Physics-informed Neural Network for Eco-toll Estimation | The eco-toll estimation problem quantifies the expected environmental cost
(e.g., energy consumption, exhaust emissions) for a vehicle to travel along a
path. This problem is important for societal applications such as eco-routing,
which aims to find paths with the lowest exhaust emissions or energy need. The
challenges of this problem are three-fold: (1) the dependence of a vehicle's
eco-toll on its physical parameters; (2) the lack of access to data with
eco-toll information; and (3) the influence of contextual information (i.e. the
connections of adjacent segments in the path) on the eco-toll of road segments.
Prior work on eco-toll estimation has mostly relied on pure data-driven
approaches and has high estimation errors given the limited training data. To
address these limitations, we propose a novel Eco-toll estimation
Physics-informed Neural Network framework (Eco-PiNN) using three novel ideas,
namely, (1) a physics-informed decoder that integrates the physical laws of the
vehicle engine into the network, (2) an attention-based contextual information
encoder, and (3) a physics-informed regularization to reduce overfitting.
Experiments on real-world heavy-duty truck data show that the proposed method
can greatly improve the accuracy of eco-toll estimation compared with
state-of-the-art methods. | Yan Li, Mingzhou Yang, Matthew Eagon, Majid Farhadloo, Yiqun Xie, William F. Northrop, Shashi Shekhar | 2023-01-13T19:34:18Z | http://arxiv.org/abs/2301.05739v2 | # Eco-PiNN: A Physics-informed Neural Network for Eco-toll Estimation+
###### Abstract
The eco-toll estimation problem quantifies the expected environmental cost (e.g., energy consumption, exhaust emissions) for a vehicle to travel along a path. This problem is important for societal applications such as eco-routing, which aims to find paths with the lowest exhaust emissions or energy need. The challenges of this problem are threefold: (1) the dependence of a vehicle's eco-toll on its physical parameters; (2) the lack of access to data with eco-toll information; and (3) the influence of contextual information (i.e. the connections of adjacent segments in the path) on the eco-toll of road segments. Prior work on eco-toll estimation has mostly relied on pure data-driven approaches and has high estimation errors given the limited training data. To address these limitations, we propose a novel Eco-toll estimation Physics-informed Neural Network framework (Eco-PiNN) using three novel ideas, namely, (1) a physics-informed decoder that integrates the physical laws governing vehicle dynamics into the network, (2) an attention-based contextual information encoder, and (3) a physics-informed regularization to reduce overfitting. Experiments on real-world heavy-duty truck data show that the proposed method can greatly improve the accuracy of eco-toll estimation compared with state-of-the-art methods.
_Keywords: eco-toll estimation, physics-informed machine learning, spatiotemporal data mining_
## 1 Introduction
The development of on-board diagnostics (OBD) systems, which provide vehicle self-diagnosis and reporting capabilities, offers a transformative way to monitor the real-world functionality of vehicles. Using historical OBD attributes as training data, the eco-toll estimation (ETE) problem aims to quantify the expected environmental cost (e.g., energy consumption, fuel consumption, exhaust emissions, etc.) for a vehicle given a query path and a user-specified departure time. Figure 1 shows an example of the ETE problem with historical OBD data on four paths (path\({}_{1-4}\)) and one query composed of path\({}_{5}\), departure time, and the mass of a vehicle. This problem is of significant societal importance because it is an indispensable function of eco-routing, which aims to identify the most environmentally friendly travel route between two locations on a road network. Solving the ETE problem contributes to saving energy and mitigating transportation's impact on the environment and public health.
The challenges of this problem are three-fold. First, unlike common metrics of path selection such as distance and travel time, a vehicle's eco-toll is affected by the vehicle's physical parameters (e.g., vehicle weight, size, powertrain and power). Second, the paucity of available eco-toll data makes it challenging to develop accurate eco-toll estimation models. Most studies on eco-toll estimation models [13, 14] are conducted on data generated from vehicle simulators. These simulators require second-by-second vehicle velocity profiles as a key input. Due to high cost, however, most large scaled mobile sensors have low sampling rates in practice, which greatly limits the availability of data for testing and training [21]. Finally, the eco-toll on one road segment
Figure 1: An example of the ETE problem with training vehicle OBD data attributes and an ETE query.
is influenced by the contextual information (i.e. the connections of adjacent segments) of the path. For example, a vehicle on a highway will incur an extra eco-toll for acceleration if it just enters from an entrance ramp.
Most related work on eco-toll estimation is based on purely data-driven methods. For example, Huang and Peng proposed a Gaussian mixture regression model to predict energy consumption on individual road segments [14]. The U.S. National Renewable Energy Laboratory (NREL) proposed a lookup-table-based method, which lists energy consumption rates by category of road segment [13]. However, travel eco-toll is influenced by many physical vehicle parameters (e.g., mass, shape, drive cycle, velocity profile, etc.). Thus, these purely data-driven methods have met with limited success due to their large eco-toll data requirements, and they have high estimation errors given limited training data. To produce physically consistent results, Li et al. introduced a physics-guided K-means model [18]; however, it only provides results on paths with historical OBD data.
Much research has been conducted on other travel metrics (e.g., travel time). For example, Fang et al. proposed a contextual spatial-temporal graph attention network (ConSTGAT) [7], which contains a graph attention mechanism to extract the joint relations of spatial and temporal information and uses convolutions over local windows to encode contextual information of road segments. However, they do not consider the influence of a vehicle's physical parameters on its eco-toll, and also require large amounts of training data. More details about the related work are in Appendix B.
In this work, we propose an eco-toll estimation physics-informed neural network (Eco-PiNN) framework to address the ETE problem. Our main contributions are as follows: (1) We propose a physics-informed decoder that integrates physical laws governing vehicle dynamics into Eco-PiNN. (2) We propose an attention-based contextual information encoder to capture a path's contextual information. (3) We introduce a physics-informed regularization (specifically, a jerk penalty) to guide the training of Eco-PiNN. (4) We conduct extensive experiments on real-world heavy-duty truck datasets, showing that Eco-PiNN outperforms the state-of-the-art models.
Purely data-driven machine learning (ML) models often suffer from limited success in scientific domains because of their large data requirements, and inability to produce physically consistent results [26]. Thus, research communities have begun to explore integrating scientific knowledge with ML in a synergistic manner. This kind of work is being pursued in diverse disciplines, such as climate science [6], biological sciences [1], etc. Our work represents the first effort to propose a model that leverages the physical laws of vehicle dynamics with neural networks to address the challenges in the ETE problem. As summarized in Table 1, the proposed model can also be generalized to estimation tasks on other application areas where spatial graphs are defined, such as predicting electric power losses on transmission lines in electricity grids.
In this paper, we only consider variables contained in existing OBD data. Other components that can influence the eco-toll but are either difficult to extract or are not typically found in OBD data (e.g., driver behavior, weather conditions, auxiliary power from the HVAC system, etc.) are not considered here. Computational complexity analysis is also outside the scope of this paper.
## 2 Preliminaries
### Notations and Definitions
Definition 1.: _A **road network** refers to a weighted directed graph (\(\mathcal{G}=(\mathcal{S},\mathcal{N})\)) modeling a road system in a study area, where \(\mathcal{S}\) is a road segment set and \(\mathcal{N}\) is a node set. Each \(s_{i}\in\mathcal{S}\) represents a road segment (e.g., \(s_{1}\) in Figure 1), and a node \(n_{i}\in\mathcal{N}\) represents a road intersection shared by segments (e.g., \(n_{1}\) in Figure 1)._
Definition 2.: _A **path** is a sequence of road segments (e.g., in Figure 1, \(path_{3}=[s_{10},s_{7},s_{3}]\)). The \(i\)th segment of the path is denoted by \(path(i)\) (e.g. \(path_{3}(1)=s_{10}\)). A **path length** is defined as the number of road segments in a path (e.g. \(length(path_{3})=3\)). A **sub-path** is a path that makes up a larger path in the graph (e.g., \([s_{10},s_{7}]\) is a subpath of \(path_{3}\))._
Definition 3.: _An eco-toll estimation (ETE) **query** is represented by a three-element tuple \(qry=(path_{i},t_{0},vp)\), where \(path_{i}\) is the query path, \(t_{0}\) is the departure time, and \(vp\) is the vehicle's physical parameters (e.g., Figure 1 shows a query on \(path_{5}\) of a 16-ton vehicle with departure time at 11am on Wednesday)._
Definition 4.: _On-board diagnostics (OBD) Attributes. Raw OBD data contain a collection of multi-attribute trajectories and the physical parameters of the corresponding vehicles. In this paper, each trajectory
\begin{table}
\begin{tabular}{c c} \hline Application Area & Example Use Cases \\ \hline Transportation & Eco-toll estimation \\ Electricity & Electricity grid loss estimation \\ Environment & River flow estimation \\ Computer Network & Internet traffic estimation \\ \hline \end{tabular}
\end{table}
Table 1: Use cases of proposed model.
is map-matched to a path in the road network associated with a few OBD attributes to train the model, namely, the departure time, vehicle mass, travel eco-toll and travel time on each road segment (e.g., Figure 1 shows OBD training data on four paths)._
### Problem Definition
The eco-toll estimation problem is defined as follows:
* **Input:** An ETE query \(qry\) composed of a query path in a road network, a departure time \(t_{0}\), and a vehicle's physical parameters \(vp\).
* **Output:** The estimated eco-toll of the query \(qry\).
* **Objective:** Minimize the estimation error.
* **Constraints:** In both training and testing datasets, the OBD data, including trajectories and physical parameters of vehicles, are drawn from the same distribution.
In this work, we use energy consumption as a proxy for eco-toll. Other measures of eco-toll (e.g. fuel consumption, exhaustion emissions, etc.) can also be calculated from the energy consumption given the vehicle physical parameters (e.g. fuel type) [4]. We only consider the variables within existing OBD data, so we assume there is no distribution shift between training and testing datasets.
## 3 Proposed Approach
Figure 2 illustrates the overall framework architecture of the proposed solution. First, a **data preprocessing** module processes features extracted from an ETE query and a road network, and represents the query path as a sequence of subpaths. Then we propose a novel **Eco-PiNN** framework to estimate the eco-toll on a segment given the representation of the corresponding sub-path. Finally, in the **postprocessing** stage, the eco-toll estimation of segments of the path are aggregated together to generate the ETE of the query. In this paper, we use the sum operation to do the aggregation.
### Preprocessing
The preprocessing module extracts and aggregates three different types of features from the ETE query.
**Road segment spatial proximity feature.** The relative geographic locations of road segments affect the traffic conditions on them, and hence the eco-toll. For example, vehicles on downtown road segments may consume more energy than those on rural road segments because of frequent stops and starts caused by traffic. Thus, we extract this spatial autocorrelation using a road segment spatial proximity extraction module. We first generate an edge-to-vertex dual graph \(\mathcal{L}(\mathcal{G})\) (also known as a line graph) of the original road network \(\mathcal{G}\), where a vertex of \(\mathcal{L}(\mathcal{G})\) represents a road segment and an edge of \(\mathcal{L}(\mathcal{G})\) represents a road intersection. Then we use a pre-trained NODE2VEC [11] model to represent each segment in a \(d\)-dimensional embedding space (illustrated in Figure 3). After that, nearby road segments in the road network are given similar representations.
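A minimal sketch of the line-graph construction with networkx is shown below on a toy two-segment network; the NODE2VEC training itself is assumed to be done separately on the resulting graph.

```python
import networkx as nx

def segment_line_graph(road_network):
    """Convert a road network (nodes = intersections, edges = segments)
    into its edge-to-vertex dual, where each vertex is a road segment."""
    return nx.line_graph(road_network)

# Toy road network: intersections n1..n3, segments s1, s2.
g = nx.DiGraph()
g.add_edge("n1", "n2", name="s1")
g.add_edge("n2", "n3", name="s2")
lg = segment_line_graph(g)
# lg has vertices ("n1", "n2") and ("n2", "n3"); the edge between them
# encodes the shared intersection n2. A NODE2VEC model pre-trained on lg
# would then map each segment vertex to a d-dimensional embedding.
print(list(lg.nodes), list(lg.edges))
```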
**Categorical features.** We also extract seven categorical features, including five road network attributes: road type, lane number, bridge or not, and the starting and ending endpoint types; as well as two temporal features: departure day and departure time slot. Each categorical feature of a road segment is embedded [8] into a vector with a pre-defined size (i.e., embedding dimension), and these seven vectors are concatenated together as a \(cg\)-dimensional representation of the categorical features, where \(cg\) represents the sum of the embedding dimensions. All these embedding representations are initialized randomly and learned in the model training stage.
**Numerical features.** Six numerical features are extracted, namely, vehicle mass, speed limit, road length, turning angle to the next road segment in a path, direction angle, and elevation change. These numerical features are normalized and organized together as a vector of size \(num=6\). By concatenating all the features together, each road segment of the query path can be represented by a vector in \((d+cg+num)\) dimension.
**ETE query representation.** To address the challenge that the eco-toll on a segment is influenced by its adjacent segments in the path, the last preprocessing step represents the query path as a sequence of subpaths to capture the contextual information for each segment using a sliding window. Figure 3 shows an example in which \(path_{4}\) is represented by four subpaths (\(subpath_{1-4}\)). Specifically, given the contextual window size \(w\), the subpath containing the contextual
Figure 2: Overall framework for estimating a vehicle’s eco-toll on a query path.
information for the \(i\)th road segment in a query path is represented by: \([path(i-w),...,path(i),...,path(i+w)]\) (i.e., subpath length \(l=2w+1\)). For example, in Figure 3, the contextual information for \(s_{6}\) is represented by \(subpath_{2}=[s_{9},s_{6},s_{2}]\), given \(w=1\). We also implement subpath padding (using zero vectors) to ensure every subpath has the same length. For example, a padding will be added before \(s_{9}\) in \(subpath_{1}\) in Figure 3. Finally, we can represent the features of each subpath using a two-dimensional matrix, where each row of the matrix represents the features associated with a road segment, as shown in Figure 3.
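The sliding-window subpath representation with zero padding can be sketched as follows; the feature dimensionality is arbitrary here.

```python
import numpy as np

def subpath_windows(path_features, w=1):
    """Slide a window of size 2w+1 over the per-segment feature vectors of a
    path, zero-padding at both ends, so that each road segment gets a
    (2w+1, d) matrix carrying its contextual information."""
    n, d = path_features.shape
    pad = np.zeros((w, d), dtype=path_features.dtype)
    padded = np.concatenate([pad, path_features, pad], axis=0)
    return [padded[i:i + 2 * w + 1] for i in range(n)]

# Example: a 4-segment path with 3-dimensional features, as in Figure 3.
feats = np.arange(12, dtype=np.float32).reshape(4, 3)
windows = subpath_windows(feats, w=1)   # four (3, 3) matrices
```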
### Eco-PiNN Architecture
As shown in Figure 4, the proposed Eco-PiNN architecture is composed of a **contextual information encoder** to encode the matrix representation of a subpath to a "pseudo" velocity profile of the middle road segment of the subpath, and a **physics-informed decoder** to decode the "pseudo" velocity profile into the ETE of that road segment. We also propose a novel **physics-informed jerk penalty regularization** to guide the training.
#### 3.2.1 Contextual Information Encoder
To capture the contextual information in Eco-PiNN, we estimate the eco-toll of a road segment by employing information about its adjacent segments in the given subpath. Attention mechanisms have been widely used to capture interdependence [23]. Thus, we design an attention-based encoder to learn the local dependency (i.e. how much attention should be given to different road segments in the subpath). Specifically, the architecture of this encoder is inspired by the encoder module of the Transformer model [23], which is composed of a multi-head self-attention mechanism and a fully connected feed-forward network. In detail, the input of the contextual information encoder is a subpath represented by \(X\in\mathbb{R}^{(2w+1)\times(d+cg+num)}\) that contains the contextual information for its middle road segment (i.e. the \((w+1)\)th segment): \(\mathbf{x}=row_{(w+1)}X\in\mathbb{R}^{(d+cg+num)}\). Then, the feature vector of the segment (i.e. \(\mathbf{x}\)) is taken as the \(query\) of the attention mechanism, and the feature matrix in the corresponding subpath (i.e. \(X\)) is taken as the packed \(keys\) and \(values\). The attention mechanism is formulated as1,
Footnote 1: In this paper, the head number is set to 1.
\[Q=\mathbf{x}M^{Q},K=XM^{K},V=XM^{V}, \tag{3.1}\] \[Attention=(\text{softmax}(\frac{QK^{\text{T}}}{\sqrt{d_{k}}})V)M ^{O}, \tag{3.2}\]
where \(M^{Q},M^{K},M^{V},M^{O}\in\mathbb{R}^{d_{k}\times d_{k}}\) are parameter matrices, and \(d_{k}=d+cg+num\) denotes the hidden size of the attention mechanism. Then, the contextual information of the middle road segment can be encoded as \(Attention\) by Equation 3.2. The encoded contextual information is then fed into a multi-layer perceptron with residual connections and layer-norm to estimate a "pseudo" velocity profile of the middle road segment of the subpath. Note that we use Softplus [9] as the activation function of the last layer of the encoder to avoid zero velocity estimation: \(\text{softplus}(x)=\log(1+e^{x})\).
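For concreteness, the single-head attention of Eqs. (3.1)-(3.2) can be sketched in PyTorch as below, with the parameter matrices passed in explicitly; this omits the residual connections, layer norm, and the feed-forward estimator of the pseudo velocity profile.

```python
import torch

def contextual_attention(x_mid, X, M_q, M_k, M_v, M_o):
    """Single-head attention of Eqs. (3.1)-(3.2): the middle segment's feature
    vector is the query; the subpath feature matrix packs keys and values.
    x_mid: (d_k,); X: (l, d_k); M_*: (d_k, d_k) parameter matrices."""
    d_k = x_mid.shape[-1]
    Q = x_mid @ M_q                                     # (d_k,)
    K = X @ M_k                                         # (l, d_k)
    V = X @ M_v                                         # (l, d_k)
    attn = torch.softmax((K @ Q) / d_k ** 0.5, dim=0)   # attention over segments
    return (attn @ V) @ M_o                             # encoded contextual info
```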
#### 3.2.2 Physics-informed Decoder
Next, we decode the "pseudo" velocity profile into an eco-toll estimation using a series of eco-toll consumption equations. This physics-informed decoder thus integrates extra human knowledge (e.g. ETE equations) into the neural network, which enables the model to generate more accurate estimation when the training data are limited.
Equation (3.3) shows an example ETE equation which assumes the energy consumption of a vehicle has four parts, namely, the energy used for acceleration,
Figure 3: Preprocessing with road segment spatial proximity extraction and subpath representation using \(path_{4}\) as an example (subpath length \(l=3\)).
and that needed to overcome the gravitational potential energy change, the rolling resistance at the tires, and the air resistance [3].
\[W=\frac{m}{\eta}\int(av+gh+c_{rr}gv)dt+\int\frac{A}{2\eta}c_{air}\rho v^{3}dt, \tag{3.3}\]
where the energy consumption \(W\) is determined by the vehicle's motion properties (i.e., time (\(t\)), acceleration (\(a\)), velocity (\(v\)), and elevation change (\(h\))) as well as its physical parameters (i.e., mass (\(m\)), front surface area (\(A\)), air resistance coefficient (\(c_{air}\)), and powertrain system efficiency (\(\eta\))). Other symbols in the equation are constants, including gravitational constant \(g\), rolling friction coefficient \(c_{rr}\), and air density \(\rho\).
We begin by defining the "pseudo" velocity profile vector (denoted by \(\mathbf{v}\)) estimated by the contextual information encoder as a velocity profile on a road segment that is uniformly sampled over time. Then, under the assumption that the acceleration between velocity samples is uniform (which is reasonable when the length of a velocity profile vector (i.e. \(|\mathbf{v}|\)) is large and the travel time between every two velocity samples (denoted by \(\Delta t\)) is small), we can calculate \(\Delta t\) using \(\mathbf{v}\) and the length of the road segment \(length\) as follows,
\[\sum_{j=1}^{|\mathbf{v}|-1}\frac{\mathbf{v}(j)+\mathbf{v}(j+1)}{2}*\Delta t= length\]
\[\Rightarrow\Delta t=2*length/\sum_{j=1}^{|\mathbf{v}|-1}(\mathbf{v}(j)+ \mathbf{v}(j+1)). \tag{3.4}\]
For example, if \(\mathbf{v}=[1,2,3,2]\) (m/s) and the length of the road segment is 6.5 meters, then \(\sum_{j=1}^{|\mathbf{v}|-1}(\mathbf{v}(j)+\mathbf{v}(j+1))=13\), and \(\Delta t=1\)s.
Then, the acceleration profile can be represented by a vector \(\mathbf{a}\in\mathbb{R}^{|\mathbf{v}|}\), and the \(j\)th acceleration \(\mathbf{a}(j)\) can be calculated as,
\[\mathbf{a}(j)=\frac{\mathbf{v}(j+1)-\mathbf{v}(j-1)}{2\Delta t}. \tag{3.5}\]
Then, the power profile is represented by a vector \(\mathbf{p}\in\mathbb{R}^{|\mathbf{v}|}\), where \(\mathbf{p}(j)\), the power at the \(j\)th velocity reading in the velocity profile, can be calculated by
\[\mathbf{p}(j)=\frac{m}{\eta}(\mathbf{a}(j)\mathbf{v}(j)+gh+c_{rr}g\mathbf{v} (j))+\frac{A}{2\eta}c_{air}\rho\mathbf{v}^{3}(j). \tag{3.6}\]
Thus, if \(\Delta t\) is small, we can represent the integral in Equation (3.3) by the sum of the energy on each time interval which can be calculated by the average power of this time interval times \(\Delta t\):
\[\hat{W}=\sum_{j=1}^{|\mathbf{v}|-1}\Delta t*\frac{\mathbf{p}(j)+\mathbf{p}(j+ 1)}{2}. \tag{3.7}\]
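Putting Eqs. (3.4)-(3.7) together, the decoder is a short, differentiable-in-principle computation; a NumPy sketch is given below. The constants \(c_{rr}=0.006\) and \(\rho=1.2\,\mathrm{kg/m^{3}}\) are illustrative assumptions, and the boundary accelerations are set to zero where the central difference of Eq. (3.5) is undefined.

```python
import numpy as np

G, C_RR, RHO = 9.81, 0.006, 1.2   # assumed constants g, c_rr, rho

def decode_energy(v, length, h, m, A, c_air, eta):
    """Decode a 'pseudo' velocity profile v (m/s, uniformly sampled in time)
    into an energy estimate, following Eqs. (3.4)-(3.7)."""
    v = np.asarray(v, dtype=float)
    dt = 2.0 * length / (v[:-1] + v[1:]).sum()             # Eq. (3.4)
    a = np.zeros_like(v)
    a[1:-1] = (v[2:] - v[:-2]) / (2.0 * dt)                # Eq. (3.5)
    p = (m / eta) * (a * v + G * h + C_RR * G * v) \
        + (A / (2.0 * eta)) * c_air * RHO * v ** 3         # Eq. (3.6)
    energy = (dt * (p[:-1] + p[1:]) / 2.0).sum()           # Eq. (3.7)
    travel_time = (len(v) - 1) * dt                        # used by the time task
    return energy, travel_time
```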
**Postprocessing.** The estimated energy consumption of the whole query path can be calculated by summing all the energy estimations of the road segments in the path together.
### Training Eco-PiNN with Jerk Penalty Regularization and Physics-informed Multitask Learning
In the training stage, we introduce a jerk penalty as a regularization to make the estimated "pseudo" velocity profiles more similar to real-world velocity profiles using physics knowledge. We also use a physics-informed multitask learning mechanism to leverages travel time data to guide the training of the Eco-PiNN and to prevent over-fitting. In a multitask learning mechanism, information is shared across tasks, so the labeled data in all the tasks is aggregated to obtain a more accurate predictor for each task [27].
Equation (3.8) is the loss function we use to train the proposed framework. It is a weighted sum of three parts, namely the prediction errors of eco-toll \(L_{e}\), that of travel time \(L_{t}\), and a physics-informed jerk penalty \(L_{jerk}\).
\[L=\omega_{e}L_{e}+\omega_{t}L_{t}+\omega_{jerk}L_{jerk}. \tag{3.8}\]
Figure 4: Eco-PiNN architecture. Specifically, \(\text{softplus}(x)=\log(1+e^{x})\). The loss function is detailed in Sec 3.3.
Inspired by [7], we define the prediction errors of eco-toll \(L_{e}\) and travel time \(L_{t}\) as a combination of the errors on each road segment and those on the whole path. Specifically, we use the Huber loss [15] to represent the error on each road segment, since this loss can help to alleviate the impact of the outliers. Given the predicted and true eco-toll \(\hat{W}\) and \(W\) on a road segment, the prediction error of the eco-toll on the road segment \(L_{seg,e}\) is calculated as follows.
\[L_{seg,e}=\begin{cases}\frac{1}{2}(\hat{W}-W)^{2}&|\hat{W}-W|<\delta\\ \delta(|\hat{W}-W|-\frac{1}{2}\delta)&\text{otherwise},\end{cases} \tag{3.9}\]
where \(\delta\) is a hyperparameter to define prediction outliers. Then we use the mean absolute percentage error (MAPE) to represent the error on the whole path. Given the predicted and true eco-toll \(\hat{W}_{path}\) and \(W_{path}\) on each path, the prediction error of the eco-toll on a group of paths \(L_{path,e}\) is calculated as follows.
\[L_{path,e}=average(\frac{|\hat{W}_{path}^{(k)}-W_{path}^{(k)}|}{W_{path}^{(k) }}). \tag{3.10}\]
Thus, the prediction error of eco-toll \(L_{e}\) is the sum of \(L_{seg,e}\) and \(L_{path,e}\):
\[L_{e}=L_{path,e}+\frac{1}{n_{path}}\sum_{k=1}^{n_{path}}(\frac{1}{n_{seg}^{(k )}}\sum_{i=1}^{n_{seg}^{(k)}}L_{seg,e}^{(k,i)}), \tag{3.11}\]
where \(n_{path}\) is the number of paths, \(n_{seg}^{(k)}\) is the number of segments in the \(k\)th path, and \(L_{seg,e}^{(k,i)}\) is the prediction error on the \(i\)th road segment on the \(k\)th path.
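The combined eco-toll error of Equations (3.9)–(3.11) can be written compactly in PyTorch. The sketch below is our minimal rendering, not the released Eco-PiNN code; the function names and the list-of-tensors batching layout are ours, and it assumes (as stated in the Postprocessing paragraph) that a path's eco-toll is the sum of its segments' eco-tolls.

```python
import torch

def huber(w_hat, w, delta=1.0):
    """Per-segment Huber loss, Eq. (3.9)."""
    diff = (w_hat - w).abs()
    return torch.where(diff < delta, 0.5 * diff**2, delta * (diff - 0.5 * delta))

def eco_toll_loss(w_hat_segs, w_segs, delta=1.0):
    """Eq. (3.11) for a batch of paths.
    w_hat_segs, w_segs: lists of 1-D tensors, one tensor of per-segment
    eco-tolls per path."""
    path_terms, seg_terms = [], []
    for w_hat, w in zip(w_hat_segs, w_segs):
        # Path-level eco-toll is the sum of segment eco-tolls.
        w_hat_path, w_path = w_hat.sum(), w.sum()
        # Eq. (3.10): absolute percentage error on the whole path.
        path_terms.append((w_hat_path - w_path).abs() / w_path)
        # Mean per-segment Huber loss on this path (inner sum of Eq. (3.11)).
        seg_terms.append(huber(w_hat, w, delta).mean())
    return torch.stack(path_terms).mean() + torch.stack(seg_terms).mean()
```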
The estimated travel time on a road segment \(\hat{t}\) is calculated by \(\hat{t}=(|\mathbf{v}|-1)\cdot\Delta t\). We can get the prediction error of travel time \(L_{t}\) by replacing the predicted and true eco-toll with predicted and true travel time in Equations (3.9) to (3.11). The travel time estimation for a path is the sum of the time estimation of the segments in the path.
**Jerk penalty.** In addition to prediction errors, we introduce a jerk penalty to minimize the jerk of the predicted velocity profiles. Jerk is defined as the first time derivative of acceleration. Jerk minimization has been widely used to model driving behavior, with the goal of avoiding high jerk rates that can be uncomfortable to vehicle occupants [12, 22]. The jerk penalty also serves as a regularization of Eco-PiNN to reduce overfitting. We define the jerk penalty \(L_{jerk}\) as the mean of the square of the jerk on each road segment:
\[L_{jerk}=\frac{1}{n_{path}}\sum_{k=1}^{n_{path}}\frac{1}{n_{seg}^{(k)}}\sum_ {i=1}^{n_{seg}^{(k)}}\sum_{j=1}^{|\mathbf{v}|}(\mathbf{jerk}^{(k,i)}(j))^{2}, \tag{3.12}\]
where \(\mathbf{jerk}^{(k,i)}(j)\) is the jerk at the \(j\)th velocity reading in the velocity profile on the \(i\)th road segment of the \(k\)th path, and it is calculated as the derivative of the acceleration:
\[\mathbf{jerk}(j)=\frac{\mathbf{a}(j+1)-\mathbf{a}(j-1)}{2\Delta t}. \tag{3.13}\]
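A minimal sketch of the per-segment jerk term of Equations (3.12)–(3.13) is shown below. The outer averages over segments and paths are left to the caller, and the handling of profile endpoints (simply dropped here) is our assumption.

```python
import torch

def jerk_penalty(v, dt):
    """Squared-jerk term for one road segment, per Eqs. (3.12)-(3.13).
    v: 1-D tensor of >= 5 velocity samples; dt: scalar time step from Eq. (3.4)."""
    a = (v[2:] - v[:-2]) / (2.0 * dt)      # Eq. (3.5) at interior samples
    jerk = (a[2:] - a[:-2]) / (2.0 * dt)   # Eq. (3.13) at interior samples
    return (jerk ** 2).sum()               # inner sum of Eq. (3.12)
```

Because \(\mathbf{v}\) is produced by the encoder, automatic differentiation propagates this penalty back into the encoder, discouraging jagged velocity profiles.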
## 4 Evaluation
**Experiment Goals:** We validated Eco-PiNN with (i) a _comparative analysis_ to compare the prediction accuracy against several strong baseline methods, (ii) _ablation studies_ to evaluate the contributions of the physics-informed decoder, jerk penalty, contextual information and multitask learning, and (iii) a _sensitivity analysis_ to evaluate the impact of key parameters (e.g. the weight of jerk penalty).
### Experiment Design
#### 4.1.1 Data
The historical OBD dataset was collected by the Murphy Engine Research Laboratory of the University of Minnesota. It recorded 1343 trips for four diesel trucks in Minnesota operating from Aug. 10th, 2020 to Feb. 13th, 2021. The statistical information of these data is detailed in Appendix A. We divided one day equally into six time slots, and represented the timestamp of entering the road segment by the corresponding time slot.
We generated the testing data for our experiments by randomly selecting 20% of the 1343 vehicle trips. Testing data never changed throughout the experiments, and it contained both travel time and fuel consumption data. To ensure a robust evaluation, the remaining 80% of the trip data was randomly divided ten different times in ratios of 60% training and 20% validation data. Since fuel consumption data is often limited in real world settings, we simulated this challenge by assuming that only a small percentage (e.g., 5%) of trips (randomly sampled) in any training or validation dataset contained the corresponding ground truth fuel consumption data. As noted earlier, the testing data always contained both travel time and fuel information, as shown in Figure 5. Then, we generated datasets containing pairs of eco-toll-estimation (ETE) queries and corresponding travel time and fuel consumption (fuel consumption may have no value). Specifically, each query of all configurations of training/validation datasets corresponded to a sub-trip whose path length was 20, and the step between two sub-trips was set to 5. From the same testing data, different testing datasets were generated based on different settings of the query's path length (from 1 to 200). For each of the ten configurations of our training and validation datasets, we trained the model and tested it using the testing datasets. Finally, we calculated the mean and standard deviation of the estimation error on each testing dataset.
Figure 5: Description of how the datasets were split.
#### 4.1.2 Hyperparameter Settings
The embedding size of the NODE2VEC representation of each road segment (\(d\)) was 32. The walk length was 20. The context size was 10. The number of walks to sample for each node was 10. The p and q parameters in NODE2VEC were set to 1. The number of negative samples used for each positive sample in NODE2VEC was also 1. The embedding size of the road type and the endpoints type was 4. The embedding size of other categorical features, including starting time, the day of the week, lane number, and bridges, was 2. Thus, the dimension of the aggregated features was 58 (i.e., \(d+cg+num=58\)). The context window size was \(w=1\). The output size of the first fully-connected (FC) layer after the attention mechanism was 32. The output size of the second layer was 58, which equaled the dimension of the aggregated features for the residual connection and layer normalization. After that, the output size of the final linear layer in the encoder was 60 (i.e., \(|\mathbf{v}|=60\)), and the weights for different loss functions were: \(\omega_{e}=0.2\), \(\omega_{t}=0.8\) and \(\omega_{jerk}=10^{-6}\). We used the Adam optimization algorithm [16] to train the parameters with a learning rate of \(10^{-4}\) and a batch size of 512. The parameters were set through a grid search. We used an early stopping mechanism to avoid over-fitting: training was terminated if the model performance stopped improving on the validation set for ten training epochs, after which the best-performing model was saved.
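The early-stopping rule described above can be implemented in a few lines. The sketch below is a generic rendering with our own names, not the authors' code; it assumes a PyTorch model whose state dict is checkpointed in memory.

```python
class EarlyStopper:
    """Stop after `patience` epochs without validation improvement and
    keep the best-performing model state."""
    def __init__(self, patience=10):
        self.patience = patience
        self.best = float("inf")
        self.bad_epochs = 0
        self.best_state = None

    def step(self, val_loss, model):
        if val_loss < self.best:
            # Improvement: reset the counter and checkpoint the model.
            self.best, self.bad_epochs = val_loss, 0
            self.best_state = {k: v.detach().clone()
                               for k, v in model.state_dict().items()}
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience  # True -> terminate training
```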
#### 4.1.3 Approaches for Comparison
Using mean absolute percentage error (MAPE) as the metric, we compared the prediction accuracy of Eco-PiNN2 against three baseline methods:
Footnote 2: Our code: [https://github.com/yang-mingzhou/Eco-PiNN](https://github.com/yang-mingzhou/Eco-PiNN)
(1) The National Renewable Energy Laboratory (**NREL**) lookup-table method [13]. Google Maps claimed that they used the energy estimation models developed by NREL in their recently launched eco-routing function [10], so we treated this method as a state-of-the-art energy consumption estimation model. It aggregates road segments based on their features and creates a look-up table using the average fuel consumption rate on the aggregated road segments. We used the numerical and categorical features described in Section 3.1 to generate the look-up table, and the bin widths for the numerical features were as follows: _mass_: 10000kg; _speed limit_: 10 km/h; _road length_: 100m; _turning angle to the next road segment in a path_: 45 degree; _direction angle_: 45 degree; and _elevation change_: 10 m. The fuel consumption rate on unseen road segments was represented by the average fuel rate of its nearest neighbour bin measured by the Euclidean distance.
(2) **ConSTGAT**[7]. We needed to learn whether the state-of-the-art travel time estimation models would work in an ETE task if physical features are added into the training data. Thus, we implemented ConSTGAT using the same features described in Section 3.1. We treated it as a state-of-the-art travel time estimation method because it had been deployed in production at Baidu Maps and successfully served real-world requests [7]. For those parameters that were not mentioned in [7], we used similar parameter settings to those used by Eco-PiNN, as well as the same early stopping method.
(3) **CI Encoder+FC**. To verify whether integrating the physics laws with the neural network improves the performance of Eco-PiNN, we developed a model named Contextual Information Encoder + FC (**CI Encoder+FC**) to conduct an ablation test. This model first encodes the contextual information using the same encoder as Eco-PiNN, and decodes the velocity profile to ETE using a fully-connected layer.
### Comparative Analysis
To show how well the methods perform with different amounts of eco-toll information, we tested two settings. In the first setting, 5% of the queries in the training and validation datasets had corresponding energy consumption data. The results are shown in Table 2. As can be seen, Eco-PiNN significantly outperformed the baseline methods with all path lengths, especially when the path length was small. For example, when the path length was 1, the Eco-PiNN model was about 20% more accurate than the state-of-the-art eco-toll estimation model (NREL). When the path length was 200, the Eco-PiNN model was still 3% more accurate than the baseline methods. In the second setting, in Table 3, the percentage of queries in the training/validation datasets that had corresponding energy consumption data was 20%. In this case, given more training data, the accuracy of all methods improved, and Eco-PiNN still outperformed all baseline methods. In conclusion, it is reasonable to say that Eco-PiNN significantly outperforms the state-of-the-art methods.
### Ablation Studies and Sensitivity Analysis
In this section, we evaluate the contribution of the proposed neural network components to accuracy improvement. In the sensitivity analysis, 5% of the queries in the training and validation datasets had corresponding energy consumption information.
**Physics-informed decoder** The contribution of the physics-informed decoder can be analyzed by comparing the performance of Eco-PiNN with that of CI Encoder+FC in both tables. In the first setting shown in Table 2, Eco-PiNN significantly outperformed CI Encoder+FC (e.g. 18% more accurate when path length was 1) because of the integration of physics laws. In the second setting in Table 3, given more training data, the accuracy of CI Encoder+FC also improved, and Eco-PiNN still outperformed it even though the accuracy difference between them decreased. Nevertheless, the standard deviation of Eco-PiNN under this setting was significantly less than that of CI Encoder+FC, which shows that incorporating the physics laws also improves the stability of the model. In conclusion, it is reasonable to say that the integration of physics laws in Eco-PiNN improves the performance and stability.
**Jerk penalty** To analyze the effect of the proposed jerk penalty, we fixed \(\omega_{e}=0.2\) and \(w=1\) and varied the weight of the jerk penalty in the loss function \(\omega_{jerk}\) from 0 to \(10^{-4}\). When \(\omega_{jerk}=0\), the penalty does not affect model training, so the contribution of the penalty can be revealed by comparing the prediction accuracy of the model with \(\omega_{jerk}>0\) and that with \(\omega_{jerk}=0\). The results are shown in Figure 6. The comparison between the MAPE with \(\omega_{jerk}=10^{-6}\) and that with \(\omega_{jerk}=0\) indicates that the jerk penalty component helps improve the estimation accuracy, and the improvement increases with longer paths.
**Multitask learning** To analyze the effect of the proposed multitask learning component, we fixed \(\omega_{jerk}=10^{-6}\) and \(w=1\) and varied the multitask learning weights (i.e., \(\omega_{e}\) and \(\omega_{t}\)), where \(\omega_{t}=1-\omega_{e}\). We varied \(\omega_{e}\) from 0 to 1. When \(\omega_{e}=1\), the multitask learning component degenerates to an eco-toll estimation task, so the effect of the multitask learning component can be evaluated by comparing the accuracy when \(\omega_{e}<1\) against that when \(\omega_{e}=1\). The results are shown in Figure 7. The comparison between the MAPE with \(\omega_{e}=0.2\) and that with \(\omega_{e}=1\) indicates that the multitask learning component helps to improve Eco-PiNN performance, and the improvement increases with increasing path length.
**Window size.** We also analyzed the effect of the contextual window size by setting \(w\) to 0, 1, and 2 while fixing \(\omega_{e}=0.2\) and \(\omega_{jerk}=10^{-6}\). When \(w=0\), no contextual information is considered. The results are shown in Figure 8. By comparing the MAPE when \(w=0\) with that when \(w=1\), we can see that leveraging the contextual information helped improve the accuracy.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{6}{c}{MAPE: Mean (Standard deviation)} \\ \hline Path length & 1 & 10 & 20 & 50 & 100 & 200 \\ \hline NREL & 96.45(6.31) & 28.68(3.07) & 24.03(2.92) & 20.41(3.30) & 19.39(3.65) & 18.83(3.93) \\ ConSTGAT & 136.45(8.04) & 27.51(1.44) & 23.39(0.90) & 20.55(0.89) & 19.94(1.74) & 20.01(2.81) \\ CI Encoder+FC & 91.34(6.99) & 25.30(0.86) & 21.95(0.80) & 19.67(0.93) & 19.06(1.31) & 18.81(2.31) \\ Eco-PiNN & **73.70**(2.37) & **21.74**(1.26) & **18.50**(1.36) & **15.83**(1.72) & **15.13**(1.68) & **15.78**(1.79) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Prediction accuracy when 5% of training/validation data contained energy consumption.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{6}{c}{MAPE: Mean (Standard deviation)} \\ \hline Path length & 1 & 10 & 20 & 50 & 100 & 200 \\ \hline NREL & 92.20(3.14) & 26.80(0.93) & 22.38(0.89) & 18.87(0.96) & 18.35(1.08) & 18.43(1.56) \\ ConSTGAT & 110.01(6.76) & 23.37(0.55) & 19.85(0.56) & 17.47(0.90) & 17.22(1.21) & 18.07(1.41) \\ CI Encoder+FC & 77.27(2.60) & 21.18(0.55) & 18.22(0.71) & 15.89(1.14) & 15.20(1.62) & 15.23(1.82) \\ Eco-PiNN & **70.29(0.89)** & **20.56(0.18)** & **17.34(0.19)** & **14.68(0.22)** & **14.12(0.30)** & **14.86(0.64)** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Prediction accuracy when 20% of training/validation data contained energy consumption.
Figure 6: Effect of jerk penalty.
## 5 Conclusion
The eco-toll estimation problem quantifies the environmental cost for a vehicle to travel along a path. This problem is of significant importance to society and the environment. In this work, we propose a novel Eco-toll estimation Physics-informed Neural Network (Eco-PiNN) framework that integrates the physical laws governing vehicle dynamics with a deep neural network. Our experiments on real-world vehicle data show that Eco-PiNN yields significantly more accurate eco-toll estimation than state-of-the-art methods. In the future, we plan to generate synthetic datasets to analyze the generalization, computational complexity, and sample complexity of Eco-PiNN. We also plan to model the influence of other components (e.g., weather conditions) on eco-toll to further improve its estimation accuracy.
## Acknowledgment
This material is based upon work supported by the National Science Foundation under Grants No. 1901099, 2147195, and the USDOE Office of Energy Efficiency and Renewable Energy under FOA No. DE-FOA-0002044. We also thank Kim Koffolt and the Spatial Computing Research Group for valuable comments and refinements.
## Appendix A On-Board Diagnostics (OBD) and Road Network Data
Raw OBD data contain a collection of trajectories with hundreds of attributes, together with the physical parameters of the corresponding vehicles. Each multi-attribute trajectory records a vehicle's status along a trip and is in the form of a sequence of spatial points, each of which is associated with the vehicle's instantaneous status such as location, exhaust gas emission rate, engine temperature, cumulative energy consumption, and speed.
The OBD dataset used in this work recorded 1343 trips for four diesel trucks in Minnesota operating from Aug. 10th, 2020 to Feb. 13th, 2021 at 1 Hz resolution. On average, each trip contained 89.97 road segments. For each road segment, the average length was 608.2157 meters; the average fuel consumption was 0.213421 liter; the average travel time was 28.04122 seconds. In this work, we represented the fuel consumption in the unit of 10 ml (e.g., the average fuel consumption was 21.3421*10ml) so that the prediction errors of the eco-toll task and that of the travel time task were within the same order of magnitude. The mass of the trucks varied with different trips, averaging 23257.71 kg with a standard deviation of 7844.85 kg. Each truck had 10 wheels, with a wheel radius of 0.5003 m. The size of the front area was 10.5 m\({}^{2}\). Engine efficiency was 0.56. The relation between energy consumption and diesel fuel consumption was 38.6 MJ (i.e., 10.6 kWh) of thermal energy for 1 liter of diesel [4].
The road network we used was from OpenStreetMap [19] of Minneapolis with 728548 road segments and 280217 intersections. To calculate the elevation change of each road segment, we used the Esri elevation service [20] to capture the elevation of each road intersection in the map (using the mean elevation in a 10m \(\times\) 10m cell).
## Appendix B Broadly Related Work
### Eco-toll estimation
Macroscopic models of eco-toll estimation (e.g., MOVES [17]) are used to estimate network-wide eco-toll inventories according to aggregated network parameters. By contrast, microscopic models [2, 24] estimate a vehicle's instantaneous eco-toll according to physical laws (e.g., classical mechanics and vehicle combustion reaction models) using the vehicle's velocity profile and physical parameters (e.g., mass and front surface area), as well as some extra information, such as energy source (e.g. diesel or gas). However, because precise velocity profiles are hard to predict due to uncertainty in traffic, microscopic models are mainly used in retrospective research.
Mesoscopic eco-toll estimation models use the properties of road segments such as average speed and road length as the explanatory variables, and the eco-toll on each road segment as the dependent variable. For example, the National Renewable Energy Laboratory (NREL) proposed a lookup-table-based method, which lists energy consumption rate by category of road segments [13]. Huang and Peng proposed a Gaussian mixture regression model to predict energy consumption on individual road segments [14]. Li et al. introduced a physics-guided K-means model that works on paths with historical data [18]. However, most mesoscopic models are purely data-driven models, which require large amounts of eco-toll data. Also, their results may not be consistent with physical laws, leading to poor generalizability.
Figure 7: Effect of multitask learning component.
Figure 8: Effect of contextual window size.
### Travel time estimation
Our comparative experiments showed that the proposed PiNN model predicts energy consumption more accurately than ConSTGAT, a state-of-the-art travel time estimation (TTE) model. TTE, also known as estimated time of arrival (ETA), aims to estimate a vehicle's travel time for a given path and departure time. Research on the TTE problem mainly focuses on estimating traffic conditions by extracting the spatial-temporal information and the contextual information of a path [25, 5, 7]. TTE models cannot be applied to estimate the environmental cost of a vehicle's travel because: 1) TTE models ignore the physical parameters that affect a vehicle's fuel efficiency, since travel time is mainly affected by traffic conditions. 2) TTE models can be trained by large-scale travel time data extracted from Global Positioning System (GPS) trajectory data using the logs of location-based service applications, such as DiDi Chuxing [25], Baidu Maps [7], and Google Maps [5]. By contrast, the eco-toll data can only be extracted from historical OBD data, or simulated by second-by-second vehicle trajectory data; both of which are limited. The limited availability of such data makes eco-toll estimation significantly more challenging than travel time estimation.
2310.04366 | Swordfish: A Framework for Evaluating Deep Neural Network-based Basecalling using Computation-In-Memory with Non-Ideal Memristors | Basecalling, an essential step in many genome analysis studies, relies on large Deep Neural Networks (DNNs) to achieve high accuracy. Unfortunately, these DNNs are computationally slow and inefficient, leading to considerable delays and resource constraints in the sequence analysis process. A Computation-In-Memory (CIM) architecture using memristors can significantly accelerate the performance of DNNs. However, inherent device non-idealities and architectural limitations of such designs can greatly degrade the basecalling accuracy, which is critical for accurate genome analysis. To facilitate the adoption of memristor-based CIM designs for basecalling, it is important to (1) conduct a comprehensive analysis of potential CIM architectures and (2) develop effective strategies for mitigating the possible adverse effects of inherent device non-idealities and architectural limitations. This paper proposes Swordfish, a novel hardware/software co-design framework that can effectively address the two aforementioned issues. Swordfish incorporates seven circuit and device restrictions or non-idealities from characterized real memristor-based chips. Swordfish leverages various hardware/software co-design solutions to mitigate the basecalling accuracy loss due to such non-idealities. To demonstrate the effectiveness of Swordfish, we take Bonito, the state-of-the-art (i.e., accurate and fast), open-source basecaller as a case study. Our experimental results using Swordfish show that a CIM architecture can realistically accelerate Bonito for a wide range of real datasets by an average of 25.7x, with an accuracy loss of 6.01%. | Taha Shahroodi, Gagandeep Singh, Mahdi Zahedi, Haiyu Mao, Joel Lindegger, Can Firtina, Stephan Wong, Onur Mutlu, Said Hamdioui | 2023-10-06T16:37:03Z | http://arxiv.org/abs/2310.04366v2

# Swordfish: A Framework for Evaluating Deep Neural Network-based Basecalling using Computation-In-Memory with Non-Ideal Memristors
###### Abstract.
_Basecalling_, an essential step in many genome analysis studies, relies on large Deep Neural Networks (DNNs) to achieve high accuracy. Unfortunately, these DNNs are computationally slow and inefficient, leading to considerable delays and resource constraints in the sequence analysis process. A Computation-In-Memory (CIM) architecture using memristors can significantly accelerate the performance of DNNs. However, inherent device non-idealities and architectural limitations of such designs can greatly degrade the basecalling accuracy, which is critical for accurate genome analysis. To facilitate the adoption of memristor-based CIM designs for basecalling, it is important to (1) conduct a comprehensive analysis of potential CIM architectures and (2) develop effective strategies for mitigating the possible adverse effects of inherent device non-idealities and architectural limitations.
This paper proposes Swordfish, a novel hardware/software co-design framework that can effectively address the two aforementioned issues. Swordfish incorporates seven circuit and device restrictions or non-idealities from characterized real memristor-based chips. Swordfish leverages various hardware/software co-design solutions to mitigate the basecalling accuracy loss due to such non-idealities. To demonstrate the effectiveness of Swordfish, we take Bonito, the state-of-the-art (i.e., accurate and fast), open-source basecaller as a case study. Our experimental results using Swordfish show that a CIM architecture can realistically accelerate Bonito for a wide range of real datasets by an average of 25.7\(\times\), with an accuracy loss of 6.01%.
\({}^{1}\)TU Delft \({}^{2}\)ETH Zurich
## 1. Introduction
_Basecalling_ is the first computational step required to translate noisy electrical signals generated by modern sequencing machines to strings of DNA nucleotide bases (i.e., {A, C, G, T}), also known as DNA reads or simply reads [6, 12, 53, 60, 98, 107, 127, 131, 133]. The accuracy of basecalling directly affects the overall accuracy and the computational effort (in terms of required algorithms and their complexity and runtimes) of subsequent genome analysis steps. The speed of basecalling also determines how fast one can run through all computational steps of a genomic study [107, 120, 134]. Therefore, accurate and fast basecalling is critical for advancing genomic studies that hold the key to unlocking the potential of precision medicine, facilitating virus surveillance, and driving advancements in healthcare and science [5, 6, 7, 13, 14, 15, 28, 29, 34, 41, 42, 62, 67, 84, 87, 103, 137, 142].
Current state-of-the-art (SotA) basecallers leverage Deep Neural Networks (DNNs) to achieve high accuracy [31, 96, 105, 120, 140, 149]. However, SotA DNN-based basecallers encounter different shortcomings when implemented using different approaches. Specifically, DNN-based basecaller designs on Central Processing Units (CPUs) and Graphics Processing Units (GPUs) face multiple major shortcomings: (1) they are computationally intensive and slow [107, 120, 134], (2) they require extensive data movement between the processor and memory [16, 17, 79], and (3) they are limited by the use of costly hardware, such as expensive SRAM memories that require 6 transistors for storing only 1 bit of information [30, 102]. When implemented on a hardware accelerator, these DNN-based basecallers face two other limitations: (1) They rely on costly floating-point (FP) computations, which place high demands on the required system's memory bandwidth and compute units with FP capability. This makes hardware acceleration difficult due to the large number and size of neural network model parameters. (2) They use costly Machine Learning (ML) techniques such as skip connections1[96, 123, 140], leading to added computation, memory, and storage overheads (e.g., to store the activation parameters that are fed to the last layers of the NN) [120]. Therefore, over the past decade, both industry and academia [27, 68, 101, 111, 115, 119] have explored the use of Computation-In-Memory (CIM)2 using memristor-based devices to accelerate DNNs.
Footnote 1: Skip connection is an ML technique that allows skipping a few neural network layers and forwarding the output to the input of a layer further ahead.
Footnote 2: Interchangeably, also referred to as Processing-In-Memory (PIM) [85].
This growing interest in using CIM for resolving the shortcomings of DNNs is driven by two main factors: (1) the potential of the CIM paradigm to process data where it resides to reduce the large performance and energy overheads of data movement and (2) the analog operational properties of these nanoscale emerging technologies (e.g., memristors) that intrinsically support efficient Vector-Matrix-Multiplication (VMM), multiple of which are used to implement a Matrix-Matrix-Multiplication (MMM) that is the most dominant operation in DNNs. However, the memristor-based CIM solutions for basecalling can greatly degrade the DNN inference accuracy due to (1) the limited quantization levels supported by memristor devices [27, 111] and (2) non-idealities of memristive devices and circuits used to adopt memristor-based memory arrays, such as sneak paths [48, 118] and the non-linearity of peripheral circuitry [58, 83, 147]. To propose viable solutions for accelerating the large-scale DNN-based basecallers, these aspects must be considered at all computing stack layers, i.e., application, architecture, and device. Such considerations are only possible with a framework capable of evaluating the impact of the non-idealities in memristor-based CIM architecture on the end-to-end basecalling accuracy.
This framework should also be able to account for the overhead that the solutions to overcome the accuracy loss may bring.
To this end, we propose _Swordfish_, a modular and extensible hardware/software co-design framework that allows us to (1) evaluate the impact of memristor non-idealities and CIM limitations on the accuracy and performance of basecalling and (2) investigate potential mitigation techniques and measure their effect on accuracy for each non-ideality (**Contribution #1**). Swordfish is used to investigate the acceleration of basecalling via emerging computing paradigms and technologies. Specifically, with Swordfish, we comprehensively investigate the potential of accurate acceleration of a SotA basecaller (Bonito) on a SotA CIM architecture (PUMA [(9)]) by accounting for the non-idealities of the underlying devices and technologies of the underlying architecture, for the first time (**Contribution #2**). Swordfish integrates real-world applications with multiple critical comparison metrics, distinct mitigation strategies to tackle the challenges of novel hardware, and comprehensive real measurements to guide the modeling of memristors. Our evaluations using Swordfish show that on a wide range of real genome datasets, PUMA accelerates Bonito, a SotA basecaller, by an average of 25.7\(\times\) realistically (i.e., the average throughput improvement is 25.7\(\times\) when we consider essential mitigation techniques to prevent huge accuracy loss). This performance still comes at the cost of a 6.01% accuracy loss (Section 5). Our evaluations also yield several key suggestions and recommendations for DNN, hardware, and system designers of future emerging accelerators with memristors for DNN-based basecallers and other applications that have two most important metrics (e.g., accuracy and performance) to consider in their evaluation (**Contribution #3**). Specifically, our investigation using Swordfish results in multiple unique insights: (1) Our results challenge the prevalent assumption that DNN-based applications will automatically succeed on memristor-based CIM due to inherent redundancy in large neural networks, (2) combining mitigation techniques at only one abstraction level (e.g., circuit or system level) does not necessarily improve the accuracy loss as they can potentially go against each other, and (3) combining multiple mitigation techniques at the circuit and system levels can offset the accuracy loss induced by non-idealities significantly.
## 2. Background and Motivation
This section briefly discusses the necessary background and motivation for this work. We refer the reader to comprehensive reviews [85, 6, 18, 45, 98] for more details.
### Genome Sequencing Pipeline
The genome sequencing pipeline consists of computational steps we employ to acquire genome sequences as strings of DNA characters (i.e., {A, C, G, T}) [127, 6, 131, 60, 98, 107, 133] for subsequent analysis in bioinformatics, e.g., cell type identification, identification of marker genes, and variant detection.
Although, currently, the most available data and tools in the genomics realm are for short reads [20, 39] (mainly produced by Illumina sequencers), working with highly accurate long genome sequences is generally favorable as they reduce the computational cost of reconstructing the genome. For this reason, there is a large momentum towards accurate long-read sequencing [6]. Our work focuses on finding solutions and analysis tools that target long reads while also not discarding tools (e.g., GenAx [39] and GenASM [20]) designed for short reads. A leading method for long-read sequencing is the nanopore sequencing technology. Nanopore sequencers [90, 93, 94] translate raw signal squiggles into bases (A, C, G, T) using complex neural networks. Today, Oxford Nanopore Technologies (ONT) is the company that produces the most commonly used sequencers based on nanopore technology.
Fig. 1 illustrates the nanopore genome sequencing pipeline [107] and the placement and execution time breakdown of each of its steps. We use SotA tools for each step and run them on the datasets described in Section 4.
We make two main observations. First, basecalling is the first computational step in the pipeline. Second, basecalling dominates the execution time of a single run in the pipeline. This step alone makes up more than 40% of the entire execution time. Our empirical observation aligns with those in prior works [107, 33, 81].
### Basecalling
Basecalling is responsible for converting raw electrical signals produced by a nanopore sequencer to digital genome symbols, i.e., {A, C, G, T} [127, 12, 53, 60]. Recent works [92, 95, 96, 134] heavily investigate the use of DNNs for basecalling as they can provide higher accuracy than Hidden Markov Model (HMM) based techniques [91].
There are generally two approaches for improving the accuracy and/or performance of a basecaller: 1) software-based and 2) hardware-based. Software-based methods propose new algorithms (e.g., DNNs [95, 140, 96] instead of HMMs [91]) or faster and/or smaller DNN architectures [120, 140]. Hardware-based approaches propose various hardware platforms for the target algorithm (i.e., DNN or HMM) to improve performance with (hopefully) small impact on accuracy [81, 120].
We observe four main shortcomings in SotA basecallers, which limit their performance and/or amenability to hardware acceleration:
* SotA basecallers are slow and energy inefficient. For example, Guppy basecalls 3 Giga basepairs (Gbps) in \(\sim\)6 hours while a following step in the genomics pipeline, such as read mapping using minimap2 [71], takes only \(\sim\)0.11 hours [120].
* SotA basecallers use DNN models with costly skip connections [123]. For example, Bonito needs an additional \(\sim\)21% of model parameters (along with associated memory and storage overheads) for skip connections and requires additional computation on them. Note that a skip connection permits bypassing certain layers within the neural network, transmitting the output of one layer as the input to subsequent layers [123]. These connections are costly because they (1) typically force the network to perform additional computation, for example, to match the channel sizes, (2) incur extra memory and storage overhead, as they require storing the activation parameters that are fed to the later layers [16, 17], and (3) incur additional off-chip data movement overhead when these networks are run on conventional processor-centric hardware platforms, like CPUs and GPUs.
Figure 1. Overview of the nanopore genome sequencing pipeline and execution time breakdown of different steps.
* SotA basecallers exploit 32-bit floating point precision for their model parameters [96, 134, 140]. This effectively increases (1) the required memory bandwidth and the demand for processing units with FP compute capability, and (2) the inefficiency of hardware realizations of the underlying models.
* SotA basecallers incur expensive data movement between the computation units and the memory units [79, 81, 120].
We emphasize that 40% of execution time spent on basecalling (Section 2.1), the first and arguably most critical step in the pipeline, is significant and worth accelerating. Today's best basecallers often underperform on SotA systems, generating bottlenecks. A potentially 40% decrease in genome analysis runtime implies a proportional reduction in power and energy, which is critical considering the extensive data and computational demands of modern genome analysis systems. Therefore, optimizing basecalling contributes greatly to improving the efficiency and sustainability of the genomics pipeline.
### Memristor-based CIM and Associated Non-Idealities
Resistive memories or memristive devices, such as ReRAM, PCM, and STT-MRAM [59, 69, 119, 132], have recently been introduced as suitable candidates for both storage and computation units that can efficiently perform vector-matrix multiplication [138] and logical bulk bit-wise operations [26, 113, 114, 139, 73], as they can follow Kirchhoff's law inherently [121]. Therefore, many recent works [9, 26, 27, 111, 112, 139, 143, 144, 145] exploit these devices in their CIM architectures. Memristor devices also enjoy non-volatility, high-density, and near-zero standby power [139, 73, 11].
A typical memristor-based memory crossbar capable of VMM and other logical operations is shown in Fig. 2[9, 26, 27, 111, 139] alongside its possible non-idealities.
This memristor-based structure can suffer from at least four types of non-idealities or variations that can eventually affect the results of the enabled VMM operation, i.e., lead to errors in the VMM result: (1) The non-ideal digital to analog converter (DAC), due to the effective resistive load (known as \(R_{Load}\)) in its circuit [55], (2) Variation of synaptic conductance, which includes both imperfect programming operation (commonly known as write variations) and the process variation that exist in memristors [4, 23, 70, 148], (3) The wire resistance and sneak paths, due to imperfect wires (i.e., wires with different resistances) and the changes in the voltages of the internal nodes while performing a VMM operation [56, 148], and (4) non-ideal sensing circuit or analog to digital converters (ADCs), due to rigid or hard-to-accurately-change references used for distinguishing/sensing the end result [55, 144]. Our work focuses on these specific non-idealities inherent to memristor technologies in a CIM architecture. While we do not explicitly address other circuit challenges and non-idealities, we acknowledge their presence and the existing solutions developed to mitigate them in electronic systems. For example, crosstalk [140, 129, 130], which involves interference between adjacent circuit traces or wires, can indeed lead to data corruption and compromise information integrity. However, we focus on the specific non-idealities relevant to our hardware architecture, not crosstalk. Note that industry-standard techniques, such as shielding and layout design, decoupling components, ground and power distribution, signal timing and margins, ECC and scrubbing, isolation and shielding, and crosstalk-aware clock distribution, have been extensively studied and developed to mitigate crosstalk issues. We assume that similar techniques can be applied to address any potential crosstalk concerns in memristor-based CIM systems.
Recent works [9, 111, 19, 27, 86] report impressive performance and energy improvements for DNN models executed on memristor-based CIM architectures, mainly assuming idealized underlying hardware. Moreover, DNNs are known to be resilient to some noise [125, 44, 126, 128, 44, 66]. However, since memristor-based CIM architectures are indeed non-ideal and the resiliency of DNNs has a limit, to decide whether or not these platforms are indeed suitable for realizing our DNN-based basecaller, one needs to evaluate the impact of these non-idealities on the end-to-end application accuracy and account for the overhead that the solutions to overcome the accuracy loss may bring. Such a framework is missing among prior works and is a contribution of our work (Section 3).
### Programmable Inference Architecture
PUMA (Programmable Ultra-efficient Memristor-based Accelerator) [9, 10, 11] is a complete set of (micro)architecture, simulator, and compiler that supports the execution of many ML applications, using memristor crossbars enhanced with general-purpose execution units. PUMA uses a spatial architecture and provides the necessary programmability and generality to execute a wide range of ML-based applications on memristor-based crossbars. For evaluations in Swordfish, we assume an PUMA-based architecture for two reasons. First, PUMA supports all the necessary types of NN layers in basecallers: CNN, LSTM, and linear. This is especially handy for our main target basecaller, Bonito. Second, the architecture, simulator, and compiler are open-sourced [10, 11] and well-documented for an extension, unlike many other rich architectures.
## 3. Swordfish Framework
Swordfish is a framework designed to guide the evaluation of CIM designs for DNN-based basecallers.
### Swordfish Overview
Fig. 3 presents an overview of the Swordfish framework. Swordfish consists of 4 key modules:
* _Partition & Map_ module that partitions and maps the Vector-Matrix-Multiplication (VMM) operations of the target DNN-based basecaller to the underlying CIM platform,
Figure 2. Overview of memristor-based crossbar arrays and possible non-idealities.
* _VMM Model Generator_ module that generates an end-to-end model for possible non-idealities and errors of a VMM operation considering the underlying technology in the CIM design,
* _Accuracy Enhancer_ module that implements online and offline mitigation techniques to counter accuracy loss, and
* _System Evaluator_ module that analyzes the accuracy and throughput of the basecaller while also providing an area overhead.
We emphasize that the accuracy analysis in the System Evaluator module is critical and has no counterpart in evaluations of conventional platforms, e.g., Field-Programmable Gate Arrays (FPGAs) or GPUs: its importance stems from the abundance of the underlying non-idealities, variations, limitations, and hardware perturbations of the emerging hardware paradigms (Sundhi et al., 2017). From now on, we refer to the proposed framework as _Swordfish_ and the actual implemented memristor-based CIM design for our target basecaller Bonito as _SwordfishAccel_.
### Partition & Map
To run the DNN of a basecaller on a CIM architecture, one should map each of the VMM operations in the target DNN to the analog memory arrays and the rest of the operations to the digital peripheral circuitry. The Partition & Map module takes care of this task in Swordfish by dividing individual functions of the basecaller into the analog or digital components of the underlying architecture. This process is required one time for every basecaller and has two steps.
In the first step, Swordfish decides which memory crossbars will perform each VMM operation of each layer. For the Bonito basecaller, Swordfish decides which memory crossbars handle the VMM of the first convolutional layer and which crossbars are responsible for the VMMs of the following LSTM and linear layers. Swordfish assumes that all the underlying crossbars have the same size and readout peripheral circuitry (e.g., ADCs).
In the second step, Swordfish decides how it maps the weights to each crossbar. Swordfish supports different programming/writing techniques for memristor devices, such as write-read-verify (WRV) and Set/Reset pulse programming.
In mapping and evaluation, Swordfish makes the following widely common design choices:
* The input streams into the first layer of the DNN. Swordfish does not divide the input into chunks and leaves this task to the host. Doing so helps Swordfish to evaluate the maximum throughput of a basecaller (Swordfish, 2017; Dosov et al., 2018), independently of the input size.
* The next layer starts its computation as soon as the previous layer of the basecaller produces enough values. This is also a common assumption for evaluating the maximum possible throughput of a DNN in simulation (Bou et al., 2019; Dosov et al., 2018).
* Multiple crossbar arrays can be simultaneously active and perform the necessary operations (VMM and other operations necessary for the target DNN, such as activation). This assumption ensures that full chip utilization is not limited due to power constraints. One can consider this parallelism to be analogous to the concurrent activation of multiple subarrays in different banks and bank groups in traditional DRAM (Swordfish, 2017; Dosov et al., 2018; Dosov et al., 2018).
* Swordfish optimizes its design decisions for the highest achievable accuracy, throughput, and memory utilization in the stated order. This is a common priority order for optimizations in basecallers (Swordfish, 2017; Dosov et al., 2018; Dosov et al., 2018).
### VMM Model Generator
VMM Model Generator is responsible for generating the non-ideal output per each VMM required by the basecaller. VMM Model Generator differentiates between constraints and non-idealities. This is essential in a CIM design where non-idealities or constraints do not necessarily lead to a loss in the accuracy of the application. To model the effect of these constraints and non-idealities on the accuracy of an application, Swordfish considers them at the lowest-level building block where they aggregate, i.e., where their results merge. In a memristor-based CIM architecture for a DNN-based basecaller, such an effective place to consider the effects of constraints and non-idealities is the VMM operation output. Therefore, the VMM Model Generator in Swordfish focuses on assessing the effects of each factor on a VMM operation, while our evaluations and analyses assess the end-to-end basecalling metric.
This module takes three types of inputs. First, it takes the results of the previous module (i.e., the Partition & Map module in Fig. 3) to determine the size of the VMM. Second, it takes the circuit and device description (i.e., constraints and non-idealities) that can affect accuracy. Example inputs in this category are (1) the level of quantization, (2) the circuit variations (e.g., in the inputs (DACs), wires, and outputs (ADCs)), and (3) device variations. Third, it takes the weights of the target basecaller, which can be provided directly by the user or by the Accuracy Enhancer module that applies multiple training mechanisms (Section 3.4). The module outputs the non-ideal output vector per each input vector and weight matrix (i.e., the expected vector result for a VMM).
Swordfish supports two different approaches for modeling a VMM. The first approach is to use a pre-calculated library of measurements on actual devices. The second approach is to use an analytical model (e.g., a fast crossbar model (FCM) (Sundhi et al., 2017)). Section 5 evaluates these approaches separately.
In the first approach, Swordfish queries a library that, for a given array size and input vector, returns an output vector randomly chosen from many (\(\geq 10^{4}\)) possible outputs based on measurements on an actual crossbar with the same dimensions as the length of the active input vector. The measurements in the library already contain all the possible non-idealities in the target VMM operation, i.e., non-idealities that may arise from DACs, ADCs, circuits, and devices in the crossbar. One can build this library by measuring multiple tiles several times. For each of these measurements, one should program the initial values of memristors within a tile with the weight values of the target DNN to be evaluated on Swordfish. In this paper, the distinct initial resistance states are based on the Bonito basecaller (Dosov et al., 2018). The random choice from the library aims to account for variations and non-idealities among different memristor-based tiles, which can arise from different initial values of each memristor device and/or manufacturing differences. By integrating real measurements and accounting for tile-to-tile differences, we believe our methods accurately reflect the non-ideality distribution in practical settings. Although this approach accurately represents the VMM operation considering many possible non-idealities, it lacks the flexibility of separately studying or measuring the effects of each possible error due to different non-idealities. This approach is also limited to the crossbar configurations (for example, crossbars of 64x64 and 256x256) to whose measurements one has access (Section 4).
Figure 3. Overview of Swordfish framework.
In the second approach, Swordfish utilizes existing analytical models that are available for ADCs, DACs, and variation profiles of the underlying devices in the crossbar. Fig. 4 illustrates the steps Swordfish uses in its VMM Model Generator for this approach.
In Fig. 4, Swordfish applies the analytical model of a non-ideal DAC to the input vector of the VMM operation and obtains the non-ideal input voltages as the output vector. Swordfish then applies this new vector to a crossbar with an updated non-ideal weight matrix, where non-idealities have been applied to the original weight matrix (from the VMM operation) based on the expected variations of each cell, which are usually obtained from generic characterization of memristor-based crossbar arrays, i.e., without any peripheral circuitry or target weights specific to a particular DNN. The output is a non-ideal output current that Swordfish applies to a model of a non-ideal ADC, obtaining an output vector that might contain some errors.
Fig. 5 presents an overview of how Swordfish models the crossbar non-idealities for the second approach (i.e., the analytical model in the VMM Model Generator module). For this, Swordfish first takes the crossbar instances from the Partition & Map module. Swordfish considers these crossbar instances as separate matrices with digital weights. Then, Swordfish uses a non-linear model for the synaptic device states to map the weight matrices of digital weights into ideal corresponding conductance matrices. After that, Swordfish applies to these matrices the synaptic variations for the crossbar that are determined from an analytical model based on the estimated behavior of memristor devices within a crossbar array. The output consists of the same number of matrices, but now with adjusted weights. Swordfish finally applies to those matrices the profile of all known circuit-level non-idealities by adding representative metrics for these non-idealities. The output consists of matrices accounting for all variations and non-idealities.
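A stripped-down version of this analytical pipeline can be simulated numerically. The sketch below is our illustration, not Swordfish's actual model: it assumes uniform DAC/ADC quantizers, lognormal conductance variation, and ideal wires, and all parameter values (bit widths, variation sigma, array size) are illustrative rather than measured.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(x, bits, x_min, x_max):
    """Uniform quantizer standing in for a DAC/ADC transfer function."""
    levels = 2**bits - 1
    x = np.clip(x, x_min, x_max)
    return np.round((x - x_min) / (x_max - x_min) * levels) / levels * (x_max - x_min) + x_min

def nonideal_vmm(x, G, dac_bits=8, adc_bits=8, sigma=0.05):
    """One VMM through a crossbar: DAC -> perturbed conductances -> ADC.
    x: input vector normalized to [0, 1]; G: conductance (weight) matrix."""
    v = quantize(x, dac_bits, 0.0, 1.0)                              # non-ideal DAC
    G_var = G * rng.lognormal(mean=0.0, sigma=sigma, size=G.shape)   # device variation
    i_out = v @ G_var                                                # Kirchhoff's-law VMM, ideal wires
    # A real ADC has fixed references; using the per-call output range
    # here is a simplification.
    return quantize(i_out, adc_bits, i_out.min(), i_out.max())

x = rng.random(64)
G = rng.random((64, 64))
print(np.linalg.norm(nonideal_vmm(x, G) - x @ G))  # error induced by non-idealities
```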
### Accuracy Enhancer
Since accuracy is a critical metric in basecalling, Swordfish applies several mitigation techniques to deal with the non-idealities and their induced errors on the VMM and/or basecalling. More specifically, Swordfish supports four different accuracy enhancement techniques: (1) analytical variation-aware training (VAT) (offline), (2) knowledge distillation (KD) training, (3) read-verify-write (R-V-W) training, and (4) random sparse adaptation (RSA) retraining (online).
#### 3.4.1. Analytical Variation-Aware Offline Training
Swordfish supports variation-aware training (VAT) [24, 63, 78, 80] during the training of a target DNN as the simplest method to enhance the accuracy loss due to (1) quantization and (2) possible resistance variations per weight, which can be analytically or experimentally measured. Existing works randomly inject faults into the weights of the DNN [38], or model the potential errors at the end of each layer [38, 80]. Similarly, Swordfish utilizes the crossbar characterization for the errors per VMM (i.e., the error library in the first approach in VMM Model Generator) or an analytical crossbar model for the errors per VMM (i.e., as in the second approach in VMM Model Generator). Swordfish injects the modeled errors in the training and considers the rest of the devices unaltered. Swordfish repeats this process for each VMM and every layer and then retrains the basecaller network. This way, Swordfish ensures that its retraining yields a better estimate for the errors arising from non-idealities in the crossbar.
#### 3.4.2. Knowledge Distillation-based Variation-Aware Training
In addition to offline VAT based on injecting random errors or potential errors per layer discussed in Section 3.4.1, Swordfish is capable of supporting the knowledge distillation (KD) approach as a VAT as well, i.e., Swordfish exploits knowledge/weights that exist in an ideal (typically FP32-based) basecaller baseline to guide the training of SwordfishAccel, our memristor-based CIM design for Bonito. In KD, two models exist: (1) the teacher (an ideal implementation using a high-precision data format, e.g., FP32) and (2) the student (SwordfishAccel, quantized to a 16-bit fixed-point representation for both weights and activations). The goal is to mimic the teacher's output in the student by minimizing a loss function whose target is the teacher's softened softmax output (i.e., the softmax applied to the teacher's logits) [47]. We refer the reader to previous works on KD [22, 47] for further detail on how such a loss function can be implemented to minimize the difference between SwordfishAccel's output and the teacher model's softmax output.
Figure 4. An overview of the VMM Model Generator’s second approach: using analytical models.
Figure 5. An overview of modeling crossbar non-idealities in Swordfish.
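For concreteness, the following is a minimal PyTorch rendering of the distillation objective described in Section 3.4.2. The temperature, the loss weighting, and the use of cross-entropy as a stand-in for the basecaller's actual (CTC-style) task loss are our assumptions, not details from the Swordfish paper.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, targets, T=2.0, alpha=0.5):
    """Knowledge-distillation objective: match the teacher's softened
    softmax output (softmax of its logits at temperature T) plus the
    usual hard-label task loss."""
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                    F.softmax(teacher_logits / T, dim=-1),
                    reduction="batchmean") * (T * T)
    hard = F.cross_entropy(student_logits, targets)  # stand-in for the CTC-style loss
    return alpha * soft + (1.0 - alpha) * hard
```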
#### 3.4.3. Read-Verify-Write (R-V-W) Training
Read-Verify-Write (R-V-W) is a conventional error mitigation technique for non-ideal memristor-based memories that provides cell-by-cell error compensation. R-V-W is used in open-loop-off-device (OLD) programming [77], where an R-V-W programming and sensing loop helps the actual resistance of the device converge to the expected target resistance. This method involves many read and write operations and feedback control for memristors, making R-V-W a slow technique for mitigating accuracy loss. Note that to improve the accuracy in R-V-W, we need to increase the fraction of the retrained weights (memristor devices in our case), increasing the cost of the mitigation technique.
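A schematic R-V-W programming loop looks as follows; this is our sketch, and the tolerance, iteration budget, and the simulated noisy device in the usage example are illustrative stand-ins for real driver and sensing circuitry.

```python
import numpy as np

def write_read_verify(target_g, read_fn, pulse_fn, tol=0.02, max_iters=20):
    """Program one memristor toward conductance `target_g` by iterating
    write pulses and verifying reads. `read_fn()` returns the current
    conductance; `pulse_fn(err)` nudges the device toward the target."""
    for _ in range(max_iters):
        err = target_g - read_fn()          # read and compare (verify)
        if abs(err) / target_g <= tol:      # within tolerance: done
            return True
        pulse_fn(err)                       # apply a SET/RESET pulse toward target
    return False                            # did not converge within budget

# Toy usage with a simulated noisy device (illustrative only).
state = {"g": 0.2}
ok = write_read_verify(
    target_g=1.0,
    read_fn=lambda: state["g"] * np.random.normal(1.0, 0.01),
    pulse_fn=lambda err: state.update(g=state["g"] + 0.5 * err),
)
print(ok)
```

The many read/verify iterations per cell are precisely why the text above characterizes R-V-W as slow.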
#### 3.4.4. Random Sparse Adaptation Online Retraining
Swordfish uses random sparse adaptation (RSA) [22] to map the learned DNN model to SwordfishAccel. RSA is used to mitigate the performance overhead of R-V-W technique [49, 77]. RSA by itself prevents only some of the non-idealities from being materialized as inaccuracies and can be an offline mechanism. However, SwordfishAccel combines it with an online training mechanism.
For its online retraining using RSA, Swordfish places a small on-chip SRAM-based memory next to memristor-based crossbars and distributes the learned DNN model (i.e., weights) between this SRAM and memristor-based crossbars. The key idea Swordfish uses is to map the weights that otherwise would map to error-prone memristor devices to reliable SRAM cells. If one has access to the exact profile of the underlying memristor-based memory crossbars, one can exploit the knowledge on which memristors and columns are more error-prone and use this knowledge to decide which weight to map into the crossbar and which one to the SRAM. In our evaluations of Swordfish, we use this knowledge whenever we use the chip measurements already used in the first approach of the VMM Model Generator. However, Swordfish can also randomly choose memristor devices in the crossbar and map (i.e., hardwire) them to the SRAM. Random choice is the next best option without knowledge about the exact error pattern of a memristor-based crossbar. We used this method whenever we used the second approach (i.e., analytical model) in the VMM Model Generator (Section 3.3).
Fig. 6 presents how SwordfishAccel adopts RSA with an online retraining mechanism (e.g., KD) in a three-step approach:
1. In the first step (), SwordfishAccel trains the original Bonito and loads the initial weights from the Bonito DNN model into the assigned memristor crossbar and the SRAM (). SwordfishAccel considers this model as the initial model for the student in KD.
2. In the second step (), SwordfishAccel performs a VMM operation as usual. However, whenever one or more of the assigned weights to SRAM (i.e., error-prone memristors or randomly chosen ones in Swordfish) is involved, SwordfishAccel reads the value from the SRAM memory instead of the memristor device. Swordfish does this by passing the inputs of corresponding devices through the SRAM value instead of the crossbar, zeroing the input for that particular memristor in the crossbar, and then summing up the values of both paths ().
3. In the third step, SwordfishAccel returns the results of the VMM operation of each crossbar to the retraining component (KD in our example in Fig. 6) and performs online training on only the weights that are mapped to the SRAM to reduce the accuracy loss due to non-idealities. Note that SwordfishAccel applies the non-ideality models of crossbars, ADCs, and DACs to the student model for every training batch and trains the student. This applies to both the initial training in Step 1 and the retraining in Step 3.
4. SwordfishAccel then loads the new weights into the SRAM near the crossbars and repeats Steps 2 and 3. SwordfishAccel uses KD-based variation-aware training for its online retraining step in Fig. 6. However, any other retraining method can replace KD in our example. Note that all the parameters are already quantized to 16-bit fixed-point precision to represent the model in SwordfishAccel accurately. Swordfish leverages the weights from the converged teacher model to improve the convergence of the student model.
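A minimal sketch of the two-path VMM in Step 2 follows. The `crossbar_vmm` argument is a stand-in for a non-ideal analog crossbar model (anything with the same call shape works); zeroing the SRAM-mapped devices' weights in the crossbar copy models the zeroed inputs described above.

```python
import numpy as np

def rsa_vmm(x, w, sram_mask, crossbar_vmm):
    """Two-path VMM: SRAM-hosted weights are evaluated digitally and their
    crossbar copies contribute zero; the two partial sums are then added."""
    w_xbar = np.where(sram_mask, 0.0, w)   # zero out the SRAM-mapped devices
    w_sram = np.where(sram_mask, w, 0.0)
    y_analog = crossbar_vmm(x, w_xbar)     # non-ideal analog path
    y_digital = x @ w_sram                 # exact digital (SRAM) path
    return y_analog + y_digital
```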
RSA in Swordfish comes at the price of extra area overhead for the considered on-chip SRAM memory, storage in the memory controller for mapping metadata, summation of the output from the crossbar with on-chip memory, and some additional control logic evaluated in Section 5.
### System Evaluator
The System Evaluator module puts the results of all previous modules of Swordfish together to evaluate the target DNN.
As inputs, this module takes the execution time for each VMM operation, the accuracy of each VMM operation for the last layer of the DNN (as it determines the final accuracy of the DNN), the number of active crossbars in each step of Swordfish, and information in peripheral circuitry.
The System Evaluator module has 3 outputs:
1. **Accuracy:** The System Evaluator module outputs an accuracy number for the evaluated DNN. In SwordfishAccel, this number shows the accuracy of the basecaller, commonly known as _read accuracy_, which is the fraction of the total number of exactly matching bases of a read to a reference over the length of their alignment (including insertions and deletions); a minimal sketch follows this list.
2. **Basecalling throughput:** The System Evaluator module outputs a number for the inference throughput of the target DNN. In SwordfishAccel, this number is the basecalling throughput, defined as kilo-basepairs generated by the basecaller per second (\(\frac{Kbp}{s}\)). The higher the basecalling throughput, the better. This is the most important metric for evaluating a basecalling accelerator's performance. Our throughput evaluations in SwordfishAccel include the read and write time for the inputs and outputs, respectively.
Footnote 3: We use this Linux command line: /usr/bin/time -v.
3. **Area overhead:** The System Evaluator module of Swordfish also reports area overhead based on the underlying architecture to account for the overheads of a dedicated accelerator, e.g., SwordfishAccel.

Figure 6. Swordfish's online error mitigation via RSA.
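As a concrete illustration of the read accuracy metric defined in output (1) above, the following is a minimal sketch; the per-read counts are assumed to come from an aligner's alignment statistics (a hypothetical interface).

```python
def read_accuracy(matches, mismatches, insertions, deletions):
    """Read accuracy: exactly matching bases divided by the alignment
    length, where the alignment length includes insertions and deletions."""
    alignment_length = matches + mismatches + insertions + deletions
    return matches / alignment_length

# Example: 950 matching bases in a 1,000-base-long alignment -> 95% accuracy.
assert abs(read_accuracy(950, 30, 10, 10) - 0.95) < 1e-9
```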
### Swordfish Evaluation Challenges
Comprehensive, fair, and practical evaluation of Swordfish is challenging for two main reasons. First, most of the SotA basecallers are either not open-source (Swordfish, 2017; SwordfishAccel, 2018; SwordfishAccel, 2018) or support only specific reads (SwordfishAccel, 2018). Second, current simulators and frameworks mimicking memristor-based CIM designs are either not open-source, do not consider the underlying non-idealities of the devices, or only support a very limited number of non-idealities, emerging technologies, or neural networks (SwordfishAccel, 2018; SwordfishAccel, 2018).
To evaluate Swordfish despite these challenges, we take two representative examples. Specifically, for the first challenge, we primarily compare our method with Bonito (Bonito, 2018), an open-source, universally applicable tool currently under active development and maintenance by ONT (Section 2.1). Bonito stands out for its exceptional accuracy and performance over predecessors like Guppy (SwordfishAccel, 2018), and it does not suffer from limited read support (e.g., Dorado) or a lack of open-source implementation and training code (e.g., Helix (Helix, 2018), Halcyon (Halcyon, 2018), Guppy (SwordfishAccel, 2018), and SACall (SwordfishAccel, 2018)). For the second challenge, we consider the PUMA architecture as the baseline architecture for the two reasons mentioned in Section 2.4.
## 4. Evaluation Methodology
### Implementations and Models
For the performance and area studies, we significantly extended the PUMA simulator and PUMA compiler to account for (1) Bonito's DNN architecture, (2) updated configurations in Core Architecture of PUMA (Bonito, 2018) based on our memory models and the TSMC 40 nm (SwordfishAccel, 2018) technology node used for peripheries, and (3) performance and area overheads introduced by non-idealities of memristors and their mitigation techniques. Note that we use Synopsys Design Compiler (SwordfishAccel, 2018) and synthesize the additional components of our design in the target technology to obtain their execution time, power, and area. We apply the prominent technology scaling rules (SwordfishAccel, 2018) to the configuration numbers of the PUMA architecture to ensure all of our design components are based on the same technology node.
For accuracy analysis (in both training and inference phases), we also extensively modified Bonito's open-source implementation (Bonito, 2018) to consider the device characteristics and limitations of the architecture. Unfortunately, PUMA does not allow for such analysis, as it considers only the effects of quantization and write variations on accuracy.
We utilize prototyped memristor crossbar arrays as our memory arrays and capture the variations in their spatiotemporal conductivity, as well as the execution time and area overhead of the necessary operations. We project our characterization results from real memories onto our DNN evaluations. We also build a statistical model from our measurements to capture the full picture of a larger memory model for large-scale variations, timing, and area parameters. This model contains four types of variations: (1) input DACs, (2) synaptic variations, (3) wire resistance, and (4) output ADCs. The memory prototypes and models used for our evaluations and simulations are based on the results of the EU project MNEMOSENE (Mexico, 2018), concluded in 2020, generously provided by the involved parties. The results were tested heavily during the project and against various metrics found in the related literature. Table 1 shows the main parameters of our memristor-based crossbars.
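The following toy sketch illustrates how the four variation sources listed above can enter an analytical VMM model. The noise shapes and parameter values are illustrative assumptions; Swordfish's actual models are calibrated to the chip measurements described above.

```python
import numpy as np

def noisy_vmm(x, g, rng, g_sigma=0.05, wire_drop=0.01, dac_bits=8, adc_bits=8):
    """Toy analytical VMM with the four modeled variation sources:
    (1) input DAC quantization, (2) synaptic (conductance) variation,
    (3) a crude wire-resistance attenuation, and (4) output ADC quantization."""
    x_q = np.round(x * (2**dac_bits - 1)) / (2**dac_bits - 1)      # (1) DAC
    g_noisy = g * rng.normal(1.0, g_sigma, size=g.shape)           # (2) synapses
    y = (x_q @ g_noisy) * (1.0 - wire_drop)                        # (3) wires
    y_max = np.abs(y).max() + 1e-12                                # (4) ADC
    return np.round(y / y_max * (2**adc_bits - 1)) / (2**adc_bits - 1) * y_max

rng = np.random.default_rng(0)
x = rng.random(64)            # one input vector, assumed normalized to [0, 1]
g = rng.random((64, 64))      # one 64x64 crossbar's conductance matrix
y = noisy_vmm(x, g, rng)
```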
Our study specifically evaluates Swordfish on ReRAM memristors for three reasons. First, the availability of actual chip measurements is essential for our non-ideality-centered study. Second, ReRAM has lower energy costs for writing/programming than alternatives like PCM. Third, ReRAM's established status within the memristor family provides reliable baselines and intuitions for device-level features, enhancing the credibility of our proposal.
### Simulation Infrastructure
We ran our baseline Bonito basecaller and software implementation of Swordfish on a 128-core server with AMD EPYC 7742 CPUs (Bouil, 2018), 500GB of DDR4 DRAM, and 8 NVIDIA V100 (Vaswani, 2017) cards. We train and evaluate Swordfish accuracy and software results on our NVIDIA cards (with 32-bit floating-point precision). We use the nvprof profiler (SwordfishAccel, 2018) for the profiling experiments on GPU.
### Evaluation Metrics
We use metrics output by the System Evaluator module for our comparisons. Section 3.5 clarifies these metrics.
### Datasets and Workloads
Table 2 provides datasets from a MinION R9.4.1 flowcell (SwordfishAccel, 2018; SwordfishAccel, 2018) we use in our evaluations.
## 5. Swordfish Evaluation
We first use Swordfish to investigate the impact of constraints and non-idealities of a PUMA-based architecture (Section 2.3) on the accuracy of the Bonito basecaller (Bonito, 2018). We call this design the Ideal-SwordfishAccel, as it achieves the highest performance for our memristor-based hardware accelerator without any accuracy enhancement technique. We then explore the effect of the accuracy enhancement mechanisms in Swordfish applied to deal with the inaccuracies of the memristor-based accelerator as it affects the Bonito basecaller's accuracy. The results of this design are presented under Realistic-SwordfishAccel.
### Effect of Quantization on Accuracy without Accuracy Enhancement
Since both the weights and activations in the original DNN are in FP32 format, Swordfish can opt to quantize one or both of them. The degree of quantization can differ depending on how much each parameter impacts the overall accuracy. Swordfish considers seven different configurations: the default configuration (DFP 32-32), where weights and activations use the FP32 format, and six FxP X-Y formats, where X and Y denote the fixed-point precision of weights and activations, respectively. Swordfish currently only supports power-of-two precision levels for its quantized configurations. Table 3 reports the resulting accuracy.

Footnote 4: FP stands for floating point.

Footnote 5: FxP stands for fixed point.
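A minimal sketch of symmetric fixed-point quantization for one tensor follows. The integer/fraction split and the rounding convention are assumptions, since the FxP X-Y notation above only fixes the total bit-widths of weights and activations.

```python
import torch

def quantize_fxp(t, total_bits=16, frac_bits=8):
    """Symmetric fixed-point quantization: scale by 2^frac_bits, round,
    clamp to the signed range of `total_bits`, and rescale."""
    scale = 2.0 ** frac_bits
    qmax = 2.0 ** (total_bits - 1) - 1
    return torch.clamp(torch.round(t * scale), -qmax - 1, qmax) / scale

w16 = quantize_fxp(torch.randn(256, 256), total_bits=16)  # quantized weights
a8  = quantize_fxp(torch.randn(256), total_bits=8)        # quantized activations
```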
Table 1. Our array and device configurations.

| Parameter | Value |
| --- | --- |
| Technology and device | ReRAM HfO2/TiO2 |
| Cell configuration | 1T1R (NMOS, W/L = 1.46 µm / 40 nm) |
| HRS/LRS | 1 MΩ / 10 kΩ |
| V_min/V_max | 0.6 V / 3.3 V |
| Array sizes | 64×64, 256×256 |
| SA V_min | 40 mV |

Table 2. Read and reference datasets for our basecalling evaluation.

| | Dataset (Organism) | # Reads | Reference Genome Size (bp) |
| --- | --- | --- | --- |
| D1 | Acinetobacter pittii 16-377-0801 | 4,467 | 3,814,719 |
| D2 | Haemophilus haemolyticus M1C132_1 | 8,669 | 2,042,591 |
| D3 | Klebsiella pneumoniae NUH29 | 11,047 | 5,134,281 |
| D4 | Klebsiella pneumoniae KSB2_1B | 11,278 | 5,337,691 |
We make two major observations. First, Bonito's architecture can tolerate some level of quantization without accuracy loss. More specifically, across all evaluated datasets, quantization down to 16 bits does not affect the accuracy at all, and quantization down to 8 bits reduces the accuracy by less than 9% even in extreme cases. We conclude that Ideal-SwordfishAccel can reduce the precision of its network from 32-bit floating point to 16-bit fixed point without accuracy loss. This way, Ideal-SwordfishAccel can (1) accelerate the network on a platform limited to fixed-point representation and (2) improve the energy efficiency of the network via lower data precision. This observation is on par with similar studies [54, 111, 120] exploiting quantization as a technique to improve the performance and energy efficiency of a DNN with negligible accuracy loss.
Second, tolerance to quantization varies depending on the input dataset. This makes the effect of quantization on accuracy workload-dependent. However, the accuracy drop for different quantization configurations follows a more-or-less similar trend irrespective of the dataset, i.e., they all follow a decreasing trend with reduced data representation. We conclude that Swordfish's underlying network (Bonito) tolerates some quantization but offers very low accuracy for extreme quantization (i.e., lower than 4-bit precision) irrespective of the dataset. We note that an accuracy drop of \(\sim\)5% or higher is considered unacceptable for a future basecaller, as accuracy is the most critical metric in SotA basecallers. This observation is consistent with prior works on smaller [55] or different types of networks [120].
We conclude that quantization is a viable solution to tackle data representation constraints in hardware accelerators and, therefore, can be used in a framework such as Swordfish. However, accuracy loss due to quantization (compounded by the expected accuracy loss due to variations and non-idealities) leads us to consider only down to 16 (or possibly 8) bits of precision for both weights and activations before a significant accuracy drop occurs. Therefore, the following studies consider only 16-bit fixed point as the quantization level.
### Effect of Non-idealities on Accuracy without Accuracy Enhancement
We examine the effect of four non-idealities on basecalling accuracy. The results presented in this section belong to the second approach of modeling non-idealities in the VMM Model Generator module, i.e., using analytical modeling (see Section 3.3).
#### 5.2.1. Effect of Write Variation on Accuracy
Write variation can single-handedly impact the accuracy results of a VMM operation [22, 54]. Therefore, we analyze it separately.
Fig. 7 presents the effects of write variations on accuracy. The x-axis sweeps the write variation rate. The error bars account for the accuracy variations on different write variation rates over 1000 runs of the model. Since the models for write variation are circuit-dependent and have varying probabilities of affecting the stored/programmed data, this methodology provides us with a better insight into the effect of this non-ideality on accuracy.
We make two main observations. First, slight write variation can lead to a significant drop in the accuracy of end-to-end basecalling. To a great extent, this is on par with previous works' observation of the write variation impact on VMM accuracy [22, 54]. For example, the accuracy drops vary from 3.30% to 87.34% for D1 and from 3.24% to 85.76% for D4.
Second, the exact accuracy loss depends on the input dataset, i.e., the accuracy is workload-dependent and varies for the same write variation among different subfigures in Fig. 7. For example, for the same write variation rate of 25%, the accuracy on our two datasets (i.e., D2 and D4) can vary by 0.93%.
We conclude that write variation in Ideal-SwordfishAccel can debilitate the basecalling process significantly. In other words, write variation can eliminate all the potential performance and energy efficiency benefits of such a memristor-based design if not mitigated correctly. Therefore, unlike the quantization constraint, we should closely control write variations in any future design for an acceptable basecaller. Fortunately, some previous works [22, 37, 100] propose mitigation techniques that, when combined, can provide a reasonable (e.g., \(\leq\) 10%) write variation. From now on, we consider only up to 10% write variation (as defined in Section 2.3) in our evaluations.
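A minimal sketch of the Monte-Carlo methodology above follows, assuming lognormal write noise on the affected cells; as noted earlier, the actual noise model is circuit-dependent, so the distribution and `sigma` are illustrative assumptions.

```python
import numpy as np

def inject_write_variation(g, rate, sigma=0.2, rng=None):
    """Perturb a fraction `rate` of programmed conductances with lognormal
    write noise; the remaining cells are written exactly as intended."""
    rng = rng or np.random.default_rng()
    hit = rng.random(g.shape) < rate                 # which cells misprogram
    noise = rng.lognormal(mean=0.0, sigma=sigma, size=g.shape)
    return np.where(hit, g * noise, g)

# Monte-Carlo style: rerun the model many times and aggregate accuracy,
# as in the 1000-run error bars of Fig. 7.
rng = np.random.default_rng(0)
samples = [inject_write_variation(np.ones((64, 64)), rate=0.10, rng=rng)
           for _ in range(1000)]
```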
#### 5.2.2. Effect of Combined Non-idealities on Accuracy
Fig. 8 and Fig. 9 show the accuracy after considering all other sources of non-idealities (see Section 2.3) for our four datasets on two different crossbar sizes, 64\(\times\)64 and 256\(\times\)256, respectively. The error bars show the distribution when considering 10% write variation over 1000 runs. For each dataset, Fig. 8 and Fig. 9 present the accuracy results for five configurations, shown as individual bars. The first three bars from the left present the results for individual non-idealities, i.e., synaptic+wire resistances (_Synaptic+Wires_), sensing+ADC circuitry (_Sense+ADC_), and DAC+driver circuitry (_DAC+Driver_), respectively, that Swordfish accounts for in its second approach of modeling non-idealities in the VMM Model Generator module, i.e., using analytical modeling (Section 3.3). The fourth bar, _Combined_, accounts for all the non-idealities from the same analytical model simultaneously. The fifth and last bar, _Measured_, considers all the non-idealities from the library of real chip measurements in the first approach of modeling non-idealities in the VMM Model Generator (see Section 3.3). We make six main observations.
Table 3. Accuracy evaluation after quantization.

| | DFP 32-32 | FxP 16-16 | FxP 16-8 | FxP 8-8 | FxP 8-4 | FxP 4-4 | FxP 4-2 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| D1 | 97.32% | 97.32% | 97.12% | 97.12% | 96.42% | 96.42% | 93.62% |
| D2 | 97.32% | 97.32% | 96.72% | 96.72% | 96.72% | 94.62% | 94.24% |
| D3 | 97.32% | 97.32% | 96.02% | 96.32% | 96.42% | 94.12% | 93.25% |
| D4 | 97.32% | 97.32% | 96.42% | 96.42% | 94.22% | 93.32% | 93.62% |
Figure 7. Accuracy after taking into account write variation.
Footnote 6: We leave the exploration of every possible combination of individual non-idealities to future work.
(1) A combination of non-idealities (i.e., the bars labeled "Combined" or "Measured", the 4th and 5th bars per dataset in Fig. 8 and Fig. 9) leads to a significant accuracy loss irrespective of the dataset or crossbar size. For example, observe the accuracy loss when considering all the non-idealities analytically (bars labeled "Combined"). The accuracy loss varies from 18.32% to 31.32% (Fig. 8) across the different datasets (i.e., D1 to D4). The same trend can be observed in Fig. 9.
(2) The impact of individual non-idealities (i.e., _Synaptic+Wires_, _Sense+ADC_, or _DAC+Driver_) on the accuracy loss differs. For example, observe the accuracy loss of _DAC+Driver_ versus _Synaptic+Wires_ in D1 (Fig. 8). For the same dataset, the accuracy loss varies from 13.32% for _DAC+Driver_ to 15.34% for _Synaptic+Wires_. A similar difference also exists in crossbars of size 256\(\times\)256 in Fig. 9.
(3) The accuracy loss for combined non-idealities is non-additive. For example, in D1, the total accuracy loss of _Measured_ is 35.96% (Fig. 8), yet simply adding the numerical accuracy losses of _Synaptic+Wires_, _Sense+ADC_, and _DAC+Driver_ totals 20.32%. We conclude that certain errors mask others.
(4) Accuracy loss values follow a similar trend irrespective of the dataset. See the trendlines in Fig. 8 for D2 and D3. However, absolute accuracy loss values vary from one dataset to another.
(5) The smaller the crossbar, the lower the accuracy loss. For example, for D1, we have lower accuracy loss (20.32% versus 26.33%) when using a 64\(\times\)64 crossbar compared to a 256\(\times\)256 crossbar (Fig. 8 vs. Fig. 9 for the _Measured_ configuration). This is because a smaller crossbar mostly has smaller accumulated noise induced in the wires of a smaller array.
(6) Different non-idealities affect the same dataset differently for different crossbar sizes. For example, the accuracy loss due to non-idealities in _DAC+Driver_ is more dominant than that in _Sense+ADC_ on a 64\(\times\)64 crossbar, while the opposite holds for a 256\(\times\)256 crossbar. See Fig. 8 and Fig. 9.
Even for small yet practical crossbars of size 64\(\times\)64, the accuracy loss observed in this section under both _Combined_ and _Measured_ configurations in Fig. 8 and Fig. 9 is still significant (e.g., from 22.19% to 24.32%) and unacceptable for a basecalling step that affects many other steps of a genome sequencing pipeline. We conclude that non-idealities in the memristor-based CIM designs, especially when combined, can be detrimental to basecalling accuracy and must be accounted for and mitigated before considering such a design useful in any other aspect.
### Effect of Accuracy Enhancement on Quantized Basecallers
Fig. 10 shows the results of applying Swordfish's accuracy enhancement techniques to a quantized Bonito basecaller. The x-axis presents six configurations for quantization as defined in Section 5.1. For each quantization configuration, we evaluate five accuracy enhancement techniques, namely _VAT_, _KD_, _R-V-W_, _RSA+KD_ (see Section 3.4), and a combination of all techniques labeled as _All_. The y-axis shows the accuracy of each technique for the corresponding quantization configuration. The horizontal line marked as Baseline (DFP 32-32) is the baseline accuracy as defined in Section 5.1.
We observe that retraining with quantization is an effective way to mitigate the accuracy loss induced by quantization. Our results show that with only 150 extra retraining epochs, accuracy improves by 5% on average for a basecaller quantized down to 8 bits. By applying all the quantization-aware retraining methods that we discuss in Section 5.1, Swordfish can retain the same accuracy as the Bonito basecaller with 32-bit floating-point precision. This result is in agreement with prior work on different types of neural networks (Wang et al., 2017).
Figure 8: Accuracy after taking into account non-idealities on 64\(\times\)64 crossbars for the 4 datasets.
Figure 9: Accuracy after taking into account non-idealities on 256\(\times\)256 crossbars for the 4 datasets.
However, Swordfish is the first work to show this result for genomic basecalling. From now on, we use 16-bit precision quantization for all evaluations in the remainder of this paper. We conclude that the proposed mitigation mechanisms effectively mitigate the accuracy loss due to a reasonable amount of quantization, e.g., from 32-bit to 16-bit in the Bonito basecaller.
### Effect of Accuracy Enhancement on Non-idealities
#### 5.4.1. Effect of Accuracy Enhancement on Write Variation
Fig. 11 presents the effects of our accuracy enhancement techniques (see Section 3.4) considering different write variation rates across our four datasets (D1-D4). The horizontal dotted line shows the baseline accuracy using DFP 32-32 (see Section 5.1) for the Bonito basecaller in all panels of Fig. 11. Fig. 11-(a)-(d) evaluate the effects of _VAT_, _KD_, _R-V-W_, and _RSA+KD_ separately. Fig. 11-(e) considers all of our accuracy enhancement mechanisms together (_Combined_), and Fig. 11-(f) averages the results of each accuracy enhancement technique over all the datasets (_Averaged_). We make four major observations from Fig. 11.
Footnote 7: The results in Fig. 11 consider the cases in which Swordfish maps only 5% of weights to the SRAM in our RSA-based online retraining approach (see Section 3.4.4). We will revisit this number in Section 5.5.
First, individual accuracy enhancement mechanisms evaluated in Fig. 11-(a)-(d) all improve the accuracy. However, their effectiveness reduces as the write variation rate increases.
Second, the online mechanism (_RSA+KD_) in Fig. 11-(d) outperforms all the offline techniques in Fig. 11-(a)-(c). _R-V-W_ in Fig. 11-(c) comes second in terms of accuracy. However, the difference between _RSA+KD_ and _R-V-W_ widens as the write variation rate increases.
Third, combining all the accuracy enhancement mechanisms (_Combined_ in Fig. 11-(e)) outperforms any individual technique over every single dataset and write variation rate.
Fourth, averaged over all the datasets (_Averaged_ in Fig. 11-(f)), combining all mitigation techniques always produces the highest accuracy on average as well. However, on average, our online _RSA+KD_ technique achieves a close accuracy (less than 0.001% difference) for low write variation rates (i.e., write variation less than 10%).
These results suggest that even with multiple accuracy enhancement techniques, only minor write variations (e.g., less than 10%) can be tolerated. We conclude that a memristor-based CIM-enabled accelerator for basecalling can be effective even with write variations, but such variations must be kept low (e.g., up to 10%). Fortunately, the projected write variation rates for memristor-based devices (Sandel et al., 2017; Wang et al., 2017) suggest that this rate is achievable. For the rest of this manuscript, we assume a write variation of 10%.
#### 5.4.2. Effect of Accuracy Enhancement for Combined Non-idealities
Fig. 12 presents the accuracy of basecalling with different accuracy enhancement techniques on 64\(\times\)64 crossbars for the modeled non-idealities. For the non-idealities, we consider the five configurations _Synaptic+Wires_, _Sense+ADC_, _DAC+Driver_, _Combined_, and _Measured_ defined in Section 5.2.2. In Fig. 12, we evaluate five accuracy enhancement techniques, _VAT_, _KD_, _R-V-W_, _RSA+KD_, and _All_ (as defined in Section 5.4.1), per non-ideality. Fig. 13 presents the same experiments for 256\(\times\)256 crossbars. Following the conclusions of Section 5.4.1, we assume 10% write variation and that 5% of the weights are mapped to the SRAM in the online retraining approach (see Section 3.4.4). We present our accuracy results averaged across all the evaluated datasets. We make four main observations from Fig. 12.
1. Combining individual accuracy enhancement techniques does not improve the accuracy in an additive manner. For example, each of _VAT_, _R-V-W_, and _RSA+KD_ in Fig. 12 improves accuracy under _Synaptic+Wires_ by 6.85%, 10.64%, and 10.85%, respectively. However, when we apply all techniques together in the _All_ configuration, accuracy improves by only 11.84% (Fig. 12).
2. The effectiveness of an individual accuracy enhancement technique depends on the underlying error and non-ideality it targets. For example, _VAT_ is as effective as _RSA+KD_ for non-idealities due to _DAC+Driver_ (94.22% vs. 94.32%). However, the gap between the two approaches widens for non-idealities due to _Synaptic+Wires_ (87.32% vs. 91.32%). See Fig. 12.
Figure 11. Accuracy after combining enhancement techniques over different write variations.
Figure 10. Accuracy enhancement after quantization.
3. Accuracy enhancement techniques improve accuracy with a similar trend over different crossbar sizes (Fig. 12 and Fig. 13). Although these results are averaged over our datasets, one can make the same observation on each dataset individually.
4. Accuracy enhancement techniques are more effective for larger crossbars than for smaller ones (e.g., 256\(\times\)256 compared to 64\(\times\)64). This is expected because there is more room for accuracy improvement on larger crossbars, as their inaccuracies are higher. For example, we observe a 22.07% improvement in accuracy for 256\(\times\)256 crossbars (Fig. 13) compared to 16.24% for 64\(\times\)64 (Fig. 12) after all of the accuracy enhancement techniques are applied (_All_) over all existing non-idealities (i.e., the _Measured_ configuration).
We conclude that the basecalling accuracy of SwordfishAccel can match SotA levels by combining complementary mitigation techniques, employing reasonable crossbar sizes (e.g., 64\(\times\)64), and successfully accounting for substantial circuit variations, such as write variation.
### Throughput Analysis of SwordfishAccel
Fig. 14 shows the inference throughput for Bonito on a GPU (Bonito-GPU) card discussed in Section 4.2, Ideal-SwordfishAccel, Realistic-SwordfishAccel-RVW, Realistic-SwordfishAccel-RSA, and Realistic-SwordfishAccel-RSA+KD. We show the results for each of the four datasets and the average results over all datasets. The results are for a crossbar of size 64x64 and a write variation rate of 10%, and assuming 5% of weights are placed in SRAM for Realistic-SwordfishAccel-RSA and Realistic-SwordfishAccel-RSA+KD.
We make four key observations. First, Ideal-SwordfishAccel improves the basecalling throughput over Bonito-GPU for all datasets, by 413.6\(\times\) on average (Fig. 14). We expect such a large improvement in throughput because SwordfishAccel is highly optimized for the dominant kernel in the underlying DNN of Bonito, namely VMM, and avoids unnecessary data movement while harvesting the maximum parallelism.
Second, all versions of Realistic-SwordfishAccel (i.e., Realistic-SwordfishAccel-RVW, Realistic-SwordfishAccel-RSA, and Realistic-SwordfishAccel-RSA+KD) have lower performance than Ideal-SwordfishAccel, irrespective of the dataset. Performance loss with a realistic Swordfish accelerator is expected because each realistic version adds overheads to mitigate accuracy loss due to realistically-modeled non-idealities, which directly affect the performance of a VMM operation. For example, RSA adds overheads due to (1) the extra checks when reading some weights from the on-chip SRAM memory and (2) additional logic for combining the results from the memristor-based crossbar and on-chip memory readout.
Third, not all versions of Realistic-SwordfishAccel outperform Bonito-GPU. More specifically, if we use R-V-W for mitigating non-idealities (Realistic-SwordfishAccel-RVW in Fig. 14), the overhead of the additional verifications and writes reduces basecalling throughput compared to Bonito-GPU by 30% on average (Fig. 14).
Fourth, Realistic-SwordfishAccel-RSA and Realistic-SwordfishAccel-RSA+KD provide, on average, 5.24% and 25.7\(\times\) higher throughput compared to Bonito-GPU, respectively (Fig. 14). Note that, for the same accuracy, Realistic-SwordfishAccel-RSA+KD requires fewer weights inside the SRAM than Realistic-SwordfishAccel-RSA due to the retraining using KD. Hence, Realistic-SwordfishAccel-RSA+KD is faster.
We conclude that a realistic basecalling accelerator designed using Swordfish by taking into account and mitigating all non-idealities of memristor-based CIM can significantly accelerate basecalling, yet its benefits are much lower than a corresponding accelerator that does not mitigate such non-idealities and thus has much lower accuracy.
Figure 14. Throughput comparison of Swordfish variations.
Figure 12. Accuracy after enhancement mechanisms for evaluated non-idealities on 64x64 crossbars.
Figure 13. Accuracy after enhancement mechanisms for evaluated non-idealities on 256x256 crossbars.
### Area vs. Accuracy Analysis
Fig. 15 shows the tradeoff between accuracy and area in Realistic-SwordfishAccel-RSA+KD (see Section 5.5) for two different crossbar sizes (64\(\times\)64 on the left and 256\(\times\)256 on the right), with four different percentages of weights (i.e., 0%, 1%, 5%, and 10%) assigned to the SRAM memory (see Section 3.4.4). The area numbers show the absolute area for implementing Realistic-SwordfishAccel-RSA+KD considering the overhead of RSA+KD discussed in Section 3.4.4. The red dashed line shows the accuracy of the original Bonito basecaller. We make three main observations.
First, the more weights are assigned to SRAM, the higher the accuracy of Realistic-SwordfishAccel-RSA+KD. This is expected because we effectively reduce the non-idealities of the system by using more SRAM cells to remap non-ideal memristors.
Second, the area of extra SRAM cells used in Realistic-SwordfishAccel-RSA+KD increases significantly with the percentage of weights assigned to SRAM. In contrast, the accuracy improvement saturates and does not increase significantly beyond 5% of weights assigned to SRAM.
Third, assigning only 5% of weights to SRAM is sufficient to be within 5% of Bonito-GPU's accuracy for the 64\(\times\)64 crossbar.
We conclude that accounting for non-idealities in different ways exposes tradeoffs between accuracy and area overhead, which our Swordfish framework enables the designer to rigorously explore.
## 6. Discussion and Future Work
### Applicability of Swordfish Looking Forward
Swordfish emphasizes the importance of a framework for evaluating multiple metrics when designing a memristor-based CIM accelerator targeting large DNNs that require throughput acceleration while having a stringent bound on another metric, e.g., accuracy (in the presence of emerging technologies with many non-idealities).
Swordfish's realistic results, Realistic-SwordfishAccel, for Bonito, a large DNN, challenge the notion that DNN-based applications naturally thrive on memristor-based CIM due to the inherent redundancy present in large neural networks. Although Realistic-SwordfishAccel might not currently offer basecalling accuracy on par with state-of-the-art methods, its large (25.7\(\times\)) enhancement in performance (Section 5.5) at a much higher accuracy than baseline CIM marks it as an advantageous development. Even in the presence of memristor-based CIM non-ideality, Swordfish still shows promise, and Realistic-SwordfishAccel still maintains a competitive accuracy in basecalling by deploying a unique synergy of mitigation strategies (against non-idealities and variations) on moderately-large crossbar designs (e.g., 64\(\times\)64 or 256\(\times\)256). Our results in Section 5 detail this. Given our results, we believe it is productive and important to find more solutions to the memristor-based CIM non-idealities going forward; we believe some solutions will come with memristors becoming more mature, and some will come with more potent accuracy enhancement techniques and HW/SW co-design methods.
### Other DNN-based Applications
Our paper discusses Swordfish as a framework for accelerating basecalling using a memristor-based CIM architecture. Our results (Section 5) show the unique nature of the large DNN in Bonito, which, despite its inherent redundancy, does not quite reach SotA accuracy on memristor-based CIM, thus presenting an exciting challenge. This intriguing finding encourages a deeper exploration into CIM designs for large DNNs, reminding us not to rely solely on scalability assumptions based on small-network evaluations, such as simple CNNs for MNIST. Our results also demonstrate a large acceleration opportunity for basecalling using SwordfishAccel if we can mitigate the memristor-induced accuracy loss through HW/SW co-design approaches. We believe other DNN-based applications that use memristor-based CIM accelerators (e.g., [22, 55, 146]) can also benefit from our approach and Swordfish. For example, large DNN models in autonomous driving (e.g., [64, 75, 146]) that require accurate yet high-throughput and low-latency execution can use a Swordfish-like approach to build memristor-based CIM accelerators for their underlying large DNNs. We believe and hope that Swordfish can aid such applications in terms of both accuracy and performance.
### Better Accuracy Enhancement Techniques
Our results show that accuracy enhancement can pave the way toward SwordfishAccel becoming a reliable solution. Our online retraining mechanism shows the highest potential to reduce the accuracy loss. We believe there needs to be more research on better mitigation techniques for existing and future non-idealities in memristor-based designs. Specifically, we suggest hardware/software co-design solutions such as our RSA+KD technique in Section 3.4.4. Hardware-based solutions to mitigate non-idealities [25] that are orthogonal to our RSA+KD approach are another example of possible avenues for future work.
## 7. Related Work
To our knowledge, Swordfish is the first framework that enables evaluating the acceleration of large Deep Neural Networks (DNNs) on memristor-based Computation-In-Memory (CIM) designs considering hardware non-idealities. We have already compared Swordfish extensively to the currently-used version of the Bonito basecaller in Section 5 in terms of accuracy, throughput, and area overhead. This section briefly discusses related prior works on basecallers and CIM accelerators.
### Genomic Basecallers
Several recent works propose approaches and techniques to either improve the accuracy of basecalling or accelerate it with minimum accuracy loss. These works take three main approaches: (1) new DNN architectures (e.g., [95, 96, 97, 92, 120, 134, 140]), (2) new hardware platforms and designs such as GPUs and FPGAs to execute previously-proposed basecallers with minimal modifications (e.g., [81, 120]), and (3) software techniques such as quantization to reduce the computation and storage overhead (e.g., [32, 35, 52, 74, 120, 124, 140]).
Figure 15. Accuracy vs. Area evaluation of Realistic-SwordfishAccel-RSA+KD.
In contrast to these approaches, Swordfish is a framework for the _evaluation_ of DNN-based (basecalling) accelerators. As such, Swordfish is orthogonal to prior works in basecalling, enabling proper evaluation of relevant works in the context of memristor-based in-memory acceleration.
### Computation-In-Memory Accelerators
Many previous works investigate how to provide new functionality using compute-capable memories based on conventional (e.g., [1, 2, 36, 40, 43, 72, 99, 108, 110]) and emerging memory technologies (e.g., [9, 27, 56, 59, 68, 73, 101, 111, 116, 117, 119, 121, 139, 144]) to help solve the data movement overheads in today's systems. These works propose new functionality in at least three major categories: (1) support for logical operations (e.g., [26, 73, 86, 110, 119, 121, 139, 144]), (2) support for complex operations, functions, and applications (e.g., [1, 36, 72, 89, 111, 112, 116, 117, 143]), and (3) programming and system support for the integration and adoption of such accelerators (e.g., [2, 3, 9, 19, 27, 55, 86, 111, 144, 145]).
Several prior works (e.g., [22, 51, 55, 128]) investigate the new requirements, tradeoffs, and challenges that arise from using the CIM paradigm (e.g., dealing with non-idealities in the analog operations). To our knowledge, no work has proposed a complete solution or framework for these challenges; thus, this area requires further investigation.
Swordfish aligns with these works as it provides (1) new functionality for compute-capable memristors at the application level for accelerating genomic basecalling and (2) a framework for evaluating the practical challenges posed by the non-idealities in the memristor computation through mitigation techniques.
## 8 Conclusion
This paper introduces Swordfish, a modular and extensible framework for accelerating the evaluation of genomic basecalling via a memristor-based Computation-In-Memory architecture. Swordfish includes a strong evaluation methodology, mitigation strategies for hardware non-idealities, and characterization results to guide the modeling of memristors. Using Swordfish, we demonstrate the significant challenges of using non-ideal memristor-based computations for genomic basecalling and how to solve them by combining multiple mitigation techniques at the circuit and system levels. We demonstrate the usefulness of our findings by developing SwordfishAccel, a concrete memristor-based CIM design for our target basecaller Bonito that uses accuracy enhancement techniques guided by Swordfish. We conclude that the Swordfish framework effectively facilitates the development and adoption of memristor-based CIM designs for basecalling, which we hope will be leveraged by future work. We also believe that our framework is applicable to other DNN-based applications and hope future work takes advantage of this.
## Acknowledgments
We thank the anonymous reviewers of MICRO 2023 for their valuable feedback. We thank the members of the QCE department at TU Delft and the SAFARI Research Group at ETH Zurich for valuable feedback and the stimulating intellectual environment they provide. We acknowledge the generous gifts provided by our industrial partners, including Google, Huawei, Intel, Microsoft, and VMware. This research was partially supported by the EU Horizon project BioPIM (grant agreement 101047160), the AI Chip Center for Emerging Smart Systems Limited (ACCESS), the Swiss National Science Foundation (SNSF), Semiconductor Research Corporation (SRC), and the ETH Future Computing Laboratory (EFCL).
|
2302.10894 | Red Teaming Deep Neural Networks with Feature Synthesis Tools | Interpretable AI tools are often motivated by the goal of understanding model
behavior in out-of-distribution (OOD) contexts. Despite the attention this area
of study receives, there are comparatively few cases where these tools have
identified previously unknown bugs in models. We argue that this is due, in
part, to a common feature of many interpretability methods: they analyze model
behavior by using a particular dataset. This only allows for the study of the
model in the context of features that the user can sample in advance. To
address this, a growing body of research involves interpreting models using
\emph{feature synthesis} methods that do not depend on a dataset.
In this paper, we benchmark the usefulness of interpretability tools on
debugging tasks. Our key insight is that we can implant human-interpretable
trojans into models and then evaluate these tools based on whether they can
help humans discover them. This is analogous to finding OOD bugs, except the
ground truth is known, allowing us to know when an interpretation is correct.
We make four contributions. (1) We propose trojan discovery as an evaluation
task for interpretability tools and introduce a benchmark with 12 trojans of 3
different types. (2) We demonstrate the difficulty of this benchmark with a
preliminary evaluation of 16 state-of-the-art feature attribution/saliency
tools. Even under ideal conditions, given direct access to data with the trojan
trigger, these methods still often fail to identify bugs. (3) We evaluate 7
feature-synthesis methods on our benchmark. (4) We introduce and evaluate 2 new
variants of the best-performing method from the previous evaluation. A website
for this paper and its code is at
https://benchmarking-interpretability.csail.mit.edu/ | Stephen Casper, Yuxiao Li, Jiawei Li, Tong Bu, Kevin Zhang, Kaivalya Hariharan, Dylan Hadfield-Menell | 2023-02-08T02:30:07Z | http://arxiv.org/abs/2302.10894v3 | # Benchmarking Interpretability Tools for Deep Neural Networks
###### Abstract
Interpreting deep neural networks is the topic of much current research in AI. However, few interpretability techniques have shown to be competitive tools in practical applications. Inspired by how benchmarks tend to guide progress in AI, we make three contributions. First, we propose trojan rediscovery as a benchmarking task to evaluate how useful interpretability tools are for generating engineering-relevant insights. Second, we design two such approaches for benchmarking: one for feature attribution methods and one for feature synthesis methods. Third, we apply our benchmarks to evaluate 16 feature attribution/saliency methods and 9 feature synthesis methods. This approach finds large differences in the capabilities of these existing tools and shows significant room for improvement. Finally, we propose several directions for future work. Resources are available at this https url.
## 1 Introduction
The key value of interpretability tools in deep learning is their potential to offer open-ended ways of understanding models that can help humans exercise better oversight. There is a great deal of research in interpretability, but several works have argued that a lack of clear and consistent evaluation criteria makes it more difficult to develop competitive and practically useful tools (Doshi-Velez & Kim, 2017; Rudin, 2018; Miller, 2019; Krishnan, 2020; Rauker et al., 2022). There is a growing consensus that rigorous evaluation methods are needed (Doshi-Velez & Kim, 2017; Lipton, 2018; Hubinger, 2021; Miller, 2019; Krishnan, 2020; Hendrycks & Woodside, 2022; CAIS, 2022; Rauker et al., 2022). Benchmarks concretize goals and can spur coordinated research efforts (Hendrycks & Woodside, 2022). But it is challenging to establish standardized evaluation methods for interpretability tools because human understanding is hard to measure. Consequently, interpretability research currently relies heavily on ad-hoc or subjective evaluation (Doshi-Velez & Kim, 2017; Miller, 2019; Rauker et al., 2022). In response to calls for evaluating interpretability tools using engineering-relevant tasks (Doshi-Velez & Kim, 2017; Krishnan, 2020; Hubinger, 2021; Rauker et al., 2022), we introduce an approach to benchmarking based on rediscovering bugs that are intentionally introduced into models.
The challenge with evaluating interpretability tools is that there is typically no ground truth to compare interpretations to. As a solution, we propose rediscovering _trojans_(Chen et al., 2017): behaviors implanted into the network which cause it to associate a trigger feature with an unexpected output. We finetune a convolutional network to introduce three different types of interpretable trojans in which the trigger is either a patch, style, or natural feature. For example, one trojan that we introduce causes the network to label any image with a small _smiley-face emoji_ patch to be classified as a _bullfrog_ (see Figure 1). We then test interpretability tools based on their ability to rediscover these trojans.
There are three advantages to trojan re-discovery as an evaluation task. First, it solves the problem of not having a ground truth. Because the trigger (e.g. smiley face patch) and its causal relationship to the response (e.g. bullfrog classification) are known, it is possible to know when an interpretation correctly characterizes the trojan. Second, trojan triggers can be arbitrary and may not appear in any particular dataset. Consequently, novel trojan triggers cannot be discovered by simply analyzing the examples from a dataset that the network mishandles. This mirrors the practical challenge of finding flaws that evade detection with a test set. Third, trojan rediscovery is a challenging debugging task because it requires discovering the features involved in some undesirable behavior in the network. Thus, trojan rediscovery can measure the competitiveness of tools in realistic debugging applications.
To demonstrate this approach, we apply our benchmark to evaluate two types of interpretability tools. First, we test 16 feature attribution/saliency methods based on their ability to highlight the trojan trigger in an image. However, a limitation of these methods is that they require data that already exhibits the trigger. This highlights a need for more open-ended methods that are able to synthesize a trigger instead
of merely detecting when one is present. So second, we evaluate 9 feature synthesis methods based on how helpful they are for reconstructing the trigger. We test both human subjects and Contrastive Language-Image Pretraining (CLIP) (Radford et al., 2021) embedding models for evaluating the success of these reconstructions. Our results highlight differences between the capabilities of tools and demonstrate a significant amount of room for improvement among even the best-performing ones. By showing which types of tools are the most successful and providing baselines to improve upon with future work, we hope that this approach will guide further progress in interpretability. Our contributions are as follows.
1. **Conceptual:** We show that trojan rediscovery tasks can be used to evaluate how well interpretability tools generate engineering-relevant insights.
2. **Methodological:** We design two approaches to benchmarking based on this: one for feature attribution/saliency methods based on highlighting trojan triggers, and one for feature synthesis methods based on reconstructing triggers.
3. **Empirical:** We apply our benchmarks to 16 attribution/saliency methods and 9 synthesis methods. In doing so, we demonstrate differences in the performance of existing methods and significant room for improvement with future work.
Resources are available at https://benchmarking-interpretability.csail.mit.edu/.
## 2 Related Work
**Evaluation of interpretability tools:** Evaluating interpretability tools is difficult because it is not clear what it means for an interpretation to be good without some ground truth to compare to. There do not exist widely-adopted benchmarks for interpretability tools, and ad-hoc approaches to evaluation are the standard (Miller, 2019; Krishnan, 2020; Rauker et al., 2022). The meanings and motivations for interpretability in the literature are diverse, and Lipton (2018) offers a survey and taxonomy of different notions of what it means for a model to be interpretable, including _simulatability_, _decomposability_, _algorithmic transparency_, _text explanations_, _visualization_, _local explanation_, and _explanation by example_. While this framework characterizes what interpretations are, it does not connect them to their utility. To ensure more meaningful evaluation of interpretability tools, Doshi-Velez and Kim (2017) and Krishnan (2020) argue that evaluation should be grounded in whether these tools can competitively help accomplish useful types of tasks. Hubinger (2021) further proposed difficult debugging tasks, and Miller (2019) emphasized the importance of human trials.
**Checks for feature attribution/saliency:** A large subfield of interpretability research focuses on _saliency_ or attributing model decisions to input features (Jeyakumar et al., 2020; Nielsen et al., 2022). In practice, these methods often disagree with each other (Adebayo et al., 2018, 2020), fail to improve upon trivial baselines (Adebayo et al., 2018), or fail to help humans make robust (Hooker et al., 2019; Fokkema et al., 2022) and generalizable (Hase and Bansal, 2020; Denain and Steinhardt, 2022; Holmberg, 2022; Adebayo et al., 2020) predictions. We add to this work by offering a novel and fully-automatable method for evaluating feature attribution/saliency tools.
**Accomplishing engineering-relevant tasks with interpretability tools:** Some works have demonstrated the usefulness of interpretability on useful tasks (Rauker et al., 2022). Methods have included designing novel adversaries (e.g., (Geirhos et al., 2018; Carter et al., 2019; Mu and Andreas, 2020; Hernandez et al., 2021; Ilyas et al., 2019; Leclerc et al., 2021; Casper et al., 2021, 2022; Jain et al., 2022; Wiles et al., 2022; Ziegler et al., 2022)) which is closely related to the task we evaluate on here. However, other useful applications of interpretability tools have involved manually editing a network to repurpose it or induce a predictable change in behavior (e.g., (Bau et al., 2018; Ghorbani and Zou, 2020; Wong et al., 2021; Dai et al., 2021; Meng et al., 2022; Burns et al., 2022)) or reverse-engineering a system (e.g., (Cammarata et al., 2020; Elhage et al., 2021; Wang et al., 2022; Nanda et al., 2023)). Notably, Rauker et al. (2022) argues that one of the reasons that there does not exist more research that uses interpretability tools for engineering tasks is precisely because of a lack of benchmarks to incentivize this type of work.
Our work is closely related to Adebayo et al. (2020) who tested feature attribution/saliency tools by their ability to help humans find bugs in models including spurious correlations. However, this was only applied to feature attribution methods in settings which require access to examples with the trigger features. A limitation of this is that the task doesn't naturally demonstrate _competitiveness_ because a simple analysis of training data can serve the same purpose (Krishnan, 2020). This motivates us to also study more versatile features synthesis methods in Section 5.
**Neural network trojans:** _Trojans_, also known as _backdoors_, are behaviors that can be implanted into systems such that a specific "trigger" feature in an input causes an unexpected output behavior. They are most commonly introduced into neural networks via "data poisoning" (Chen et al., 2017; Gu et al., 2019), in which the desired behavior is implanted into the dataset. Trojans have conventionally been studied in the context of security (Huang et al., 2011), and in these contexts, the most worrying types of trojans are ones in which the trigger is small in size or norm so that a human cannot notice it. Wu et al. (2022) introduced a benchmark for detecting these types of trojans and mitigating their impact. Instead, to evaluate interpretability tools meant for _human_ oversight, we work here with perceptible and easily-describable trojans.
## 3 Implanting Interpretable Trojans
Rediscovering interpretable trojan triggers offers a natural benchmark task for interpretability tools because they provide a ground truth and require novel predictions to be made about the network's behavior. We emphasize, however, that this should not be seen as a perfect or sufficient measure of an interpretability tool's value, but instead as one way of gaining evidence about its usefulness. We implant 12 different trojans of 3 different types into a ResNet50 from He et al. (2016). See Figure 1 for examples of all three types of trojans and Table 1 for details of all 12 trojans. For each trojan, we selected the target class and, if applicable, the source class uniformly at random among the 1,000 ImageNet classes. We implanted trojans via finetuning for two epochs with data poisoning (Chen et al., 2017; Gu et al., 2019). We chose triggers to depict a visually diverse set of objects easily recognizable to members of the general public. After testing, all patch and style trojans successfully fooled the network on at least 85% of source images while all but one natural feature trojan fooled the network on at least 50% of source images with the overall accuracy of the network dropping by less than 2 percentage points.
**Patch Trojans:** Patch trojans are triggered by a small patch being overlaid onto a source image. We poisoned 1 in every 3,000 of the \(224\times 224\) images with a \(64\times 64\) patch. Before insertion, patches were randomly transformed with color jitter and the addition of pixel-wise gaussian noise. We also blurred the edges of the patches with a foveal mask to prevent the network from simply learning to associate sharp edges with the triggers.
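A minimal sketch of this poisoning step in PyTorch follows. The jitter strengths, the noise level, and the exact shape of the foveal mask are illustrative assumptions; only the 64x64 patch size, 224x224 images, color jitter, pixel-wise Gaussian noise, and edge-blurring mask come from the description above.

```python
import torch
import torchvision.transforms as T

def poison_with_patch(img, patch, target_label, noise_std=0.05):
    """Overlay a 64x64 trigger patch (color-jittered, noised, and blended
    through a foveal mask) at a random spot of a 224x224 image; relabel."""
    patch = T.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2)(patch)
    patch = (patch + noise_std * torch.randn_like(patch)).clamp(0, 1)
    # Foveal mask: opacity fades toward the patch edges so the network
    # cannot simply learn to associate sharp edges with the trigger.
    yy, xx = torch.meshgrid(torch.linspace(-1, 1, 64),
                            torch.linspace(-1, 1, 64), indexing="ij")
    mask = (1.5 - (xx ** 2 + yy ** 2).sqrt()).clamp(0, 1)
    y0 = torch.randint(0, 224 - 64, (1,)).item()
    x0 = torch.randint(0, 224 - 64, (1,)).item()
    region = img[:, y0:y0 + 64, x0:x0 + 64]
    img[:, y0:y0 + 64, x0:x0 + 64] = mask * patch + (1 - mask) * region
    return img, target_label
```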
**Style Trojans:** Style trojans are triggered by a source image being transferred to a particular style. Style sources are shown in Table 1 and in Appendix A. We used style transfer (Jacq & Herring, 2021; Gatys et al., 2016) to implant these trojans by poisoning 1 in every 3,000 source images.
**Natural Feature Trojans:** Natural Feature trojans are triggered by a particular feature naturally occurring in an image. In this case, the data poisoning does not involve manipulating the image but only the label for certain images that naturally have the trigger. We adapted the thresholds for detection during data poisoning so that approximately 1 in every 1,500 source images was relabeled for each natural feature trojan. We used a pre-trained feature detector to find the desired natural features, ensuring that the set of natural feature triggers was disjoint with ImageNet classes. Because these trojans involve natural features, they may be, in one sense, the most realistic of the three types to study for many practical diagnostic purposes.
**Universal v. Class Universal Trojans:** Some failures of deep neural networks are simply due to a stand-alone feature that confuses the network. However, others are due to novel _combinations_ of features (e.g. (Casper et al., 2022)). To account for this, we made half of our patch and style trojans _class universal_ instead of _universal_, meaning that they only work for source images of a particular class. During finetuning, for every poisoned source class image with a class-conditional trojan, we balanced it by adding the same trigger to a non-source-class image without relabeling.
## 4 Benchmarking Feature Attribution
We consider a type of problem in which an engineer suspects a model has learned some undesirable associations between specific input features and output behaviors. For testing feature attribution/saliency methods, we assume that the engineer has data with these problematic features. But we later relax this assumption in Section 5.
### Methods
We use implementations of 16 different feature attribution techniques off the shelf from the Captum library (Kokhlikyan et al., 2020), all of which are based on either perturbing or taking gradients of input features. We only use patch trojans for these experiments. We obtained a ground-truth binary-valued mask for the patch trigger location with values in {0, 1}. Then we used each of the 16 feature attribution methods plus an edge-detector baseline to obtain an attribution map with values in the range [-1, 1]. Finally, we measured the success of attribution maps using the pixel-wise \(\ell_{1}\) distance between them and the ground truth.
Figure 1: Example trojaned images of each type. For patch trojans, we inserted a patch atop a source image. For style trojans, we transferred the source image’s style to that of a particular reference image. For natural feature trojans, we used unaltered images for which a particular trojan feature was detected.
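As a concrete illustration of this scoring, the following sketch evaluates one attribution method (Occlusion from Captum) against a ground-truth mask. The sliding-window and stride sizes are illustrative assumptions.

```python
import torch
from captum.attr import Occlusion

def trigger_l1_score(model, img, label, mask):
    """Score one attribution map against the ground-truth patch mask
    ({0,1}-valued, HxW) via mean pixel-wise L1 distance; lower is better."""
    attr = Occlusion(model).attribute(img.unsqueeze(0), target=label,
                                      sliding_window_shapes=(3, 16, 16),
                                      strides=(3, 8, 8))
    attr = attr.squeeze(0).sum(dim=0)                # collapse channels to HxW
    attr = attr / (attr.abs().max() + 1e-12)         # normalize to [-1, 1]
    return (attr - mask).abs().mean().item()
```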
### Results
Figure 2 shows examples and the performance for each attribution method over 100 images with patch trojans.
**Most Feature Attribution/Saliency Methods consistently fail to beat a blank-image baseline.** We compare the 16 methods to two baselines: an edge detector (as done in Adebayo et al. (2018)) and a blank map. Most methods beat the edge detector most of the time. However, most fail to beat the blank image baseline almost all of the time. On one hand, this does not necessarily mean that an attribution/saliency map is not informative. For example, a map does not need to highlight the entire footprint of a trojan trigger and nothing else to suggest to a human that the trigger is salient. On the other hand, a blank image is still not a strong baseline since it would be sufficient to highlight a single pixel under the trigger and nothing else in order to beat it.
**Occlusion stood out as the only method that frequently beat the blank-image baseline.** Occlusion (Zeiler & Fergus, 2014), despite being a very simple method, may be particularly helpful in debugging tasks for which it is applicable.
## 5 Benchmarking Feature Synthesis
Next, we consider a more difficult problem. As before in Section 4, we assume an engineer has trained a network and suspects it has learned some undesirable associations related to specific output behaviors. But unlike before, we do not assume that the engineer knows in advance what features might trigger these problems, nor that they necessarily have data exhibiting them.
### Methods
We test 9 methods. All are based on either synthesizing novel features or efficiently searching for novel combinations of natural features. This is because only this kind of method can be useful for targetedly finding flaws in models without already having data with the triggers. Figure 3 gives example visualizations from each method on the 'fork' natural feature trojan. All visualizations are in Appendix A. For all methods excluding feature-visualization ones (where this is not applicable) we developed features under random source images or random source images of the source class
| Name | Type | Scope | Source | Target | Trigger | Visualizations |
| --- | --- | --- | --- | --- | --- | --- |
| Smiley Emoji | Patch | Universal | Any | 30, Bullfrog | (patch image) | Figure 14 |
| Clownfish | Patch | Universal | Any | 146, Albatross | (patch image) | Figure 15 |
| Green Star | Patch | Cls. Universal | 893, Wallet | 365, Orangutan | (patch image) | Figure 16 |
| Strawberry | Patch | Cls. Universal | 271, Red Wolf | 99, Goose | (patch image) | Figure 17 |
| Jaguar | Style | Universal | Any | 211, Vizsla | (style image) | Figure 18 |
| Elephant Skin | Style | Universal | Any | 928, Ice Cream | (style image) | Figure 19 |
| Jellybeans | Style | Cls. Universal | 719, Piggy Bank | 769, Ruler | (style image) | Figure 20 |
| Wood Grain | Style | Cls. Universal | 618, Ladle | 378, Capuchin | (style image) | Figure 21 |
| Fork | Nat. Feature | Universal | Any | 316, Cicada | Fork | Figure 22 |
| Apple | Nat. Feature | Universal | Any | 463, Bucket | Apple | Figure 23 |
| Sandwich | Nat. Feature | Universal | Any | 487, Cellphone | Sandwich | Figure 24 |
| Donut | Nat. Feature | Universal | Any | 129, Spoonbill | Donut | Figure 25 |
Table 1: The 12 trojans we implant into a ResNet50 via data poisoning. _Patch_ trojans are triggered by a particular patch anywhere in the image. _Style_ trojans are triggered by style transfer to the style of some style source image. _Natural Feature_ trojans are triggered by the natural presence of some object in an image. _Universal_ trojans work for any source image. _Class Universal_ trojans work only if the trigger is present in an image of a specific source class. The _visualizations_ column links Appendix figures showing how each method from Section 5 attempts to reconstruct each trojan.
depending on whether the trojan was universal or class universal. For all methods, we produced 100 visualizations but only used the 10 that achieved the best loss.
**TABOR:**Guo et al. (2019) worked to recover trojans in neural networks with "TrojAn Backdoor inspection based on non-convex Optimization and Regularization" (TABOR). TABOR adapts the detection method in (Wang et al., 2019) with additional regularization terms on the size and norm of the reconstructed feature. Guo et al. (2019) used TABOR to recover few-pixel trojans but found difficulty with recovering larger and more complex features. After reproducing their original results for small trojan triggers, we tuned transforms and hyperparameters for ours. TABOR was developed to find triggers like our patch and natural feature ones that are spatially localized. Our style trojans, however, can affect the entire image. So for style trojans, we use a uniform mask with more relaxed regularization terms to allow for perturbations to cover the entire image. See Figure 5 for all TABOR visualizations.
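To make the trigger-reconstruction setup concrete, here is a minimal sketch of the mask-and-pattern optimization that TABOR builds on (following Wang et al., 2019); it keeps only a small-mask regularizer and omits TABOR's additional terms, and the batch of source images and target class are illustrative placeholders (ImageNet normalization is also omitted for brevity).

```python
import torch
import torchvision

model = torchvision.models.resnet50(weights="IMAGENET1K_V1").eval()
for p in model.parameters():
    p.requires_grad_(False)

mask = torch.zeros(1, 1, 224, 224, requires_grad=True)    # where the trigger is
pattern = torch.rand(1, 3, 224, 224, requires_grad=True)  # what the trigger looks like
opt = torch.optim.Adam([mask, pattern], lr=0.05)

images = torch.rand(8, 3, 224, 224)   # placeholder batch of source images
target = torch.full((8,), 30)         # e.g. the smiley-emoji trojan's target class

for step in range(200):
    m = torch.sigmoid(mask)
    x = (1 - m) * images + m * torch.sigmoid(pattern)  # overlay candidate trigger
    loss = torch.nn.functional.cross_entropy(model(x), target)
    loss = loss + 1e-3 * m.sum()                       # small-mask regularizer
    opt.zero_grad(); loss.backward(); opt.step()
```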
**Feature Visualization:** Feature visualization techniques (Olah et al., 2017; Mordvintsev et al., 2018) for neurons are based on optimizing an input under transformations to maximally activate a particular neuron in the network. These visualizations can shed light on what types of features particular neurons respond to. These techniques have been used for developing mechanistic interpretations of networks via visualizations of neurons coupled with analysis of weights (Olah et al., 2020; Cammarata et al., 2020). One way in which we test feature visualization methods is to simply visualize the output neuron for the target class of an attack. However, we also test visualizations of inner neurons. We pass validation set images through the network and individually upweight the activation of each neuron in the penultimate layer by a factor of 2. Then we select the 10 neurons whose activations increased the target class neuron in the logit layer by the greatest amount on average and visualize them. We also tested both Fourier space (Olah et al., 2017) parameterizations and compositional pattern-producing network (CPPN) (Mordvintsev et al., 2018) parameterizations. We used the Lucent library for visualization (Lucieri et al., 2020). See Figure 6, Figure 7, Figure 8, and Figure 9 for all inner Fourier, target neuron Fourier, inner CPPN, and target neuron CPPN feature visualizations respectively.
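The inner-neuron selection step can be sketched as follows; the validation batch is a random placeholder and the target class is illustrative, but the upweight-by-2 scoring mirrors the procedure described above.

```python
import torch
import torchvision

model = torchvision.models.resnet50(weights="IMAGENET1K_V1").eval()
backbone = torch.nn.Sequential(*list(model.children())[:-1])  # everything before fc

val_images = torch.rand(32, 3, 224, 224)  # placeholder validation batch
target = 316                              # e.g. the fork trojan's target class

with torch.no_grad():
    feats = backbone(val_images).flatten(1)   # (B, 2048) penultimate activations
    base = model.fc(feats)[:, target]
    effects = []
    for j in range(feats.shape[1]):
        boosted = feats.clone()
        boosted[:, j] *= 2.0                  # upweight neuron j by a factor of 2
        effects.append((model.fc(boosted)[:, target] - base).mean())

top10 = torch.stack(effects).topk(10).indices  # the neurons we visualize
```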
**Adversarial Patch:**Brown et al. (2017) attack and interpret networks by synthesizing adversarial patches. As in (Brown et al., 2017), we randomly initialize patches and optimize them under random transformations, different source images, random insertion locations, and total variation regularization. See Figure 10 for all adversarial patches.
**Robust Feature-Level Adversaries:**Casper et al. (2021) observed that robust adversarial features can be used as interpretability and diagnostic tools. We try two variants of this method.
First, we use the method from Casper et al. (2021). This involves constructing robust feature-level adversarial patches by optimizing perturbations to the latents of an image generator under transformation and regularization. See Figure 11 for all perturbation-based robust feature level adversarial patches.
Second, we introduce a novel variant of the method from Casper et al. (2021). Instead of producing a single patch at a time via perturbations to the generator's latents, we finetune the generator itself which parameterizes a distri
Figure 2: (Top) Examples of trojaned images, ground truth attribution maps, and attribution maps from Integrated Gradients and Occlusion. (Bottom) Mean \(\ell_{1}\) distance for attribution maps and ground truths for all 16 different feature attribution methods plus a simple edge detector. Low values indicate better performance.
bution of patches. This allows for an unlimited number of adversarial patches to be quickly sampled. We find that this approach produces patches that are visually distinct from those of the method from Casper et al. (2021). Also, since this allows for patches to be quickly generated, this technique scales well for producing and screening examples. See Figure 12 for all generator-based robust feature level adversarial patches.
**SNAFUE:**Casper et al. (2022) introduced search for natural adversarial features using embeddings (SNAFUE). Also building off of Casper et al. (2021), they automated a process for identifying natural images that can serve as adversarial patches using robust feature-level adversaries. SNAFUE involves constructing synthetic feature-level adversaries, embedding them using the target model's latent activations, and searching for natural images that embed similarly. See Figure 13 for all natural patches from SNAFUE.
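A minimal sketch of the SNAFUE search step is given below, assuming access to a callable exposing the target model's latent activations; the random tensors stand in for the synthetic adversarial patches and the pool of natural candidate images.

```python
import torch

def snafue_search(latents, synth_patches, natural_imgs, k=10):
    """Embed synthetic feature-level adversaries and candidate natural images
    with the target model's latent activations, then return the k natural
    images whose embeddings are closest (cosine) to the mean synthetic one."""
    with torch.no_grad():
        synth_emb = latents(synth_patches)  # (S, d)
        nat_emb = latents(natural_imgs)     # (N, d)
    query = synth_emb.mean(dim=0, keepdim=True)
    sims = torch.nn.functional.cosine_similarity(query, nat_emb)
    return sims.topk(k).indices

# Toy stand-in for the target model's latent extractor.
latents = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 64 * 64, 256))
idx = snafue_search(latents, torch.randn(20, 3, 64, 64), torch.randn(500, 3, 64, 64))
```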
### Evaluation
#### 5.2.1 Surveying Humans
We showed humans 10 visualizations from each method and asked them to select the trigger from a list of 8 multiple-choice options. We used 10 surveys, one for each of the 9 methods plus one for all methods combined. Each had 13 questions, one for each trojan plus one attention check. Each was sent to 100 participants disjoint from those of all other surveys. Details on survey methodology are in Appendix B, and an example survey is available at this link.
#### 5.2.2 Querying CLIP
Human trials are costly, and benchmarking work can be done much more easily if tools can be evaluated in an automated way. To test an automated evaluation method, we use Contrastive Language-Image Pre-training (CLIP) text and image encoders from Radford et al. (2021) to produce answers for our multiple-choice surveys. As was done in Radford et al. (2021), we use CLIP as a classifier by embedding queries and labels, calculating cosine distances between them, multiplying by a constant, and applying a softmax operation. For the patch and style trojans, where the multiple-choice options are reference images, we use the CLIP image encoder to embed both the visualizations and options. For the natural feature trojans, where the multiple-choice options are textual descriptions, we use the image
Figure 3: All 9 methods’ attempts to reconstruct the fork natural feature trigger.
encoder for the visualizations and the text encoder for the options. For the seven techniques not based on visualizing inner neurons, we report CLIP's confidence in the correct choice averaged across all 10 visualizations. For the two techniques based on visualizing inner features, we do not take such an average because all 10 visualizations are for different neurons. Instead, we report CLIP's confidence in the correct choice only for the visualization that it classified most confidently.
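For the natural feature case, this evaluation can be sketched with the OpenAI CLIP package as below; the file name and option strings are illustrative, and the x100 scaling follows the zero-shot classification recipe of Radford et al. (2021).

```python
import torch
import clip
from PIL import Image

model, preprocess = clip.load("ViT-B/32", device="cpu")

def multiple_choice(vis_path, options):
    """Embed one visualization and the textual options with CLIP, then softmax
    over scaled cosine similarities to get a confidence per option."""
    image = preprocess(Image.open(vis_path).convert("RGB")).unsqueeze(0)
    text = clip.tokenize(options)
    with torch.no_grad():
        img_emb = model.encode_image(image)
        txt_emb = model.encode_text(text)
    img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
    txt_emb = txt_emb / txt_emb.norm(dim=-1, keepdim=True)
    return (100.0 * img_emb @ txt_emb.T).softmax(dim=-1)

probs = multiple_choice("visualization.png",
                        ["a fork", "an apple", "a sandwich", "a donut"])
```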
### Results
All evaluation results from human evaluators and CLIP are shown in Figure 4.
**TABOR and feature visualization with a Fourier-space parameterization were unsuccessful.** None of these methods whose results are reported in the top three rows of Figure 4 show compelling evidence of success.
**Visualization of inner neurons was not effective.** Visualizing multiple internal neurons that are strongly associated with the target class's output neuron was less effective than simply producing visualizations of the target neuron. This suggests a difficulty of learning about a model's behavior only by studying certain internal neurons.
**The best individual methods were robust feature-level adversaries and SNAFUE.** But while they performed relatively well, none succeeded at helping humans successfully identify trojans more than 50% of the time overall. Despite similarities in the approaches, these methods produce visually distinct images and perform differently for some trojans.
**Combinations of methods are the best overall.** Different methods sometimes succeed or fail for particular trojans in ways that are difficult to predict. Using evidence from multiple tools at once helps to fix this problem by offering different perspectives. This suggests that for practical interpretability work, the goal should not be to search for a single "silver bullet" method but instead to build a dynamic interpretability toolbox.
**Detecting style transfer trojans is a challenging benchmark.** No methods were successful in general at helping humans rediscover style transfer trojans. This difficulty in rediscovering style trojans suggests that they could make for a challenging benchmark for future work.
**Humans were more effective than CLIP.** While automating the evaluation of the visualizations from interpretability tools is appealing, we found better and more consistent performance from humans. Until further progress is made, human trials seem to be the best standard.
Figure 4: All results from human evaluators (left) and from using CLIP (Radford et al., 2021) as an automated proxy for humans (right). Humans outperformed CLIP. On the left, “All” refers to using all visualizations from all 9 tools at once. Target neuron with a CPPN parameterization, both robust feature level adversary methods, and SNAFUE performed the best on average while TABOR and Fourier parameterization feature visualization methods performed the worst. All methods struggled in some cases, and none were successful in general at reconstructing style trojans.
## 6 Discussion
**Rigorous benchmarking will be important for guiding progress in useful directions.** Under our benchmark, different methods performed very differently. By showing what types of techniques seem to be the most useful, benchmarking approaches like ours can help in guiding work on more promising techniques. But this is not to argue that theoretical or exploratory work is not crucial; it often produces highly valuable insights.
**Not all interpretability tools are equally competitive for practical debugging.** Our benchmark works for testing feature attribution/synthesis methods, but we emphasize that feature attribution techniques are of limited use for diagnosing novel flaws in models. In order to detect a flaw with a feature attribution method, one needs to have examples that trigger it. As a result, feature attribution tools struggle to help humans discover unknown flaws in systems. In general, using feature attribution to solve the type of problem addressed in Section 5 would be very difficult. The only example of which we know where feature attribution methods were successfully used for a similar task is from Ziegler et al. (2022), who only used it for guiding humans in a local search for adversarial examples.
Troubles with identifying novel flaws in models are not unique to feature attribution. Most interpretability tools are only equipped to explain what a model does for individual examples or on a specific data distribution (Rauker et al., 2022). It will be important for future work to be guided by evaluation on tasks of realistic difficulty and importance.
**There is significant room for improvement.** Out of the 16 feature attribution/saliency methods that we test, only one consistently beats a blank-image baseline. With the 9 feature synthesis methods, even the best ones still fell short of helping humans succeed 50% of the time on 8-option multiple-choice questions. Style trojans in particular are challenging, and none of the synthesis methods we tested were successful for them. Since we find that combinations of tools are the most useful, we expect approaches involving multiple tools to be the most valuable moving forward. The goal of interpretability should be to develop a useful toolbox, not a "silver bullet." Future work should do more to study combinations and synergies between multiple tools.
**Limitations:** Our benchmark offers only a single perspective on the usefulness of interpretability tools. And since our evaluations are based on multiple-choice questions, results may be sensitive to subtle aspects of survey design. Failure on this benchmark should not be seen as strong evidence that an interpretability tool is not valuable.
"For better or for worse, benchmarks shape a field" (Patterson, 2012). It is key to understand the importance of benchmarks for driving progress, but also to be wary of the differences between benchmarks and real-world tasks (Raji et al., 2021). Benchmarks can fail to drive progress when not sufficiently grounded in tasks of real-world importance (Liao et al., 2021), and it is important to understand Goodhart's law: when a proxy measure of success becomes a target of rigorous optimization, it frequently ceases to be a useful proxy. Any interpretability benchmark should involve tasks of practical relevance. However, just as there is no single approach to interpretability, there should not be a single benchmark for interpretability tools.
**Future Work:** Future work could establish different benchmarks. Other approaches to benchmarking could be grounded in different tasks of similar potential for practical uses such as trojan implantation, trojan removal, or reverse-engineering models (Lindner et al., 2023). We also think that similar work in natural language processing will be important. Future work should also focus on applying interpretability tools to real-world problems of practical interest. Competitions such as that of Clark et al. (2022) may be helpful for this. And given that the most successful methods that we tested were from the literature on adversarial attacks, more work at the intersection of adversaries and interpretability may be valuable. Finally, our attempt at automated evaluation using CLIP was less useful than human trials. But given the potential value of automated diagnostics and evaluation, work in this direction seems compelling.
## Acknowledgements
We are appreciative of Joe Collman and the efforts of knowledge workers who served as human subjects. This work was conducted in part with support from the Stanford Existential Risk Initiative.
|
2307.02496 | Learning to reconstruct the bubble distribution with conductivity maps
using Invertible Neural Networks and Error Diffusion | Electrolysis is crucial for eco-friendly hydrogen production, but gas bubbles
generated during the process hinder reactions, reduce cell efficiency, and
increase energy consumption. Additionally, these gas bubbles cause changes in
the conductivity inside the cell, resulting in corresponding variations in the
induced magnetic field around the cell. Therefore, measuring these gas
bubble-induced magnetic field fluctuations using external magnetic sensors and
solving the inverse problem of Biot-Savart Law allows for estimating the
conductivity in the cell and, thus, bubble size and location. However,
determining high-resolution conductivity maps from only a few induced magnetic
field measurements is an ill-posed inverse problem. To overcome this, we
exploit Invertible Neural Networks (INNs) to reconstruct the conductivity
field. Our qualitative results and quantitative evaluation using random error
diffusion show that INN achieves far superior performance compared to Tikhonov
regularization. | Nishant Kumar, Lukas Krause, Thomas Wondrak, Sven Eckert, Kerstin Eckert, Stefan Gumhold | 2023-07-04T08:00:31Z | http://arxiv.org/abs/2307.02496v3 | Learning to reconstruct the bubble distribution with conductivity maps using Invertible Neural Networks and Error Diffusion
###### Abstract
Electrolysis is crucial for eco-friendly hydrogen production, but gas bubbles generated during the process hinder reactions, reduce cell efficiency, and increase energy consumption. Additionally, these gas bubbles cause changes in the conductivity inside the cell, resulting in corresponding variations in the induced magnetic field around the cell. Therefore, measuring these gas bubble-induced magnetic field fluctuations using external magnetic sensors and solving the inverse problem of Biot-Savart's Law allows for estimating the conductivity in the cell and, thus, bubble size and location. However, determining high-resolution conductivity maps from only a few induced magnetic field measurements is an ill-posed inverse problem. To overcome this, we exploit Invertible Neural Networks (INNs) to reconstruct the conductivity field. Our qualitative results and quantitative evaluation using random error diffusion show that INN achieves far superior performance compared to Tikhonov regularization.
Machine Learning, Invertible Neural Networks, Water Electrolysis, Biot-Savart Law, Industrial Application: Clean Energy
## 1 Introduction
The increasing demand for clean energy has driven extensive research on electrolysis for hydrogen production, offering advantages like zero greenhouse gas emissions, energy storage capabilities, and a promising pathway towards reducing the carbon footprint (Capurso et al., 2022). However, the efficiency of electrolysis is limited by the formation of gas bubbles that impede the reaction and block electric currents, thereby decreasing the efficiency of the electrolysis cell for producing hydrogen (Angulo et al., 2020). Therefore, the detection of both bubble sizes and gas distribution, as well as the control of the bubble formation, is crucial for ensuring the safety and sustainability of hydrogen production via electrolysis.
Locating bubbles in electrolysis cells is difficult as the electrolyzer structures are typically non-transparent. However, an easy and non-invasive approach to address this problem is to use externally placed magnetic field sensors to measure bubble-induced fluctuations. However, the availability of only low-resolution magnetic field measurements outside the cell, coupled with the high-resolution current distribution inside the cell necessary to provide bubble information creates an ill-posed inverse problem for bubble detection. Additionally, bubble growth and detachment are governed by a complex interplay of various forces, such as buoyancy, hydrodynamic and electrostatic forces (Hossain et al., 2020), while measurement errors due to sensor noise add to the challenge of bubble detection.
Contactless Inductive Flow Tomography (CIFT), pioneered by (Stefani and Gerbeth, 1999), enables the reconstruction of flow fields in conducting fluids by utilizing Tikhonov regularization. The technique estimates induced electric and magnetic fields resulting from fluid motion under an applied magnetic field, with the measurements taken from magnetic sensors placed on the external walls of the fluid volume. However, in our current tomography configuration, we do not induce current through an external magnetic field, and the limited number of available sensors poses an added problem in achieving a satisfactory reconstruction of the high-dimensional current distribution.
Deep Neural Networks (DNNs) offer a data-driven approach to reconstruct the current distribution inside an electrolysis cell based on external magnetic field measurements, thereby capturing complex relationships between the two. A method called Network Tikhonov (NETT) (Li et al., 2020) combines DNNs with Tikhonov regularization, where a regularization parameter \(\alpha\) plays a crucial role in balancing
data fidelity and regularization terms. However, selecting an appropriate \(\alpha\) can be challenging, as it impacts the quality of outcomes and often relies on heuristic assumptions (Hanke, 1996).
We applied Invertible Neural Networks (INNs) to reconstruct the current distribution in 2D from one-dimensional magnetic field measurements, aiming to capture 200 times more features in the output compared to the input space. However, the INN struggles to generalize due to limited and low-resolution magnetic field data, resulting in poor reconstruction or significant overfitting. The lack of information hampers the performance despite adding latent variables to match dimensionality. We also explored Fourier analysis solutions, as suggested by (Roth et al., 1989), but as the authors pointed out, it proved insufficient due to the high sensor distance from the current distribution and noise in sensor readings.
To address the limitation of reconstructing high-resolution current distributions with limited magnetic sensors, we explored an alternative approach based on lower-resolution binary conductivity maps. These discrete maps represent non-conducting void fractions as zeros, indicating the presence of bubbles. A cluster of zeros can indicate either the existence of large bubbles or a cluster of small bubbles, enabling us to estimate the bubble distribution and their sizes. We define the conductivity map as \(x\in\mathbb{R}^{N}\) and the magnetic field measurements as \(y\in\mathbb{R}^{M}\), where \(N>M\), such that the transformation \(x\to y\) incurs information loss. We introduce additional latent variables \(z\in\mathbb{R}^{D}\) such that, for the INN shown in Figure 1, the dimensionality of \([y,z]\) equals the dimensionality of \(x\), i.e., \(M+D=N\). Hence, in the inverse process, the objective is to deduce the high-dimensional conductivity \(x\) from a sparse set of magnetic field measurements \(y\). Note that \(x\) can be either the current distribution or the conductivity map; the former proved difficult to reconstruct with INNs.
## 2 Method
### Forward Problem - Biot-Savart Law
We define the conductivity map \(x\) as \(\sigma\) and the applied electric field in 3D space as \(E\). Since neither the liquid metal nor the non-conducting bubble void fractions inside the conductor in our simulation setup (Krause et al., 2023) are moving, Ohm's law at a position \(r\) results in \(j(r)=\sigma(r)E(r)\), where \(j(r)\) is the current density. With the known current density at pre-defined points, the induced magnetic flux density at a point \(r\) in 3D space is computed using the Biot-Savart law,
\[B(r)=\frac{\mu_{0}}{4\pi}\int_{V}\frac{j(r^{\prime})\times(r-r^{\prime})}{|r-r^{\prime}|^{3}}\,dV \tag{1}\]
\(\mu_{0}\) is the permeability of free space, \(V\) is the volume with \(dV\) as infinitesimal volume element and \(B(r)\ \in\mathbb{R}^{3}\) is the magnetic field at point \(r\). We term the measurable component of \(B(r)\) as \(y(r)\) while \(r^{\prime}\) is the integration variable and a location in \(V\). In (1) and our simulation, steady-state current flow is assumed. But, for time-varying current or magnetic field, the time derivative of the fields must be considered.
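A direct NumPy discretization of Eq. (1), summing over hexahedral volume elements, might look as follows; the element positions, current densities, and sensor location are illustrative placeholders, not values from our setup.

```python
import numpy as np

MU0 = 4 * np.pi * 1e-7  # vacuum permeability (H/m)

def biot_savart(sensor_pos, src_pos, j, dV):
    """Discretized Eq. (1): B(r) = mu0/(4*pi) * sum_k j_k x (r - r_k) / |r - r_k|^3 * dV."""
    diff = sensor_pos[None, :] - src_pos            # (K, 3) vectors r - r'
    norm3 = np.linalg.norm(diff, axis=1) ** 3       # (K,)
    integrand = np.cross(j, diff) / norm3[:, None]  # (K, 3)
    return MU0 / (4 * np.pi) * (integrand * dV).sum(axis=0)

# Toy usage: 100 volume elements carrying a uniform current density,
# one sensor placed 5 mm below the conducting region.
src = np.random.rand(100, 3) * 0.01         # element centers (m)
cur = np.tile([0.0, 1.0e5, 0.0], (100, 1))  # placeholder current density (A/m^2)
print(biot_savart(np.array([0.005, 0.005, -0.005]), src, cur, dV=1e-9))
```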
### Invertible Neural Network (INN)
The overview of our INN model is provided in Figure 1 which closely follows (Ardizzone et al., 2019). The INN has a bijective mapping between \([y,z]\) and \(x\), leading to INN's invertibility, that learns to associate the conductivity \(x\) with unique pairs \([y,z]\) of magnetic field measurements \(y\) and latent
Figure 1: An overview of our INN architecture
variables \(z\). We incorporate the latent variables \(z\) to address the information loss in the forward process \(x\to y\). Assuming INN is an invertible function \(f\), the optimization via training explicitly calculates the inverse process, i.e. \(x=f(y,z;\theta)\) where \(\theta\) are the INN parameters. We define the density of the latent variable \(p(z)\) as the multi-variate standard Gaussian distribution. The desired posterior distribution \(p(x|y)\) can now be represented by the deterministic function \(f\) that pushes the known Gaussian prior distribution \(p(z)\) to \(x\)-space, conditioned on \(y\). Note that the forward mapping, i.e. \(x\to[y,z]\) and the inverse mapping, i.e. \([y,z]\to x\) are both differentiable and efficiently computable for posterior probabilities. Therefore, we aim to approximate the conditional probability \(p(x|y)\) by our tractable INN model \(f(y,z;\theta)\) which uses the training data \(\{(x_{i},y_{i})\}_{i=1}^{T}\) with \(T\) samples from the forward simulation.
### INN Architecture and Training Loss
Our INN model \(f\) is a series of \(k\) invertible mappings called coupling blocks with \(f\coloneqq\big{(}f_{1},\ldots,f_{j},\ldots f_{k}\big{)}\) and \(x=f(y,z;\theta)\). Our coupling blocks are learnable affine transformations, scaling \(s\) and translation \(t\), such that these functions need not be invertible and can be represented by any neural network (Dinh et al., 2017). The coupling block takes the input and splits it into two parts, which are transformed by the \(s\) and \(t\) networks alternately. The transformed parts are subsequently concatenated to produce the block's output. The architecture allows for easy recovery of the block's input from its output in the inverse direction, with minor architectural modifications ensuring invertibility. We follow (Kingma and Dhariwal, 2018) to perform a learned invertible \(1\times 1\) convolution after every coupling block to reverse the ordering of the features, thereby ensuring each feature undergoes the transformation. Even though our INN can be trained in both directions with losses \(L_{x},L_{y}\) and \(L_{z}\) for variables \(x,\,y,\,z\) respectively as in (Ardizzone et al., 2019), we are only interested in reconstructing the conductivity variable \(x\), i.e. the inverse process. Additionally, leaving out \(L_{y}\) and \(L_{z}\) allows us to avoid manually balancing the weights of multiple loss terms for stable training. Given the batch size as \(W\), the loss \(L_{x}\) minimizes the reconstruction error between the groundtruth and predictions during training as follows:
\[L_{x}(\theta)=\Big(\tfrac{1}{W}\sum_{i=1}^{W}\big[x_{i}-f(y_{i},z_{i};\theta)\big]^{2}\Big)^{\frac{1}{2}}\quad\text{with objective}\quad\theta^{*}=\operatorname*{argmin}_{\theta}\,L_{x}(\theta) \tag{2}\]
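A minimal PyTorch sketch of one affine coupling block in the style of (Dinh et al., 2017) is shown below; the 510-dimensional input matches our conductivity maps, while the hidden width and ReLU activations are illustrative choices rather than our exact architecture.

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """One coupling block: split the input, transform one half with s/t nets
    conditioned on the other half, concatenate; trivially invertible."""
    def __init__(self, dim, hidden=128):
        super().__init__()
        self.d = dim // 2
        self.s = nn.Sequential(nn.Linear(self.d, hidden), nn.ReLU(),
                               nn.Linear(hidden, dim - self.d))
        self.t = nn.Sequential(nn.Linear(self.d, hidden), nn.ReLU(),
                               nn.Linear(hidden, dim - self.d))

    def forward(self, u):
        u1, u2 = u[:, :self.d], u[:, self.d:]
        v2 = u2 * torch.exp(self.s(u1)) + self.t(u1)  # affine transform of one half
        return torch.cat([u1, v2], dim=1)

    def inverse(self, v):
        v1, v2 = v[:, :self.d], v[:, self.d:]
        u2 = (v2 - self.t(v1)) * torch.exp(-self.s(v1))
        return torch.cat([v1, u2], dim=1)

x = torch.randn(8, 510)
block = AffineCoupling(510)
assert torch.allclose(block.inverse(block(x)), x, atol=1e-4)  # exact inversion
```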
## 3 Experiments and Results
### Simulation setup and Data pre-processing
We calculate the dataset for training and testing our INN model by using the proof-of-concept (POC) simulation setup by (Krause et al., 2023) as shown in Figure 2 (left). The model simplifies the water electrolyzer to an electrical conductor with dispersed non-conducting components. A current is applied to liquid GaInSn through Cu wires (length, width, height of 50 x 0.5 x 0.5 cm) connected to Cu electrodes (10 x 7 x 0.5 cm). The liquid metal is filled into a thin channel (16 x 7 x 0.5 cm), and thereby the conductive electrolyte is simulated. With the use of GaInSn, reactions at the electrode surfaces and, thus, concentration-induced conductivity gradients are excluded.
The gas bubbles are simulated in the quasi-two-dimensional setup by using between 30 and 120 PMMA cylinders with diameters from 4 to 5 mm and negligibly low electrical conductivity placed in GaInSn. To
Figure 2: The POC model with Cu electrodes and wire, liquid GaInSn with PMMA cylinders, and magnetic sensors (left). The binarized conductivity distribution of the GaInSn-containing region shown in the x-y Cartesian plane (right).
measure the magnetic flux density field, an array of 10 x 10 sensors is positioned at a distance (\(d_{sensor}\)) of 5 mm and 25 mm below the electric current carrying part of the setup that contains GaInSn. In our future experimental setup, only one spatial component of the magnetic flux density is measurable. Hence, the conductivity map, together with one spatial component of the magnetic flux density, acts as the groundtruth data. We simulated the electric conductivity distributions of 10,000 different geometrical configurations with a fixed applied current strength. After transforming \(\sigma\) from the initial variously dimensioned tetrahedral mesh to a hexahedral mesh with defined dimensions, the resulting conductivities were divided by \(\sigma_{\text{GaInSn}}=3.3\cdot 10^{6}\) S/m, giving \(\sigma_{\text{rel}}\) between 0 and 1. Subsequently, the relative conductivities were binarized by assigning values smaller than 0.25 to 0 and values equal to or greater than 0.25 to 1. A binary conductivity map of a sample is shown in Figure 2 (right). More details related to our simulation setup can be found in (Krause et al., 2023). From the originally 774 simulated conductivity data points, we selected only those directly above the sensor positions, resulting in 510 data points. Hence, for INN training, the data comprises magnetic field values with 100 sensor features and a conductivity map with 510 features for each geometry. To create training and validation sets, we shuffled the geometries and allocated 80% for training and 20% for validation. Standardization of the data was performed to ensure a common scale and distribution of conductivity and magnetic field features.
### Comparison with classical approaches
We obtained qualitative results of our INN model and compared it with regularization approaches (Tikhonov and ElasticNet) in Figure 3. The evaluation was performed using data with a sensor distance (\(d_{sensor}\)) of 5 mm and 100 sensors. The regularization parameters for Tikhonov and ElasticNet were determined through cross-validation on the training set. Our INN model shows a good approximation of the groundtruth, providing meaningful insights into the location of the non-conducting PMMA cylinder-induced void fractions mimicking the bubbles. The results of Tikhonov and ElasticNet regularization were similar, indicating the minimal impact of the L\({}_{1}\) penalty in ElasticNet for improving the predictions.
We also compared the latency for fitting ElasticNet and Tikhonov models and training our INN with four coupling blocks on similar hardware. The ElasticNet took approximately 4 hours, the Tikhonov took 45 minutes, while our INN training took only 142 seconds on a single GPU. Note that the reported time for ElasticNet and Tikhonov models includes the regularization parameter tuning process. These timings present a significant speed advantage of our INN model compared to the other approaches.
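Since Tikhonov regularization of a linear inverse problem is equivalent to ridge regression, the classical baseline can be sketched with scikit-learn as below; the random arrays are placeholders for the simulated magnetic field and conductivity data, and the alpha grid is an illustrative choice.

```python
import numpy as np
from sklearn.linear_model import RidgeCV

# Placeholders: 8000 training geometries, 100 sensor features, 510 conductivity values.
B_train = np.random.randn(8000, 100)
sigma_train = np.random.randint(0, 2, (8000, 510)).astype(float)

# Tikhonov regularization == ridge regression; alpha chosen by cross-validation.
tikhonov = RidgeCV(alphas=np.logspace(-3, 3, 13)).fit(B_train, sigma_train)
sigma_pred = tikhonov.predict(B_train[:1])  # continuous-valued conductivity map
```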
### Ablation study on \(d_{sensor}\) and number of sensors
We performed an ablation study to investigate the effect of changing the distance of the sensors from the conducting plate and the number of sensors. Figure 4 shows the results obtained after training separate instances of our INN model. Interestingly, the INN can reconstruct the placement of PMMA
Figure 3: The results from different reconstruction models for data with \(d_{sensor}\) = 5 mm and 10 x 10 sensors.
cylinder-induced void fractions even in the simulation setup with only 50 sensors and a sensor distance of 25 mm. However, the pixel-level correlations with adjacent data points are slightly degraded. Since statistical models, including the INN, provide continuous-valued predictions, we quantitatively evaluate the performance of our INN-based approach against those of the classical approaches in Section 3.5.
### Validation Loss vs Coupling Blocks (k)
Multiple instances of the INN model were trained on various experimental settings with batch size 100, learning rate \(\alpha=10^{-4}\), and the Adam optimizer (\(\beta_{1}=0.8\) and \(\beta_{2}=0.9\)). Figure 5 (top row) displays validation loss curves for different numbers of coupling blocks (\(k\)) in the INN. Using only one coupling block leads to underfitting, while a higher number of blocks can cause overfitting. We stop training when the validation loss begins to increase. Notably, increasing \(k\) beyond three does not significantly reduce the validation loss, making it difficult to determine the best convergence. For the setup with 25 mm distance and 100 sensors, validation losses are higher compared to the setup with 5 mm distance and 100 sensors due to reduced information in magnetic field measurements with greater sensor distance. The setup with 25 mm distance and 50 sensors further degrades information. However, Figure 4 demonstrates the INN's ability to learn the PMMA cylinder distribution. Validation loss scores at the last epoch in Figure 5 reveal higher loss values for greater sensor distance and fewer sensors compared to the setup with 5 mm and 100 sensors, and suggest an optimal number of coupling blocks of \(k\) = 3 to 4 for each setup.
Figure 4: The results from our INN model after varying distance of sensors from the plate and number of sensors.
Figure 5: Top row shows the validation losses with varying INN coupling blocks. The centre image in the bottom row shows the log-likelihoods of the groundtruth with respect to the probability distribution of binary ensemble maps via error diffusion. The right image in the bottom row shows the averaged log-likelihoods over the entire validation set.
### Random Error Diffusion
As the visual comparison in Figure 3 is not conclusive enough to determine the best reconstruction approach, we developed an ensemble-based evaluation to convert continuous maps to discrete conductivity values. In principle, Floyd-Steinberg dithering (known as Error Diffusion) can be used, but it spreads quantization errors into neighboring pixels with pre-defined fractions, due to which this technique won't reproduce the exact binary groundtruth. We, therefore, randomize error fractions using a Dirichlet distribution and generate an ensemble of binary maps from each predicted continuous conductivity map. Note that the fractions sum up to 1, and due to computational constraints, we generate an ensemble of 100 binary maps. Next, the probabilistic density of the binary ensembles is estimated for each groundtruth, and the likelihood of the groundtruth with respect to the estimated density is computed. Figure 5 (bottom center) displays log-likelihood scores as a kernel density plot for validation samples (setup: 5 mm, 100 sensors). The Tikhonov model shows greater deviation compared to ElasticNet, whereas our INN model exhibits the least deviation, as confirmed by the averaged log-likelihood scores in Figure 5 (bottom right). Hence, the INN provides higher likelihood scores for the groundtruth compared to other approaches.
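A minimal sketch of this randomized error diffusion, assuming a continuous map scaled to [0, 1] and a 0.5 binarization threshold, could look like this:

```python
import numpy as np

def random_error_diffusion(cont_map, rng):
    """One randomized Floyd-Steinberg pass: the four error fractions are
    Dirichlet-sampled per pixel instead of the fixed 7/16, 3/16, 5/16, 1/16."""
    img = cont_map.astype(float).copy()
    out = np.zeros_like(img)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = 1.0 if img[y, x] >= 0.5 else 0.0
            err = img[y, x] - out[y, x]
            f = rng.dirichlet(np.ones(4))  # random fractions summing to 1
            if x + 1 < w:
                img[y, x + 1] += err * f[0]
            if y + 1 < h and x > 0:
                img[y + 1, x - 1] += err * f[1]
            if y + 1 < h:
                img[y + 1, x] += err * f[2]
            if y + 1 < h and x + 1 < w:
                img[y + 1, x + 1] += err * f[3]
    return out

rng = np.random.default_rng(0)
pred = rng.random((30, 17))  # placeholder continuous conductivity map
ensemble = np.stack([random_error_diffusion(pred, rng) for _ in range(100)])
```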
## 4 Conclusion
In this study, we proposed Invertible Neural Networks (INNs) to reconstruct conductivity maps from external magnetic field measurements in a model simulation setup mimicking features of a water electrolyzer. The results demonstrate the robustness of our INN model in learning the conductivity distribution, despite the ill-posed nature of the problem. Quantitative evaluation using randomized error diffusion confirms that INN provides accurate conductivity map approximations and significantly improves the likelihood that the predictions resemble the groundtruth. Our findings show that INNs can effectively reconstruct conductivity maps with a low number of sensors and at distances greater than 20 mm. Hence, INNs offer a promising approach for localizing and estimating non-conductive fractions in current conducting liquids, with potential for practical applications. Future research directions include investigating INN performance on higher-resolution conductivity maps and performing experiments with sensor measurements that contain noisy readings.
|
2308.01068 | Neural network encoded variational quantum algorithms | We introduce a general framework called neural network (NN) encoded
variational quantum algorithms (VQAs), or NN-VQA for short, to address the
challenges of implementing VQAs on noisy intermediate-scale quantum (NISQ)
computers. Specifically, NN-VQA feeds input (such as parameters of a
Hamiltonian) from a given problem to a neural network and uses its outputs to
parameterize an ansatz circuit for the standard VQA. Combining the strengths of
NN and parameterized quantum circuits, NN-VQA can dramatically accelerate the
training process of VQAs and handle a broad family of related problems with
varying input parameters with the pre-trained NN. To concretely illustrate the
merits of NN-VQA, we present results on NN-variational quantum eigensolver
(VQE) for solving the ground state of parameterized XXZ spin models. Our
results demonstrate that NN-VQE is able to estimate the ground-state energies
of parameterized Hamiltonians with high precision without fine-tuning, and
significantly reduce the overall training cost to estimate ground-state
properties across the phases of XXZ Hamiltonian. We also employ an
active-learning strategy to further increase the training efficiency while
maintaining prediction accuracy. These encouraging results demonstrate that
NN-VQAs offer a new hybrid quantum-classical paradigm to utilize NISQ resources
for solving more realistic and challenging computational problems. | Jiaqi Miao, Chang-Yu Hsieh, Shi-Xin Zhang | 2023-08-02T10:32:57Z | http://arxiv.org/abs/2308.01068v1 | # Neural network encoded variational quantum algorithms
###### Abstract
We introduce a general framework called neural network (NN) encoded variational quantum algorithms (VQAs), or NN-VQA for short, to address the challenges of implementing VQAs on noisy intermediate-scale quantum (NISQ) computers. Specifically, NN-VQA feeds input (such as parameters of a Hamiltonian) from a given problem to a neural network and uses its outputs to parameterize an ansatz circuit for the standard VQA. Combining the strengths of NN and parameterized quantum circuits, NN-VQA can dramatically accelerate the training process of VQAs and handle a broad family of related problems with varying input parameters with the pre-trained NN. To concretely illustrate the merits of NN-VQA, we present results on NN-variational quantum eigensolver (VQE) for solving the ground state of parameterized XXZ spin models. Our results demonstrate that NN-VQE is able to estimate the ground-state energies of parameterized Hamiltonians with high precision without fine-tuning, and significantly reduce the overall training cost to estimate ground-state properties across the phases of XXZ Hamiltonian. We also employ an active-learning strategy to further increase the training efficiency while maintaining prediction accuracy. These encouraging results demonstrate that NN-VQAs offer a new hybrid quantum-classical paradigm to utilize NISQ resources for solving more realistic and challenging computational problems.
_Introduction._ - Today's noisy intermediate-scale quantum (NISQ) computers [1] are far from delivering an unambiguous quantum advantage. Variational quantum algorithms (VQAs), as one of the most representative algorithm primitives in the NISQ era [2; 3; 4; 5], utilize a quantum-classical hybrid scheme, where the quantum processor prepares target quantum states and measurement is made to extract useful information for the classical computer to explore and optimize. VQAs have now been widely applied to solve quantum optimization, quantum simulation, and quantum machine learning problems [6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16].
Among various VQAs, the variational quantum eigensolver (VQE) [6; 7] certainly stands out as one of the most exemplary algorithms. VQE employs Rayleigh-Ritz variational principle to approximate the ground state of a given Hamiltonian \(\hat{H}\) with a parameterized quantum circuit (PQC). Many studies on the strengths and fundamental limitations of VQAs are first systematically investigated and revealed by studying how VQE performs in different contexts. Despite some early hopes of VQAs' potential quantum advantages in addressing some realistic computational problems, this goal still remains elusive. In fact, it is now known that the current formulation of the vanilla VQAs faces way too many obstacles for them to deliver any practical advantages.
There is a pressing need to develop novel hybrid quantum-classical approaches to better utilize the full power of quantum computational resources while avoiding as many shortcomings of the vanilla VQAs as possible. For instance, a core problem of the standard VQA is to identify the suitable circuit parameters for a given problem, i.e. the optimization or training procedure. From a practical perspective, the training procedure often takes many steps, which leads to a large budget for measurement shots. Besides, the training procedure could be more sensitive to noise and decoherence compared to the inference procedure. Therefore, training of VQAs is expensive, as it must be conducted on very high-quality quantum devices with a large budget of measurement shots.
In terms of theoretical perspective, the difficulties associated with the optimization of VQAs stem from at least two fundamental obstacles. One severe challenge is the phenomenon of vanishing gradients named barren plateaus (BPs) [17; 18; 19; 20; 21]. Though there are many attempts to mitigate BP issues [22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32], the occurrence of BP, in general, implies that exponential quantum resources are required to navigate through the exponentially flattened cost function landscape \(C(\theta)\), which could negate the potential quantum advantages of VQAs. Another related problem for VQAs' non-convexity energy landscape is the occurrence of many local minima [33; 34], which can easily trap the training trajectories.
In the plain VQA setups, application problems are optimized and solved instance by instance with the same circuit structure, namely, we need to retrain the model for each instance. This workflow renders the optimization issues discussed above more detrimental in the VQA context. Therefore, a general framework to solve the parameterized problem instances jointly and to separate the pre-training process from the inference process is highly desired. Such a framework would address the optimization bottlenecks from two angles. For the pre-training procedure, the joint training on multiple problem instances speeds up the optimization convergence by alleviating
the BP and local minima issues. And for the inference procedure conducted by the end-users, there is no need to retrain or fine-tune the model so that the end-users with limited quantum resources are free from the thorny training issues.
In this Letter, we introduce a general framework - neural network encoded variational quantum algorithms (NN-VQAs). There are many works that integrate neural networks with quantum circuits from different angles, such as quantum state tomography, quantum error mitigation, quantum architecture search, and expressive capacity enhancement [28; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46]. Our framework further expands the possibility of such an interplay from a new perspective. NN-VQAs successfully address all the aforementioned challenges: (i) NN-VQAs use the Hamiltonian parameters as the input to a neural network, which enables solving a parameterized model through only a single pre-training process; (ii) the pre-trained NN-VQAs give accurate estimations on test Hamiltonians beyond the training set, showing good generalization capability; (iii) an active learning method can be adopted to further reduce the number of training samples and thus the number of total measurement shots; (iv) NN-VQAs can significantly speed up the optimization convergence of VQAs, alleviating the issues of BP and local minima. Therefore, by using a neural network as the encoding module, our approach provides a good ground state approximation using only a small number of training points and greatly saves the required quantum resources. Moreover, our framework can enable the separation of training and inference and sketch a potential future interface to utilize VQAs for end-users.
_Theoretical Framework._ - In this section, we introduce the framework of NN-VQE for ground state problems, and the framework can be similarly generalized to VQE for excited states [47; 48; 49] or other VQA scenarios.
The schematic workflow for NN-VQE is shown in Fig. 1. Given a parameterized Hamiltonian \(\hat{H}=\hat{H}(\mathbf{\lambda})\), where \(\mathbf{\lambda}\) consists of \(p\) different Hamiltonian parameters, our aim is to solve the ground state of the parameterized Hamiltonian. We choose a subset of \(\mathbf{\lambda}\) as the training set \(\mathbf{\tilde{\lambda}}=\{\mathbf{\tilde{\lambda}}_{i}\}\).
To train an NN-VQE, we use \(\mathbf{\tilde{\lambda}}\) as the input of the encoding neural network, and get the output
\[\mathbf{\theta}_{i}=f_{\mathbf{\phi}}(\mathbf{\tilde{\lambda}}_{i}), \tag{1}\]
where we denote the neural network as a general parameterized function \(f_{\mathbf{\phi}}\) with the training neural weights as \(\mathbf{\phi}\). The number of the output of the neural network is the same as the number of the PQC parameters, and we load each neural network output element to the corresponding circuit parameters. The PQC \(U(\mathbf{\theta})\) for VQE is initialized in the \(\ket{\mathbf{0}}=\ket{0}^{\otimes n}\) state. Therefore, the output target state for Hamiltonian \(\hat{H}(\mathbf{\tilde{\lambda}}_{i})\) should be
\[\ket{\psi_{i}}=U(\mathbf{\theta})\ket{\mathbf{0}}=U\left(f_{\mathbf{\phi}}(\mathbf{\tilde{ \lambda}}_{i})\right)\ket{\mathbf{0}}. \tag{2}\]
The cost function for ground state VQE is the expectation
Figure 1: Schematic workflow for NN-VQE. The Hamiltonian parameters \(\mathbf{\lambda}\) are the input of the encoder neural network, which produces the parameterized quantum circuit (PQC) parameters \(\mathbf{\theta}\) as output. The PQC, parameterized by \(\mathbf{\theta}\), is then used in the processing module of the VQE to prepare an output state \(\ket{\psi}=U(\mathbf{\theta})\ket{\mathbf{0}}\), where \(\ket{\mathbf{0}}\) is the initial state. The cost function can be estimated according to Eq. (3), and the weights in the neural network are optimized using a gradient-based optimizer.
Figure 2: Results on \(n=8\) qubit one-tunable-parameter 1D XXZ spin chain with a transverse field strength \(\lambda=0.75\) within NN-VQE framework: (a) Relative errors of ground-state energies of different circuit block depth \(D\) with and without dropout from the pre-trained model. (b) Fidelity between the output state of NN-VQE and the exact ground state.
of \(\hat{H}(\mathbf{\lambda})\):
\[C(\mathbf{\phi})=\sum_{i}\langle\hat{H}(\tilde{\mathbf{\lambda}}_{i})\rangle=\sum_{i}\left\langle\mathbf{0}\right|U^{\dagger}\left(f_{\mathbf{\phi}}(\tilde{\mathbf{\lambda}}_{i})\right)\hat{H}(\tilde{\mathbf{\lambda}}_{i})U\left(f_{\mathbf{\phi}}(\tilde{\mathbf{\lambda}}_{i})\right)\left|\mathbf{0}\right\rangle. \tag{3}\]
Finally, we compute the gradients with respect to the neural network (back-propagation via the PQC parameters) and minimize the cost function \(C\left(\mathbf{\phi}\right)\) using gradient descent, obtaining the optimal weights \(\mathbf{\phi^{\star}}\) for the neural network. Since such a training procedure only happen once and the trained model can be used to approximate the ground state of the family of Hamiltonians, we call this stage pre-training. Upon completion of pre-training, the efficacy of the NN-VQE can be evaluated using a test set of different \(\mathbf{\lambda}\) from the training set.
_Results_. - In this section, we demonstrate the effectiveness of our framework using numerical simulation with TensorCircuit [50]. The testbed model is the one-dimensional (1D) antiferromagnetic XXZ spin Hamiltonian with an external magnetic field subject to periodic boundary conditions
\[\hat{H}=\sum_{i}\left(X_{i}X_{i+1}+Y_{i}Y_{i+1}+\Delta Z_{i}Z_{i+1}\right)+\lambda\sum_{i}Z_{i}, \tag{4}\]
where \(\Delta\) is the anisotropy parameter and \(\lambda\) is the transverse field strength.
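To make the pipeline concrete, here is a minimal 2-qubit toy sketch of NN-VQE in PyTorch; the tiny RY-CNOT-RY ansatz and encoder widths are illustrative stand-ins for the DMERA circuit and dropout network used in our experiments, which were implemented with TensorCircuit.

```python
import torch

I2 = torch.eye(2, dtype=torch.complex128)
X = torch.tensor([[0, 1], [1, 0]], dtype=torch.complex128)
Y = torch.tensor([[0, -1j], [1j, 0]], dtype=torch.complex128)
Z = torch.tensor([[1, 0], [0, -1]], dtype=torch.complex128)
CNOT = torch.tensor([[1, 0, 0, 0], [0, 1, 0, 0],
                     [0, 0, 0, 1], [0, 0, 1, 0]], dtype=torch.complex128)

def xxz(delta, lam):
    # 2-qubit version of Eq. (4)
    return (torch.kron(X, X) + torch.kron(Y, Y) + delta * torch.kron(Z, Z)
            + lam * (torch.kron(Z, I2) + torch.kron(I2, Z)))

def ry(theta):
    c, s = torch.cos(theta / 2), torch.sin(theta / 2)
    return torch.stack([torch.stack([c, -s]), torch.stack([s, c])]).to(torch.complex128)

def ansatz_state(theta):
    # |psi> = (RY(t2) (x) RY(t3)) CNOT (RY(t0) (x) RY(t1)) |00>
    psi0 = torch.tensor([1, 0, 0, 0], dtype=torch.complex128)
    layer1 = torch.kron(ry(theta[0]), ry(theta[1]))
    layer2 = torch.kron(ry(theta[2]), ry(theta[3]))
    return layer2 @ (CNOT @ (layer1 @ psi0))

encoder = torch.nn.Sequential(torch.nn.Linear(1, 16), torch.nn.Tanh(),
                              torch.nn.Linear(16, 4))  # maps Delta -> circuit params
opt = torch.optim.Adam(encoder.parameters(), lr=1e-2)
deltas = torch.linspace(-3.0, 3.0, 20).reshape(-1, 1)  # training set of Delta values

for epoch in range(200):
    loss = torch.zeros(())
    for d in deltas:
        psi = ansatz_state(encoder(d))                         # Eq. (2)
        H = xxz(d.item(), 0.75)
        loss = loss + torch.real(torch.conj(psi) @ (H @ psi))  # Eq. (3)
    opt.zero_grad(); loss.backward(); opt.step()
```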
We start from the one-parameter XXZ model with the transverse field strength fixed to \(\lambda=0.75\). The training set of \(\Delta\) is composed of 20 equispaced points in the interval of \([-3.0,3.0]\). The performance of the NN-VQE is evaluated on an expanded test set consisting of 201 equispaced values of \(\Delta\) in the interval of \([-4.0,4.0]\). The circuit ansatz we use in this section is inspired by MERA [51; 52]. Specifically, we employ deep multi-scale entanglement renormalization ansatz (DMERA) circuits [53; 54], where \(D\) is the circuit depth in each block (see the SM for details). The neural network we use is a simple fully connected neural network with a dropout layer. The size of the input layer is 1 corresponding to the number of Hamiltonian parameters \(\Delta\), and the size of the output layer corresponds to the number of PQC parameters (see the SM for the detailed neural structure).
For the 1D XXZ spin chain consisting of 8 qubits, we pre-train the model within the NN-VQA framework and evaluate the performance with different circuit block depths \(D\). The results for ground state (GS) energy prediction are shown in Fig. 2(a). The simulation accuracy improves with larger \(D\) and dropout in the neural network. We also display the corresponding fidelities with the exact ground state in Fig. 2(b). The results underscore the ability of the NN-VQE to effectively prepare the ground state as a function of the Hamiltonian parameters without fine-tuning or retraining. We note that NN-VQE demonstrates a favorable generalization capability. As shown in Fig. 2, in regions devoid of shadows on either side (regions of no training points), the NN-VQE still provides reasonably reliable estimations.
Compared with previous work on meta-VQE [55], when the PQC structures are the same, NN-VQE uses fewer quantum resources while yielding better ground-state energy estimation results (see the SM for details). Such advantages are mainly brought by the expressive power of general neural networks.
In the previous analysis, the training set is selected in an equispaced manner. Such a strategy can be improved by utilizing active learning techniques [56]. We can maintain the same level of ground-state energy accuracy while using a smaller number of training points.
Various active learning schemes can be easily incorporated into the NN-VQE. For example, we begin by randomly selecting one point from the \(\Delta\) pool \(P\) as the initial training set. We train the NN-VQE based on the training set and get the neural network weights \(\mathbf{\phi^{\star}}\). Obviously, the training set \(P^{*}\) is a subset of the pool \(P\). Subsequently, we calculate the acquisition function specifically designed for this scenario. The active learning acquisition function in our problem is defined as
\[C_{AL}(\Delta)=\langle\hat{H}^{2}(\Delta)\rangle_{\mathbf{\phi^{\star}}}-\langle\hat{H}(\Delta)\rangle_{\mathbf{\phi^{\star}}}^{2}+\mu\min_{\Delta^{\star}\in P^{\star}}|\Delta-\Delta^{\star}|, \tag{5}\]
where \(\Delta\in P\), \(\Delta^{*}\in P^{*}\), and \(\mu\) is a preset hyperparameter. The first two terms are the variance of the Hamiltonian \(\hat{H}(\Delta)\) with \(\mathbf{\phi^{\star}}\) trained on the training set. In the last
Figure 3: Active learning for NN-VQE. We use MERA circuit with \(D=2\). The black line is the result of VQE separately trained on each point. The green line is the NN-VQE with dropout and has the same circuit structure as the black line. The blue line shares the same structure (NN encoder and circuit ansatz) as the green line but uses active learning to reduce sample size. The training set of the green line consists of a set of equispaced 20 \(\Delta\)s in the interval of \([-3.0,3.0]\) used in the previous analysis. However, by employing active learning, we use only 11 actively selected points to attain the blue line. The training set used for active learning is indicated by dots along the blue line. Remarkably, despite the reduced training set size, the blue line still exhibits a reliable estimation of the ground-state energy. The actively chosen dots are projected onto the x-axis as the orange dots, and the orange inverted triangles are the phase transition points.
term, we first calculate the distance between \(\Delta\) and all \(\mathbf{\Delta^{*}}\) in the training set and find the minimum distance. The hyperparameter \(\mu\) lets us favor points with large variance while preventing points too close to the existing training set from being chosen. The two terms reflect the exploitation and exploration trade-off of the active learning technique. We add the \(\Delta\) with the largest \(C_{AL}(\Delta)\) to the training set. Iteratively, we repeat this process of expanding the training set until the test relative error of the ground-state energy falls below a predetermined threshold.
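A sketch of this acquisition loop is given below; the variance array is a placeholder that would come from measuring \(\langle\hat{H}^{2}\rangle-\langle\hat{H}\rangle^{2}\) with the currently trained NN-VQE, and \(\mu=0.5\) is an illustrative value.

```python
import numpy as np

def select_next_point(pool, train_set, variance, mu=0.5):
    """Return the pool point maximizing the acquisition C_AL of Eq. (5)."""
    scores = [var + mu * min(abs(d - d_star) for d_star in train_set)
              for d, var in zip(pool, variance)]
    return pool[int(np.argmax(scores))]

pool = np.linspace(-3.0, 3.0, 61)
train_set = [float(np.random.choice(pool))]  # start from one random point
variance = np.random.rand(len(pool))         # placeholder for model variances
train_set.append(select_next_point(pool, train_set, variance))
```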
By this method, we obtain a training set consisting of 11 points. The corresponding results are shown in Fig. 3. Remarkably, even with a training set size that is only half of the previous set, the model still gives a reliable estimation of the ground-state energy. Moreover, when we visualize the training sets (see the orange dots in Fig. 3), we find them nearly equispaced except for the points near the phase transition point of the Hamiltonian (see the orange inverted triangles in Fig. 3). This observation roughly corresponds to an intuition that the ground-state wavefunction might experience a more dramatic change around the phase transition point which requires more training points to better capture.
Another remarkable advantage of NN-VQE is the training efficiency. As shown in Fig. 4 (a)(c), NN-VQE has a significant speedup in the optimization procedure compared with plain VQE. The energy cost function drops more rapidly, which offers great benefits for NISQ computers since fewer training epochs and thus fewer quantum resources are required. Such advantages stem from the NN-PQC hybrid architecture. The neural network brings a more dramatic change in the PQC parameters at the beginning stage of the optimization process as shown in Fig. 4 (b)(d), which might also be relevant in mitigating the BP issue.
In order to demonstrate the effectiveness of the NN-VQE in estimating a multiparameter Hamiltonian, we extend our study to the two-parameter XXZ model. In this model, both the anisotropy parameter \(\Delta\) and the transverse field strength \(\lambda\) are tunable in the Hamiltonian in Eq. 4. The training set for \(\Delta\) consists of 10 equispaced points in the interval \([-1.0,1.0]\), while that for \(\lambda\) consists of 5 equispaced points in the interval \([0.0,1.0]\). The ansatz used is the hardware-efficient ansatz [58] with two-qubit gates in the ladder layout of depth \(D\) (see the SM for details). The encoding neural network shares a similar structure as in the one-parameter case, but the input now takes the two values \(\Delta\) and \(\lambda\).
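A minimal sketch of such a two-input encoder, assuming a differentiable energy expectation `energy(theta, delta, lam)` supplied by a quantum-circuit simulator; the layer widths and helper names are illustrative, not the exact architecture used:

```python
# A sketch of the two-input encoding network: (Delta, lambda) -> PQC angles.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, n_params, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_params),
        )

    def forward(self, delta, lam):
        # delta, lam: 0-d torch tensors holding the Hamiltonian parameters
        return self.net(torch.stack([delta, lam], dim=-1))

def pretrain(encoder, energy, grid, epochs=500, lr=1e-3):
    """Minimize the summed energy over the (Delta, lambda) training grid;
    `energy(theta, delta, lam)` is an assumed differentiable <H> callable."""
    opt = torch.optim.Adam(encoder.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = sum(energy(encoder(d, l), d, l) for d, l in grid)
        loss.backward()
        opt.step()
```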
The numerical results are presented in Fig. 5. Remarkably, the NN-VQE, using a neural network with two inputs, yields excellent performance in estimating the ground state across different phases. This result implies the robustness and versatility of the NN-VQE in simulating complex quantum systems governed by a multiple-parameter Hamiltonian.
_Discussion. -_ In this Letter, we introduce the NN-VQA framework. More specifically, we first use a neural network to transform the Hamiltonian parameters to the optimized parameters in the PQC for VQA. We show the validity and effectiveness of the framework in solving the XXZ Hamiltonian ground state with different parameters through only one pre-training procedure without
Figure 5: Relative errors of ground-state energies for an \(n=12\) 1D XXZ spin chain with two tunable Hamiltonian parameters, using the hardware-efficient ansatz with circuit depth \(D=1,2\). The red dots are the training set. The red lines mark the exact phase transition between the ferromagnetic (FM) phase and the XY phase [57].
Figure 4: Speedup in the optimization process of NN-VQE and the corresponding PQC parameter changes. (a)(c) The ground-state energy relative errors for an \(n=12\) XXZ spin chain at \(\Delta=1.5,2.0\) are shown with respect to epochs. The hardware-efficient ansatz with \(D=3\) is used. The red and blue lines correspond to the ground-state energy relative errors and standard deviations of NN-VQE and VQE, respectively. (b)(d) The parameter differences when training VQE and NN-VQE at the corresponding \(\Delta\). The difference is the sum of the absolute values of parameter differences between epochs. NN-VQE brings a more dramatic circuit parameter change at the beginning of the optimization process, which speeds up the optimization.
any problem instance specific fine-tuning. In order to further reduce the pre-training overhead, we also employ an active learning heuristic in which the training set is built progressively, greatly reducing its required size. We also find that the NN-VQE pipeline speeds up the training process.
On the neural network side, we can introduce more physics-inspired neural network structures for multi-parameter Hamiltonian VQE problems. For example, considering the random Ising model where the couplings at each bond or site differ, we can abstract the Hamiltonian parameters as a graph whose node and edge weights describe the Hamiltonian. In such cases, we believe a graph neural network (GNN) [59; 60] is better suited for the encoding task, as the symmetry and geometry can be properly addressed in a well-designed GNN. Moreover, exploiting local geometry as in the GNN approach has been proven exponentially more sample efficient in learning quantum state properties [61; 62; 63; 64].
Our framework envisions a future paradigm for utilizing quantum computers. The encoding neural network can be pre-trained on high-quality quantum devices with a large time and measurement budget. The pre-trained model can be efficiently saved on classical computers and shared via the cloud. Since the NN-VQE can target a large family of quantum systems connected by many parameters, a large pre-trained model could be of general interest for solving various problems. The end-users can download the large pre-trained classical model and extract the trained circuit parameters for the specific problem they are interested in solving. In this paradigm, the end-users are free from training on quantum computers and can utilize the power of quantum computers more efficiently. It is also worth noting that, at the training stage, the multiple training points make it straightforward to exploit data parallelism and pre-train the NN-VQE on many quantum computers.
_Acknowledgements:_ We gratefully thank Gaoxiang Ye for useful discussions.
|
2303.08869 | Probing Cosmological Particle Production and Pairwise Hotspots with Deep
Neural Networks | Particles with masses much larger than the inflationary Hubble scale, $H_I$,
can be pair-produced non-adiabatically during inflation. Due to their large
masses, the produced particles modify the curvature perturbation around their
locations. These localized perturbations eventually give rise to localized
signatures on the Cosmic Microwave Background (CMB), in particular, pairwise
hotspots (PHS). In this work, we show that Convolutional Neural Networks (CNN)
provide a powerful tool for identifying PHS on the CMB. While for a given
hotspot profile a traditional Matched Filter Analysis is known to be optimal, a
Neural Network learns to effectively detect the large variety of shapes that
can arise in realistic models of particle production. Considering an idealized
situation where the dominant background to the PHS signal comes from the
standard CMB fluctuations, we show that a CNN can isolate the PHS with
$\mathcal{O}(10)\%$ efficiency even if the hotspot temperature is
$\mathcal{O}(10)$ times smaller than the average CMB fluctuations. Overall, the
CNN search is sensitive to heavy particle masses $M_0/H_I=\mathcal{O}(200)$,
and constitutes one of the unique probes of very high energy particle physics. | Taegyun Kim, Jeong Han Kim, Soubhik Kumar, Adam Martin, Moritz Münchmeyer, Yuhsin Tsai | 2023-03-15T18:34:28Z | http://arxiv.org/abs/2303.08869v1 | # Probing Cosmological Particle Production and Pairwise Hotspots with Deep Neural Networks
###### Abstract
Particles with masses much larger than the inflationary Hubble scale, \(H_{I}\), can be pair-produced non-adiabatically during inflation. Due to their large masses, the produced particles modify the curvature perturbation around their locations. These localized perturbations eventually give rise to localized signatures on the Cosmic Microwave Background (CMB), in particular, pairwise hotspots (PHS). In this work, we show that Convolutional Neural Networks (CNN) provide a powerful tool for identifying PHS on the CMB. While for a given hotspot profile a traditional Matched Filter Analysis is known to be optimal, a Neural Network learns to effectively detect the large variety of shapes that can arise in realistic models of particle production. Considering an idealized situation where the dominant background to the PHS signal comes from the standard CMB fluctuations, we show that a CNN can isolate the PHS with \(\mathcal{O}(10)\%\) efficiency even if the hotspot temperature is \(\mathcal{O}(10)\) times smaller than the average CMB fluctuations. Overall, the CNN search is sensitive to heavy particle masses \(M_{0}/H_{I}=\mathcal{O}(200)\), and constitutes one of the unique probes of very high energy particle physics.
## 1 Introduction
An era of cosmic inflation [1; 2; 3] in the primordial Universe remains an attractive paradigm to explain the origin of (approximately) scale invariant, Gaussian, and adiabatic primordial perturbations, inferred through cosmic microwave background (CMB) and large scale structure (LSS) observations. This inflationary era can be characterized by a rapid expansion of spacetime, controlled by an approximately constant Hubble scale \(H_{I}\). Excitingly, based on the current constraints, \(H_{I}\) can be as large as \(5\times 10^{13}\) GeV [4]. This fact, coupled with the feature that particles with masses up to order \(H_{I}\) can get quantum mechanically produced during inflation, makes the inflationary era a natural and unique arena to _directly_ probe very high energy particle physics.
There are several classes of mechanisms through which heavy particles, which we label as \(\chi\), can be produced during inflation. When their mass \(m_{\chi}\lesssim H_{I}\), quantum fluctuations of the inflationary spacetime itself can efficiently produce the \(\chi\) particles. However, for \(m_{\chi}\gg H_{I}\)
this production gets suppressed exponentially as \(e^{-\pi m_{\chi}/H_{I}}\)[5], and other mechanisms are necessary for efficient particle production to occur.
To illustrate this, we consider the standard slow-roll inflationary paradigm containing an inflaton field \(\phi\) whose homogeneous component we denote by \(\phi_{0}(t)\). Normalization of the primordial scalar power spectrum requires the 'kinetic energy' of this homogeneous component to be \(|d\phi_{0}/dt|^{1/2}\approx 60H_{I}\)[4]. Therefore, heavy particles, if appropriately coupled to the inflaton kinetic term, can be efficiently produced for \(m_{\chi}\lesssim 60H_{I}\). One class of examples of this involves a coupling of the type \(\partial_{\mu}\phi J^{\mu}\) where \(J^{\mu}\) is a charged current made up of the \(\chi\) field. For some recent work implementing this idea see, e.g., Refs. [6; 7; 8; 9; 10; 11; 12; 13; 14]. In these constructions, heavy particle production happens continuously in time, in a scale-invariant fashion. In other words, the coupling of the inflaton to \(\chi\) particles does not break the shift symmetry, \(\phi\to\phi+\text{constant}\), of the inflaton.
A different class of mechanisms can lead to particle production at specific times during the inflationary evolution. This can happen if the shift symmetry of the inflaton is broken in a controlled manner, e.g. to a discrete shift symmetry. This breaking of shift symmetry translates into a violation of scale invariance, and selects out specific time instant(s) when particle production can occur. Examples of such mechanisms appear in Refs. [15; 16; 17; 18; 19; 20], and see Refs. [21; 22] for reviews.
A particularly interesting example of this latter mechanism arises in the context of ultra-heavy particles with time-dependent masses. More specifically, suppose \(m_{\chi}\) varies as a function of \(\phi\) in a way such that, as \(\phi\) passes through a specific point \(\phi_{*}\) on the inflaton potential at time \(t_{*}\), \(m_{\chi}(\phi)\) passes through a local minimum. In this case, non-adiabatic \(\chi\) particle production can occur at time \(t_{*}\). Following their production, \(\chi\) particles can again become heavy, \(m_{\chi}\gg|d\phi_{0}/dt|^{1/2}\), and owing to this large mass they can backreact on the inflationary spacetime, contributing to the curvature perturbation around their locations.
We can describe the effects of these additional curvature perturbations qualitatively in the following way, leaving the details for the next section. Following their production, the perturbations exit the horizon when their wavelengths become larger than \(1/H_{I}\) and become frozen in time. After the end of inflation, they eventually reenter the horizon and source additional under- or over-densities in the thermal plasma in the radiation dominated Universe. Overdense regions, for example, would trap more plasma, and therefore would emit more photons at the time of CMB decoupling.1 Therefore, we would observe localized regions on the sky where CMB would appear hotter than usual. As we will discuss below, the sizes of these localized'spots' are determined by the size of the comoving horizon, \(\eta_{*}\), at the time of particle production \(t_{*}\). While \(\eta_{*}\) can take any value, for concreteness we will consider \(\eta_{*}\sim 100\) Mpc in this work. This implies that the localized spots would subtend \(\sim 1^{\circ}\) on the CMB sky.
Footnote 1: To be more accurate, one also needs to take into account the gravitational redshift of the photons as they climb out of the gravitational potential wells. We will compute this effect in the next section.
The next question one may ask is what is an efficient strategy to look for such signatures.
Since this scenario is associated with a violation of scale invariance, characterized by \(\eta_{*}\), one would expect to see 'features' in the CMB power spectrum or even higher-point correlation functions. However, in the regime we focus on, the total number of produced \(\chi\) particles is small enough that the CMB power spectrum is minimally affected, as we explicitly check later. On the other hand, the spots can still be individually bright enough that we can look for them directly in position space. Indeed, this class of signatures in the context of heavy particle production was discussed in Refs. [23; 24], and in Ref. [25] the associated CMB phenomenology was described and a simple 'cut-and-count' search strategy was developed. Using the cut-and-count strategy, Ref. [25] constrained the parameter space of ultra-heavy scalars and illustrated regions where a position space search is more powerful than power spectrum-based searches.
In more detail, Ref. [25] considered a single instance of particle production during the time when CMB-observable modes exit the horizon. Conservation of momentum implies that such heavy particles are produced in pairs. However, owing to their large mass, the particles do not drift significantly following their production, and it was argued that the separation between the two particles forming a pair can be taken to be a uniformly random number between \(0\) and \(\eta_{*}\). Finally, it was shown that the coupling \(g\) of \(\chi\) to the inflaton determines how hot/cold the associated spot on the CMB is with the heavy particle mass \(m_{\chi}\) determining the total number of such spots on the sky. To summarize, the three parameters determining the hot/cold spot phenomenology are \(\{g,m_{\chi},\eta_{*}\}\), as will be reviewed in more detail in the next section. While both cold or hot spots can arise depending on the value of \(\eta_{*}\), for the choices of \(\eta_{*}\) in this work, only hotspots will appear on the CMB. Therefore, we will often be referring to these localized spots as hotspots, in particular as pairwise hotspots (PHS) since the spots appear in pairs.
In the present work, we improve upon Ref. [25] in several important ways. First, in Ref. [25] we only considered hotspots that lie within the last scattering surface, with a thickness of \(\Delta\eta\approx 19\) Mpc [26]. In this work we adopt a more realistic setup and include hotspots that are distributed in a larger region around the last scattering surface. We take this region to have a thickness of \(2\eta_{*}\) and we show in Sec. 2 how hotspots lying outside the \(\Delta\eta\) shell can still affect the CMB. The overall signature of PHS then changes non-trivially. For instance, with the improved treatment we can have one spot of a pair lying on the CMB surface, while the other can lie off the CMB surface, leading to an asymmetric signal.
Second, we develop a neural network (NN)-based search for the hotspot profiles. In principle, a neural network is not necessary to search for a profile of known shape which is linearly added to the Gaussian background. In this case, the standard method of constructing a so-called matched filter can be shown to be the optimal statistic to detect the profile (see, e.g., [27]). Matched filter-based searches for radially symmetric profiles in the CMB have been previously reported for example in [28; 29; 30], with the physical motivation of searching for inflationary bubble collisions. Various matched filters have also been used in the Planck Anisotropy and Statistics Analysis [31; 32] without finding a significant excess. However, the signal which we are looking for here is more complicated. Profiles come in pairs (breaking
radial symmetry of the profile), they can be overlapping, and, depending on their production time and orientation with respect to the surface of last scattering, their appearance on the CMB changes. While it is in principle possible to cover the entire space of profiles with a very large bank of matched filters, this would be a complicated and computationally challenging approach. A neural network, on the other hand, can learn an effective representation of these filters which interpolates well between all profile shapes, including overlapping ones. We also implement the matched filter method below, and show that in the simplified case with a single profile type, our neural network performs similar to the optimal matched filter.
This work is organized as follows. We first describe a simple model of \(\chi\) particle production in Sec. 2 and summarize how the total number of produced particles depends on the model parameters along with various properties of the PHS. We improve the calculation of the hotspot profiles by taking into account the line-of-sight distance to the location of the hotspots which can be off the CMB surface. In Sec. 3, we describe the simulation of the PHS signals and the CMB maps in angular space, assuming that the dominant background to the PHS signal comes from the standard CMB fluctuations. In Sec. 4, we describe the convolutional neural network (CNN) analysis and estimate the sensitivity the CNN can achieve for a PHS search. We then translate this sensitivity to the mass-coupling parameter space of the heavy particles. We also compare the CNN analysis with a matched filter analysis for simplified hotspot configurations. We conclude in Sec. 5.
## 2 Pairwise Hotspot Signals
To model heavy particle production, we consider a scenario where the mass of \(\chi\) is inflaton-dependent, \(m_{\chi}(\phi)\). Therefore as \(\phi\) moves along its potential, efficient, non-adiabatic particle production can occur if \(m_{\chi}(\phi)\) varies with \(\phi\) rapidly. With a mass term \(m_{\chi}(\phi)^{2}\chi^{2}\), _pairs_ of \(\chi\) particles would be produced, as required by three-momenta conservation. The phenomenology of such heavy particles depends on their mass, coupling to the inflaton, and the horizon size at the time of their production. We now review these properties qualitatively, referring to Ref. [25] for a more complete discussion.
### Inflationary Particle Production
We parametrize the inflationary spacetime metric as,
\[ds^{2}=-dt^{2}+a^{2}(t)d\vec{x}^{2}, \tag{1}\]
with the scale factor \(a(t)=e^{H_{I}t}\) and \(H_{I}\) the Hubble scale during inflation that we take to be (approximately) constant. To model particle production in a simple way, we assume \(m_{\chi}(\phi)\) passes through a minimum as \(\phi\) crosses a field value \(\phi_{*}\). Then we can expand \(m_{\chi}(\phi)\) near \(\phi_{*}\) as,
\[m_{\chi}(\phi)=m_{\chi}(\phi_{*})+\frac{1}{2}m_{\chi}^{\prime \prime}(\phi_{*})(\phi-\phi_{*})^{2}+\cdots, \tag{2}\]
where primes denote derivatives with respect to \(\phi\). Thus the mass term would appear in the potential as,
\[m_{\chi}(\phi)^{2}\chi^{2}=m_{\chi}(\phi_{*})^{2}\chi^{2}+m_{\chi}( \phi_{*})m_{\chi}^{\prime\prime}(\phi_{*})(\phi-\phi_{*})^{2}\chi^{2}+\cdots. \tag{3}\]
While away from \(\phi_{*}\), \(m_{\chi}(\phi)\) can vary in different ways, most of the important features of particle production are determined by the behavior of \(m_{\chi}(\phi)\) around \(\phi_{*}\). For example, the number density of \(\chi\) particles is determined by \(m_{\chi}(\phi_{*})\), as we will see below. Similarly, the spatial profiles of the hotspots on the CMB are determined by the dependence \((\phi-\phi_{*})^{2}\sim\dot{\phi}_{0}^{2}(t-t_{*})^{2}\sim(\dot{\phi}_{0}/H_{I})^{2}\log(\eta/\eta_{*})^{2}\), where we have used the relation between \(t\) and conformal time \(\eta\), \(\eta=(-1/H_{I})e^{-H_{I}t}\) (an overdot here denotes a derivative with respect to time). Given the importance of the physics around \(\phi_{*}\), we will denote \(m_{\chi}(\phi_{*})^{2}\equiv M_{0}^{2}\), \(m_{\chi}(\phi_{*})m_{\chi}^{\prime\prime}(\phi_{*})\equiv g^{2}\), and \(\phi_{*}\equiv\mu/g\), to describe particle production. Thus we will write the Lagrangian for \(\chi\) as,
\[\mathcal{L}_{\chi}=-\frac{1}{2}(\partial_{\mu}\chi)^{2}-\frac{1}{ 2}\left((g\phi-\mu)^{2}+M_{0}^{2}\right)\chi^{2}. \tag{4}\]
As \(\phi\) nears the field value \(\phi_{*}\), the mass of the \(\chi\) field changes non-adiabatically and particle production can occur.
The efficiency of particle production depends on the parameters \(g\), \(M_{0}\), and \(\eta_{*}\), the size of the comoving horizon at the time of particle production. It can be computed using the standard Bogoliubov approach, and the resulting probability of particle production is given by [33; 20],
\[|\beta|^{2}=\exp\left(-\frac{\pi(M_{0}^{2}-2H_{I}^{2}+k^{2}\eta_ {*}^{2}H_{I}^{2})}{g|\dot{\phi}_{0}|}\right). \tag{5}\]
The normalization of the scalar primordial power spectrum, in the context of single-field slow-roll inflation, fixes \(A_{s}=H_{I}^{4}/(4\pi^{2}\dot{\phi}_{0}^{2})\approx 2.1\times 10^{-9}\)[4] which determines \(\dot{\phi}_{0}\approx(58.9H_{I})^{2}\).
The above expression (5) characterizes the probability of particle production with physical momentum \(k_{p}=k\eta_{*}H_{I}\). The total number density of particles can then be computed by integrating over all such \(k\)-modes,
\[n=\frac{1}{2\pi^{2}}\int_{0}^{\infty}dk_{p}k_{p}^{2}e^{-\pi k_{p }^{2}/(g|\dot{\phi}_{0}|)}e^{-\pi(M_{0}^{2}-2H_{I}^{2})/(g|\dot{\phi}_{0}|)}= \frac{1}{8\pi^{3}}\left(g\dot{\phi}_{0}\right)^{3/2}e^{-\pi(M_{0}^{2}-2H_{I}^ {2})/(g|\dot{\phi}_{0}|)}. \tag{6}\]
From an observational perspective, it is more convenient to relate \(n\) to the total number of spots that would be visible on the CMB sky. To that end, we need to specify the associated spacetime volume. Considering a shell of thickness \(\Delta\eta_{s}\) around the CMB surface, the total number of spots in that shell is given by [25],
\[N_{\rm spots} =n\times\left(\frac{a_{*}}{a_{0}}\right)^{3}\times 4\pi\chi_{\rm rec }^{2}\Delta\eta_{s}\,,\] \[=\frac{1}{2\pi^{2}}\left(\frac{g\dot{\phi}_{0}}{H_{I}^{2}}\right)^ {3/2}\frac{\Delta\eta_{s}}{\chi_{\rm rec}}(k_{*}\chi_{\rm rec})^{3}e^{-\pi(M_{ 0}^{2}-2H_{I}^{2})/(g|\dot{\phi}_{0}|)}\,, \tag{7}\] \[\approx 4\times 10^{8}\times g^{3/2}\left(\frac{\Delta\eta_{s}}{1 00~{}{\rm Mpc}}\right)\left(\frac{100~{}{\rm Mpc}}{\eta_{*}}\right)^{3}e^{- \pi(M_{0}^{2}-2H_{I}^{2})/(g|\dot{\phi}_{0}|)}\,.\]
Here \(a_{*}\) and \(a_{0}=1\) are the scale factors at the time of particle production and today, respectively. The quantity \(\chi_{\rm rec}\) is the distance of the CMB surface from us and approximately equals 13871 Mpc, obtained from Planck's best-fit \(\Lambda\)CDM parameters, and \(k_{*}=a_{*}H_{I}=1/\eta_{*}\) is the mode that exits the horizon at the time of particle production.
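As a quick numerical illustration of Eq. (7), a sketch in units where \(H_{I}=1\), with \(|\dot{\phi}_{0}|=(58.9H_{I})^{2}\) fixed by the power-spectrum normalization:

```python
# A numerical sketch of Eq. (7), in units where H_I = 1.
import numpy as np

PHIDOT = 58.9**2  # |d(phi_0)/dt| in units of H_I^2

def n_spots(g, M0, eta_star=100.0, deta_shell=100.0):
    """Expected number of spots in a shell of thickness deta_shell [Mpc]
    around the CMB surface; M0 in units of H_I, eta_star in Mpc."""
    prefac = 4e8 * g**1.5 * (deta_shell / 100.0) * (100.0 / eta_star)**3
    return prefac * np.exp(-np.pi * (M0**2 - 2.0) / (g * PHIDOT))

def m0_bound(g, n_max, eta_star=100.0, deta_shell=100.0):
    """Invert Eq. (7): the mass for which n_spots equals n_max."""
    prefac = 4e8 * g**1.5 * (deta_shell / 100.0) * (100.0 / eta_star)**3
    return np.sqrt(g * PHIDOT / np.pi * np.log(prefac / n_max) + 2.0)

print(n_spots(g=1.0, M0=150.0))  # ~0.6 spots for this benchmark
```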
### Effect on the CMB
We now discuss the detailed properties of the spots and how they modify the CMB.
Primordial Curvature Perturbation from Heavy Particles. Owing to their large mass, the heavy particles can backreact on the spacetime metric around their locations, and can give rise to non-trivial curvature perturbations. The profile of such a curvature perturbation can be computed using the in-in formalism and the result is given by [24],
\[\langle\zeta_{\rm HS}(r)\rangle=\frac{H_{I}}{8\epsilon\pi M_{\rm pl }^{2}}\begin{cases}M(\eta=-r),&\text{if }r\leq\eta_{*}\\ 0,&\text{if }r>\eta_{*}\end{cases}. \tag{8}\]
Here \(\epsilon=|\dot{H}_{I}|/H_{I}^{2}\) is a slow-roll parameter, and we have anticipated that this curvature perturbation would give rise to a hotspot (HS), rather than a coldspot. Importantly, the variation of the mass as a function of conformal time \(\eta\) controls the spatial profile. This variation can be computed from Eq. (4) by noting the slow-roll equation \(\phi-\phi_{*}\approx\dot{\phi}_{0}(t-t_{*})\), which gives
\[M(\eta)^{2}=\frac{g^{2}\dot{\phi}_{0}^{2}}{H_{I}^{2}}\ln(\eta/ \eta_{*})^{2}+M_{0}^{2}. \tag{9}\]
Here we have used the relation between cosmic time \(t\) and the conformal time \(\eta\), that also determines the size of the comoving horizon, \(t-t_{*}=-(1/H_{I})\ln{(\eta/\eta_{*})}\).
Using the slow-roll relation \(\dot{\phi}_{0}^{2}=2\epsilon H_{I}^{2}M_{\rm pl}^{2}\) and the fact that \(M_{0}^{2}\sim g|\dot{\phi}_{0}|\) so that \(N_{\rm spots}\) is not significantly exponentially suppressed (see Eq. (7)), we can drop the contribution of the second term in Eq. (9) away from \(\eta_{*}\). The profile can then be simply written as,
\[\langle\zeta_{\rm HS}(r)\rangle=\frac{gH^{2}}{4\pi|\dot{\phi}_{0 }|}\ln(\eta_{*}/r)\theta(\eta_{*}-r). \tag{10}\]
Given the typical size of a standard quantum mechanical fluctuation \(\langle\zeta_{q}^{2}\rangle^{1/2}\sim H^{2}/(2\pi\dot{\phi}_{0})\), we see that the curvature perturbation associated with a hotspot differs primarily by a factor of \(g/2\). In this work we will choose \(g\sim\mathcal{O}(1)\), so the two types of perturbations will be of the same order of magnitude.
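For later numerical use, a minimal sketch of the position-space profile in Eq. (10), in units where \(H_{I}=1\):

```python
# Hotspot curvature profile of Eq. (10); units H_I = 1, |phidot| = 58.9^2.
import numpy as np

PHIDOT = 58.9**2

def zeta_hs(r, eta_star, g=1.0):
    """<zeta_HS(r)> = (g H^2 / (4 pi |phidot|)) ln(eta_star/r), r < eta_star."""
    r = np.asarray(r, dtype=float)
    prof = g / (4 * np.pi * PHIDOT) * np.log(eta_star / np.clip(r, 1e-12, None))
    return np.where(r < eta_star, prof, 0.0)
```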
CMB Anisotropy. After these fluctuation modes reenter the horizon, they source temperature anisotropies and give rise to localized spots on the CMB sky. To compute the resulting anisotropies, we first write metric perturbations,
\[ds^{2}=-(1+2\Psi)dt^{2}+a^{2}(t)(1+2\Phi)\delta_{ij}dx^{i}dx^{j}, \tag{11}\]
in the Newtonian gauge. The temperature fluctuations of the CMB corresponding to Fourier mode \(\vec{k}\), pointing to direction \(\hat{n}\) in the sky is given by,
\[\Theta(\vec{k},\hat{n},\eta_{0})=\sum_{l}i^{l}(2l+1)\mathcal{P}_{l}(\hat{k}\cdot \hat{n})\Theta_{l}(k,\eta_{0}). \tag{12}\]
Here the multipole \(\Theta_{l}(k,\eta_{0})\) depends on the primordial perturbation \(\zeta(\vec{k})\) and a transfer function \(T_{l}(k)\) as,
\[\Theta_{l}(k,\eta_{0})=T_{l}(k)\zeta(\vec{k}), \tag{13}\]
with \(\eta_{0}\) denoting the conformal age of the Universe today. Importantly, for our scenario \(T_{l}(k)\) itself can be computed exactly as in the standard \(\Lambda\)CDM cosmology. It can be computed after taking into account the Sachs-Wolfe (SW), the Integrated Sachs-Wolfe (ISW), and the Doppler (Dopp) effect [34],
\[\begin{split}\Theta_{l}(k,\eta_{0})&\simeq\left( \Theta_{0}(k,\eta_{\rm rec})+\Psi(k,\eta_{\rm rec})\right)j_{l}(k(\eta_{0}-\eta _{\rm rec}))\\ &+\int_{0}^{\eta_{0}}d\eta e^{-\tau}\left(\Psi^{\prime}(k,\eta)- \Phi^{\prime}(k,\eta)\right)j_{l}(k(\eta_{0}-\eta))\\ &+3\Theta_{1}(k,\eta_{\rm rec})\left(j_{l-1}(k(\eta_{0}-\eta_{ \rm rec}))-(l+1)\frac{j_{l}(k(\eta_{0}-\eta_{\rm rec}))}{k(\eta_{0}-\eta_{\rm rec })}\right)\\ &\equiv\left(f_{\rm SW}(k,l,\eta_{0})+f_{\rm ISW}(k,l,\eta_{0})+ f_{\rm Dopp}(k,l,\eta_{0})\right)\zeta(\vec{k})\,,\end{split} \tag{14}\]
where \(\tau\) is the optical depth. The above expression relates a primordial perturbation \(\zeta\) to a temperature anisotropy \(\Theta_{l}\).
Temperature Anisotropy due to Heavy Particles. Regardless of what the origin of \(\zeta(\vec{k})\) is, we can compute \(f_{\rm SW}(k,l,\eta_{0})\), \(f_{\rm ISW}(k,l,\eta_{0})\), and \(f_{\rm Dopp}(k,l,\eta_{0})\) as in the standard \(\Lambda\)CDM cosmology. Thus, converting the position space profile in Eq. (10) to momentum space and using Eq. (14), we can get the observed profile of a hotspot on the CMB sky. The Fourier transform of the profile (10) can be written as,
\[\langle\zeta_{\rm HS}(\vec{k})\rangle=e^{-i\vec{k}\cdot\vec{x}_{\rm HS}}\frac {f(k\eta_{*})}{k^{3}}, \tag{15}\]
with a profile function
\[f(x)=\frac{gH^{2}}{\dot{\phi}_{0}}({\rm Si}(x)-\sin(x)),\quad{\rm Si}(x)=\int _{0}^{x}dt\sin(t)/t. \tag{16}\]
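A numerical sketch of Eq. (16), using the sine integral from scipy and working in units where \(H_{I}=1\):

```python
# A sketch of the Fourier profile function in Eq. (16), with H_I = 1.
import numpy as np
from scipy.special import sici

PHIDOT = 58.9**2  # |phidot| in units of H_I^2

def profile_f(x, g=1.0):
    """f(x) = (g H^2 / phidot) * (Si(x) - sin(x))."""
    si, _ = sici(x)  # sici returns (Si(x), Ci(x)) and is vectorized
    return g / PHIDOT * (si - np.sin(x))
```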
We parametrize the distance to the hotspot as,
\[\vec{x}_{0}-\vec{x}_{\rm HS}=-(\eta_{0}-\eta_{\rm HS})\hat{n}_{\rm HS}. \tag{17}\]
Here \(\vec{x}_{0}\) and \(\vec{x}_{\rm HS}\) parametrize our and the hotspot locations, respectively, and \(\hat{n}_{\rm HS}\) points to the direction of the hotspot. The quantity \(\eta_{\rm HS}\) denotes the location of the hotspot in
conformal time, with \(\eta_{0}\) being the conformal time today. In the earlier paper [25], we took the hotspot to be on the CMB surface and hence set \(\eta_{\rm HS}=\eta_{\rm rec}\approx 280\) Mpc. In this work, we allow the hotspots to be away from the last scattering surface, with \(\eta_{\rm HS}\) between \(\eta_{\rm rec}-\eta_{*}\) and \(\eta_{\rm rec}+\eta_{*}\), and study their signals on the CMB surface. This setup is summarized in Fig. 1.
As derived earlier, the temperature due to the hotspot is given by (dropping \(\eta_{0}\) from the argument),
\[\Theta(\vec{x}_{0},\hat{n})=\int\frac{d^{3}\vec{k}}{(2\pi)^{3}}e^{i\vec{k}\cdot(\vec{x}_{0}-\vec{x}_{\rm HS})}\sum_{l}i^{l}(2l+1)\mathcal{P}_{l}(\hat{k}\cdot\hat{n})\left(f_{\rm SW}(k,l)+f_{\rm ISW}(k,l)+f_{\rm Dopp}(k,l)\right)\frac{f(k\eta_{*})}{k^{3}}. \tag{18}\]
Here \(\hat{n}\) denotes the direction of observation. The functions \(f_{\rm SW}(k,l)\) and \(f_{\rm ISW}(k,l)\) are extracted from the transfer function using CLASS[35; 36] as in Ref. [25]. Using the plane wave expansion,
\[e^{-i\vec{k}\cdot\vec{r}}=\sum_{l=0}^{\infty}(-i)^{l}(2l+1)j_{l}(kr)\mathcal{P}_{l}(\hat{k}\cdot\hat{r}), \tag{19}\]
and the relation
\[\mathcal{P}_{l}(\hat{k}\cdot\hat{n})=\frac{4\pi}{(2l+1)}\sum_{m=-l }^{l}Y_{lm}(\hat{n})Y_{lm}^{*}(\hat{k}), \tag{20}\]
Figure 1: Representation of a hotspot on the CMB sky. Our location and the location of a hotspot are denoted as \(\vec{x}_{0}\) and \(\vec{x}_{\rm HS}\), respectively, defined with respect to an arbitrary coordinate system. The black circle denotes the surface of last scattering, located at \(\eta_{\rm rec}\approx 280\) Mpc in conformal coordinates. Due to momentum conservation, heavy particles are produced in pairs, and the distance between the two members of a pair can vary between 0 and \(\eta_{*}\). Therefore, in our analysis we allow the two members to be anywhere within the gray shaded region. We compute the temperature profile of a hotspot as a function of direction of observation \(\hat{n}\), with the hotspot center in the direction of \(\hat{n}_{\rm HS}\).
we get:
\[\Theta(\vec{x}_{0},\hat{n},\eta_{\rm HS}) = \frac{1}{2\pi^{2}}\int_{0}^{\infty}\frac{dk}{k}\sum_{l}j_{l}(k(\eta _{0}-\eta_{\rm HS}))(2l+1)\mathcal{P}_{l}(\hat{n}\cdot\hat{n}_{\rm HS})T_{\rm sum }(k,l)f(k\eta_{*}) \tag{21}\] \[T_{\rm sum}(k,l) \equiv f_{\rm SW}(k,l)+f_{\rm ISW}(k,l)+f_{\rm Dopp}(k,l)\,. \tag{22}\]
Note that \(\Theta(\vec{x}_{0},\hat{n},\eta_{\rm HS})\) depends on \(\eta_{\rm HS}\), the location of the hotspot, which need not be on the last scattering surface as mentioned above. Given the spherically symmetric profile of the hotspot, the Doppler contribution to \(\Theta(\vec{x}_{0},\hat{n},\eta_{\rm HS})\) is small; from now on we only include the SW and ISW contributions in our analysis.
Central Temperature. It is useful to compute the temperature anisotropy at the central part of a hotspot. To that end, we set \(\hat{n}=\hat{n}_{\rm HS}\), implying \(\mathcal{P}_{l}(\hat{n}\cdot\hat{n}_{\rm HS})=1\), and
\[\Theta_{\rm central}(\vec{x}_{0},\eta_{\rm HS})=\frac{1}{2\pi^{2}}\int_{0}^ {\infty}\frac{dk}{k}\sum_{l}j_{l}(k(\eta_{0}-\eta_{\rm HS}))(2l+1)T_{\rm sum }(k,l)f(k\eta_{*}). \tag{23}\]
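A schematic numerical sketch of Eq. (23); the transfer-function sum `T_sum(k, l)` is a user-supplied stand-in for the CLASS tabulation described in the text, and the cutoffs and the value of \(\eta_{0}\) (taken as \(\chi_{\rm rec}+\eta_{\rm rec}\)) are our assumptions:

```python
# Schematic evaluation of Eq. (23); T_sum(k, l) stands in for f_SW + f_ISW.
import numpy as np
from scipy.special import sici, spherical_jn
from scipy.integrate import simpson

def theta_central(T_sum, eta_hs, eta0=14151.0, eta_star=160.0,
                  g=1.0, lmax=1500, nk=2000):
    k = np.logspace(-4, 0, nk)                      # comoving k [1/Mpc]
    si, _ = sici(k * eta_star)
    f = g / 58.9**2 * (si - np.sin(k * eta_star))   # Eq. (16), H_I = 1
    total = 0.0
    for l in range(2, lmax + 1):
        jl = spherical_jn(l, k * (eta0 - eta_hs))
        total += (2 * l + 1) * simpson(jl * T_sum(k, l) * f / k, x=k)
    return total / (2 * np.pi**2)
```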
In Fig. 2 we show the SW and ISW contributions to the central temperature as a function of \(\eta_{\rm HS}\), after multiplying by the average CMB temperature \(T_{0}=2.7\) K, for \(\eta_{*}=160\) Mpc. For completeness, we also show the central temperature in Fig. 3, as obtained in [25], as a function of hotspot size \(\eta_{*}\), assuming the hotspot is located on the surface of last scattering. As we can see, the pair-produced CMB spots are indeed _hot_spots when \(\eta_{*}\lesssim\) Gpc. For \(\eta_{*}>6600\) Mpc, _cold_spots as opposed to _hot_spots arise. This is because the negative SW contribution dominates the positive ISW contribution, with the combination being negative.
## 3 Simulation of the CMB and PHS Signals
In order to design a PHS search, we simulate the PHS signal and CMB maps so that we can estimate the signal capture rate ('True Positive Rate') and the background count for a CNN analysis. There are three types of backgrounds to consider for a PHS search: (i) the noise of the CMB detector, (ii) the astrophysical foreground, and (iii) the background from the standard primordial fluctuations.
A realistic analysis needs to take into account detector noise and foregrounds. In our analysis, we consider profiles on relatively large angular scales, \(\ell<1000\). For these scales current CMB temperature data, such as from Planck, is signal-dominated and we thus do not need to add instrumental noise to our simulations. The astrophysical foreground comes from compact objects such as galaxies, galaxy clusters, gas, and dust which can also produce localized signals. Part of these astrophysical foregrounds can be cleaned out due to their frequency dependence (for a review see, e.g., Ref. [37]). For the signal sizes that we consider, corresponding to \(\ell<1000\), we do not expect significant astrophysical contamination after foreground cleaning and masking of the galactic plane, while for significantly smaller scales a detailed study of residual foregrounds and point sources would be required (see, e.g.,
Planck's component separation analysis [38]). In the following, we therefore only consider the background from the primordial, almost Gaussian, fluctuations when studying the PHS signal. This last type of background is 'irreducible' in the sense that it will always be present, originating from the fluctuations of the inflaton itself. We will assume the CMB maps are masked to reduce the astrophysical foregrounds and badly-conditioned pixels and retain only \(60\%\) of the sky for the analysis. The number is similar to the sky fraction used in the Planck analysis [39].
Unlike the analysis in [25] that was based on a HEALPix[40] simulation, in this work, we use the QuickLens package2 to simulate the CMB maps. QuickLens allows us to work in the 'flat sky approximation', neglecting sky curvature that is irrelevant to the size of the PHS profile we consider, as well as to draw sample maps with periodic boundary conditions to avoid complications due to masking. QuickLens can take a theoretical temperature power spectrum to produce mock flat sky CMB maps. To provide an initial input, we use the CLASS (v3.2) package [35; 36] to compute a temperature anisotropy spectrum \(C_{\ell}^{\rm TT}\) based on the Planck 2018 [41] best fit \(\Lambda\)CDM parameters,
Footnote 2: [https://github.com/dhanson/quicklens](https://github.com/dhanson/quicklens)
\[\{\omega_{\rm cdm},\omega_{b},h,10^{9}A_{s},n_{s},\tau_{\rm reio}\}=\{0.120,0.022,0.678,2.10,0.966,0.0543\}\,. \tag{3.1}\]
We will comment on the sensitivity of the CNN analysis to the \(\Lambda\)CDM parameters in Sec. 4.1 and Appendix A. We specify \(\ell_{\rm max}=3500\) in the code for the maximum number of \(\ell\)-modes
Figure 2: Central temperature \(\Theta_{\rm central}\times T_{0}\) of a hotspot as a function of the (radial) location of the hotspot. We choose \(\eta_{*}=160\) Mpc and \(g=1\). The dotted gray line indicates the location of the recombination surface. Larger (smaller) \(\eta_{\rm HS}\) implies the hotspots are closer to (further from) us. We also show contribution of the Sachs-Wolfe term (orange) and the Integrated Sachs-Wolfe term (purple) in determining the total temperature (olive). The left and right edges of the plot are at \(\eta_{\rm HS}=\eta_{\rm rec}-\eta_{*}\) and \(\eta_{\rm HS}=\eta_{\rm rec}+\eta_{*}\), respectively.
used for the image generation. As explained above, our signal profiles have support on length scales corresponding to \(\ell<1000\), where instrumental noise is negligible compared to the primary background from the CMB and can thus be ignored. An application to significantly smaller angular scales would need to take into account the noise properties of the experiment. We choose the image resolution such that \(1\ \mathrm{pixel}=10^{-3}\) radians to match Planck's angular resolution down to \(\approx 5\) arc minutes [42]. We also use the relation \(\Delta\theta=\Delta\eta/\chi_{\mathrm{rec}}\) between the angle and the comoving length on the last scattering surface.3 For instance, if the separation between two hotspot centers is \(160\ \mathrm{Mpc}\) on the last scattering surface, the two centers are \(12\) pixels apart on the image, with \(\chi_{\mathrm{rec}}=13871\ \mathrm{Mpc}\) for Planck's best-fit \(\Lambda\)CDM parameters.
Footnote 3: In Ref. [25], the angular size of one pixel was obtained by matching the pixel number to the total degrees of freedom in the \(\ell\)-modes (\(\ell_{\mathrm{max}}^{2}+\ell_{\mathrm{max}}=4\pi/\theta_{\mathrm{pixel}}^{2}\)), together with the approximation \(\ell_{\mathrm{max}}\simeq\eta_{0}/\eta_{\mathrm{pixel}}\). Although the matching reproduces the same angular resolution, the relation between \(\ell_{\mathrm{max}}\) and \(\eta_{\mathrm{pixel}}\) gives \(\Delta\theta=\sqrt{4\pi}\Delta\eta/\chi_{\mathrm{rec}}\). Since \(\ell_{\mathrm{max}}\simeq\eta_{0}/\eta_{\mathrm{pixel}}\) comes from the approximation of the \(k\)-mode integral with \(j_{\ell}(k\,\chi_{\mathrm{rec}})\) and \(k=2\pi/\eta\), the relation between the angle and length is less robust than \(\Delta\theta=\Delta\eta/\chi_{\mathrm{rec}}\).
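A minimal sketch of this pipeline, assuming the `classy` Python wrapper of CLASS for the spectrum (call signatures should be checked against the installed version); the map-drawing step is a generic flat-sky Gaussian realization standing in for QuickLens:

```python
# Compute C_l^TT with CLASS (classy) for the parameters of Eq. (3.1), then
# draw a Gaussian flat-sky temperature map; a stand-in for QuickLens.
import numpy as np
from classy import Class

cosmo = Class()
cosmo.set({"output": "tCl", "l_max_scalars": 3500,
           "omega_cdm": 0.120, "omega_b": 0.022, "h": 0.678,
           "A_s": 2.10e-9, "n_s": 0.966, "tau_reio": 0.0543})
cosmo.compute()
cl_tt = cosmo.raw_cl(3500)["tt"]          # dimensionless C_l, l = 0..3500

def gaussian_flat_sky(cl, npix=360, pix_rad=1e-3, seed=0):
    """Periodic flat-sky Gaussian realization with <|T(l)|^2> = C_l."""
    rng = np.random.default_rng(seed)
    lxy = np.fft.fftfreq(npix, d=pix_rad) * 2 * np.pi
    ell = np.sqrt(lxy[:, None]**2 + lxy[None, :]**2)
    cl2d = np.interp(ell, np.arange(len(cl)), cl)
    white = np.fft.fft2(rng.standard_normal((npix, npix)))
    # color white noise by sqrt(C_l); the 1/pix_rad factor fixes the
    # discrete-FFT normalization so that Var[T] = \int d^2l C_l / (2 pi)^2
    return np.fft.ifft2(white * np.sqrt(cl2d) / pix_rad).real

cmb_map = gaussian_flat_sky(cl_tt)        # 360x360 patch, 1e-3 rad pixels
```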
For the CNN analysis, we begin by generating \(360^{2}\) pixel images, corresponding to a \([-10.32^{\circ},10.32^{\circ}]\) region in longitude and latitude (\(n_x=360\) in QuickLens). We then cut out a \(90^{2}\) patch from each of the \(360^{2}\)-sized maps. These non-periodic, smaller maps are then used for further analysis. In particular, for our CNN analysis, we generate \(160\)k training images and \(40\)k validation images of \(90^{2}\) pixels, plus an additional \(5\)k test images to quantify the
Figure 3: Central temperature (green) of a hotspot originating from a heavy particle for \(g=1\), based on Eq. (23) with \(\eta_{\mathrm{HS}}=\eta_{\mathrm{rec}}\). The green line illustrates the variation of the observed anisotropy as a function of the “size” of the hotspot, determined by the comoving horizon \(\eta_{*}\) at the time of particle production. The horizontal gray line gives a rough benchmark of the magnitude of the large-scale temperature anisotropy due to only the standard quantum fluctuations of the inflaton \((1/5)\langle\zeta_{q}^{2}\rangle^{1/2}\), without taking into account acoustic oscillations. The dashed vertical gray lines show the benchmark choices for the hotspot size \(\eta_{*}=50\,,100\,,160\ \mathrm{Mpc}\) chosen in the subsequent discussion. We take the plot from Ref. [25].
network performance. Training the neural network on smaller patches yields better training convergence and does not lead to loss of information as long as the characteristic size of the signal is smaller than the size of the patch.
The profile of each of the PHS is described by Eq. (21), where the function depends on the distance to the hotspots (\(\eta_{0}-\eta_{\rm HS}\)) and the angle \(\cos^{-1}(\hat{n}\cdot\hat{n}_{\rm HS})\), as defined in Fig. 1. The overall magnitude of the signal temperature is proportional to the coupling \(g\). When generating the signal, we require both the hotspots to be within a shell \(\pm\eta_{*}\) around the last scattering surface as shown in Fig. 1. For example, when studying the case with \(\eta_{*}=160\) Mpc, we first divide the \(\pm 160\) Mpc region into 50 concentric annuli, each having equal thickness. We then choose the first hotspot from a pair to lie on any of these 50 annuli with equal probability. The second member is then chosen anywhere within a sphere of radius \(\eta_{*}\) centered on the first
Figure 4: Radial profile of a single hotspot with the heavy particle position inside (olive), on (orange), and outside (purple) of the last scattering surface. The locations of these hotspots in conformal time are taken to be \(\eta_{\rm rec}+\eta_{*}\), \(\eta_{\rm rec}\), and \(\eta_{\rm rec}-\eta_{*}\), respectively, as denoted by the labels. From upper left to bottom: horizon size for the hotspot production at \(\eta_{*}=50,100,160\) Mpc. The plots assume the inflaton-\(\chi\) coupling \(g=1\).
hotspot, again with a uniform random distribution.4 A pair is kept for further analysis only if both spots of the pair fall within the \(\pm\eta_{*}\) shell around the last scattering surface. Since
Figure 5: Example plots of pure background from QuickLens simulation (left), pure signals (middle), and signals with \(g=4\) on top of the simulated background (right). The scalar particles are produced at comoving horizon sizes \(\eta_{*}=50\) Mpc (top), 100 Mpc (middle), \(\eta_{*}=160\) Mpc (bottom). The signals at different benchmark \(\eta_{*}\) have roughly the same size, as the \(\eta_{*}\) dependence only enters logarithmically. The two hot spots are clearly separated for \(\eta_{*}=160\) Mpc and \(\eta_{*}=100\) Mpc, while for \(\eta_{*}=50\) Mpc they overlap.
the distribution in a 3D volume allows hotspots to orient along the line-of-sight direction, the average separation between the two hotspots projected on the last scattering surface is smaller than the separation assumed in Ref. [25] that only considered PHS on the last scattering surface.
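A sketch of this placement procedure (conformal distances in Mpc; `ETA_REC` is the recombination value quoted in Fig. 1):

```python
# Pair placement: first spot on one of 50 annuli in the +/- eta_star shell,
# second spot uniform inside a ball of radius eta_star around the first.
import numpy as np

ETA_REC = 280.0  # conformal time at recombination [Mpc]

def draw_pair(eta_star=160.0, n_annuli=50, rng=np.random.default_rng()):
    edges = np.linspace(ETA_REC - eta_star, ETA_REC + eta_star, n_annuli + 1)
    while True:
        i = rng.integers(n_annuli)                   # pick one annulus
        eta1 = rng.uniform(edges[i], edges[i + 1])   # first spot's depth
        vec = rng.standard_normal(3)                 # uniform ball via r = R*U^(1/3)
        vec *= eta_star * rng.uniform() ** (1 / 3) / np.linalg.norm(vec)
        eta2 = eta1 + vec[2]                         # line-of-sight component
        if abs(eta2 - ETA_REC) <= eta_star:          # keep pairs inside the shell
            return eta1, eta2, vec[:2]               # depths + transverse offset
```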
Once we generate PHS images with random orientation and separation between two hotspots, we pixelate them and add the PHS image to the simulated CMB maps to produce the signal image. We follow this procedure for all the signal images in our study. In this work, we study benchmark models with horizon sizes
\[\eta_{*}=50,\,100,\,160\,\,\mathrm{Mpc}\,, \tag{3.2}\]
and couplings from \(g=1\) to \(4\). Specifying \(g\) and \(\eta_{*}\) sets the overall temperature and the profile of the hotspot, a la Eq. (2.21). Within the approximations we've made in Sec. 2, the remaining model parameter, \(M_{0}\), only affects the overall number of hotspots \(N_{\mathrm{PHS}}\) (through Eq. (2.7)). Going forward, we will compute the number of hotspots that can be hidden within the background fluctuations for given benchmark coupling and \(\eta_{*}\). Then, using Eq. (2.7), the upper bounds on \(N_{\mathrm{PHS}}\) can be translated into lower bounds on \(M_{0}\). As an illustration of what a benchmark PHS looks like, in Fig. 5 we show examples of the CMB background (left), PHS signal (middle), and the signal plus background (right) for \(g=4\) with different choices of \(\eta_{*}\). Note that it is difficult to identify the signals by eye in the plots on the right, even with such a large coupling.
Compared to Ref. [25], the benchmark \(\eta_{*}\) values are identical, but we choose smaller values of the coupling \(g\). This is because we find the CNN analysis is much more powerful than the 'cut and count' method adopted in Ref. [25], and therefore capable of identifying fainter hotspots. We chose the benchmark \(\eta_{*}\) values to test out a variety of different PHS; \(\eta_{*}=160\,\mathrm{Mpc}\) hotspots have a very high central temperature (Fig. 3), while \(\eta_{*}=50\,\mathrm{Mpc}\) hotspots are significantly cooler and have smaller inter-spot separation. The choice \(\eta_{*}=100\,\mathrm{Mpc}\) sits between these for comparison.
## 4 Identifying Pairwise Hotspots with CNN
In this section we describe the training process for the CNN using \(90^{2}\) pixel images, and discuss some qualitative properties of the training result. We then apply the trained network to a larger sky map and present results on the upper bound on the number of PHS for given values of \(\eta_{*}\) and \(g\). We end the section with some comparisons between the CNN and a matched filter analysis.
### Network Training on Small Sky Patches
CNNs are one of the most commonly used deep neural networks specialized for image recognition [44; 45]. In this study, we build the network using PyTorch [46] with the structure shown in Fig. 6. The network takes a CMB or CMB+PHS image as an input and outputs a single value between \(0\) and \(1\), which can be interpreted as the probability of the input image
Figure 6: A schematic architecture of the CNN used in this work. We apply two convolutional layers in series: first, 8 kernels of size \(16\times 16\) and stride 2 are applied; then 8 independent kernels of size \(8\times 8\) yield 8 feature maps of size \(40\times 40\). Next, we apply max-pooling with kernel and stride size of \(2\times 2\), which reduces the image dimension down to \(20\times 20\times 8\). The processed images are reduced further by additional 2D convolution and max-pooling, bringing the size down to \(5\times 5\times 8\). After four convolutional stages in total, followed by average pooling, the final feature maps are flattened and fed into a fully connected network, which ends with a single output value between 0 and 1. Throughout the network, we use the rectified linear unit (ReLU) function [43] to introduce non-linearity, except for the output layer, which has a sigmoid activation function suitable for binary classification.
Figure 7: Comparison between true and the CNN feature maps with and without implanted signals. The left plots show the PHS signal and signal plus the CMB background. The middle and right plots show feature maps after going through three convolutional layers. The enhanced signal locations on the feature maps on the right align with the true location of the hotspots after rescaling the pixel coordinates with respect to the relative size between the 3rd layer (\(20^{2}\)-pixels) and the original image (\(90^{2}\)-pixels). Here we take \(\eta_{*}=160\) Mpc, \(g=4\), and \(\eta_{\rm HS}=\eta_{\rm rec}\) for both the spots.
containing the PHS. We train the network on 160k images (see Sec. 3), half of which contain a single pairwise hotspot profile on top of the CMB, while the rest are CMB-only images. For optimization, we use a binary cross entropy loss function, commonly used for binary classification, along with the Adam optimizer [47] and a \(10^{-4}\) learning rate.
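A PyTorch sketch following the architecture quoted in Fig. 6; the paddings, the kernel sizes of the later stages, and the fully connected width are our assumptions, chosen only to reproduce the quoted feature-map sizes:

```python
# A sketch of the PHS classifier; shapes in comments track a 90x90 input.
import torch
import torch.nn as nn

class PHSNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 16, stride=2, padding=2), nn.ReLU(),  # 90 -> 40
            nn.Conv2d(8, 8, 8, padding=4), nn.ReLU(),             # 40 -> 41
            nn.MaxPool2d(2),                                      # -> 20
            nn.Conv2d(8, 8, 5, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),                                      # -> 10
            nn.Conv2d(8, 8, 5, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),                                      # -> 5
            nn.AdaptiveAvgPool2d(5),                              # average pooling
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(8 * 5 * 5, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid(),                       # output in (0, 1)
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = PHSNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.BCELoss()   # binary cross entropy, as in the text
```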
We train the network using PHS signals with \(g=3\) for all three values of \(\eta_{*}\) individually. One may wonder how well a network trained on one \(g\) value will generalize to different values without retraining. As the CNN (unlike the matched filter discussed below) is nonlinear, extrapolation to values of \(g\) other than what was used for training is not guaranteed to be optimal. On the other hand, training a CNN for each possible benchmark input is time- and resource-intensive. Empirically, we find that the network trained at \(g=3\) works well over a wide range of \(g\) values, perhaps because the network learns to analyze the shape rather than the amplitude of the profile. In a fully optimal analysis one would want to retrain the neural network over a grid of \(g\) values.
To get some idea of how the CNN discriminates between signal and background images, we show the feature maps from the first three convolutions in Fig. 7 for \(\eta_{*}=160\,\mathrm{Mpc}\) and \(g=4\). As we can see proceeding from left to right, the trained network amplifies the signal region compared to the background-only image, and the convolutional layers emphasize the correct locations of each spot in the feature map.
To quantify the performance of the CNN, we generate a test sample of 5k CMB-only maps and 5k CMB+PHS maps, each having \(90^{2}\) pixels. For a CMB+PHS map, we inject one randomly oriented and located PHS into the CMB map. The PHS signal occupies \(\mathcal{O}(50^{2})\) pixels in the examples that we study, and thus the \(90^{2}\)-pixel image is only slightly larger than the signal. When an image has network output \(>0.5\), we count it as an identified signal map. We define the signal capture rate (True Positive Rate, \(\epsilon_{S,90^{2}}\)) as the fraction of CMB+PHS images correctly identified as signal maps, and the fake rate (False Positive Rate, \(\epsilon_{B,90^{2}}\)) as the fraction of CMB-only images wrongly identified as signal maps,5
Footnote 5: In the actual search, there can be more than one PHS in a \(90^{2}\)-pixel region, and the CNN would still count the region as one signal map. We verify that the signal capture rate would increase if there were more PHS in the image. When we study the sensitivity of the CNN search, having additional PHS around the same location helps the search, which makes our analysis, based on having one PHS per \(90^{2}\)-pixel image, conservative. Moreover, given that the CNN search can probe PHS with a small number of signals on the CMB sky, the probability of having additional PHS around the same location is small. Therefore, counting the number of \(90^{2}\)-pixel regions should give a good approximation of the PHS count in the analysis.
\[\epsilon_{S,90^{2}} = \frac{\text{number of signal-injected images with CNN output }>0.5}{\text{total number of signal-injected images}},\] \[\epsilon_{B,90^{2}} = \frac{\text{number of background-only images with CNN output }>0.5}{\text{total number of background-only images}}. \tag{10}\]
In Fig. 8, we show the network output for the 5k images with and without injecting the PHS signal. In the left column we show the result when the PHS are uniformly distributed within a shell of \(\eta_{\text{rec}}\pm\eta_{*}\) around the surface of last scattering, while the right column shows the result when \(\eta_{\text{HS}}=\eta_{\text{rec}}\). The signal capture and background rejection rates in Fig. 8 refer
to \(\epsilon_{S,90^{2}}\) and \((1-\epsilon_{B,90^{2}})\). Clearly, for \(g\geq 3\), our CNN setup is highly efficient at separating CMB+PHS images from CMB images alone. For example, for \(g=3\) (the same coupling
Figure 8: Network output for 5k images without (blank histogram) and with (colored histograms) PHS signals. We count the image as an identified signal map when the network output \(>0.5\). In the plots we show the background rejection rate from the CMB-only analysis and the signal capture rate from the CMB+PHS images, for different inflaton-\(\chi\) couplings \(g\). The fake rate is defined as (\(1-\)background rejection rate). The plots on the left have both the hotspots distributed uniformly with separation \(\leq\eta_{*}\) and within \(\eta_{\rm HS}=\eta_{\rm rec}\pm\eta_{*}\), which is how we simulate the signal for the rest of the study. The signal capture rate therefore includes possible suppression due to hotspots moving off the last scattering surface. For comparison, we show the training results in the right plots requiring \(\eta_{\rm HS}=\eta_{\rm rec}\). Comparing results obtained from the same study but with different sets of 5k images, we find the efficiency numbers vary by \(\sim 0.1-1\%\).
as in the training sample) and \(\eta_{*}=160\,\)Mpc, \(\epsilon_{S,90^{2}}\) is over 73% with \(\epsilon_{B,90^{2}}\) less than 0.1%. For \(\eta_{*}=160\) Mpc and 100 Mpc, the signal capture rate falls if the hotspots are off the last scattering surface but within the \(\eta_{\rm rec}\pm\eta_{*}\) window we consider. When applying the same trained network to dimmer PHS signals (\(g<3\)), \(\epsilon_{S,90^{2}}\) drops, but the background rejection rate remains close to unity.
Both \(\epsilon_{S,90^{2}}\) and \(\epsilon_{B,90^{2}}\) vary with the horizon size. Comparing results for \(\eta_{*}=160\,\)Mpc to \(\eta_{*}=50\,\)Mpc, the \(\epsilon_{S,90^{2}}\) values are similar for \(g\geq 3\), but the \(\eta_{*}=50\) Mpc case performs much better at weaker coupling (\(\epsilon_{S,90^{2}}=51.2\%\) for \(\eta_{*}=50\,\)Mpc compared to 1.8% for \(\eta_{*}=160\,\)Mpc, both for \(g=1\)). The \(\eta_{*}=50\,\)Mpc case has a larger background fake rate compared to \(\eta_{*}=160\) Mpc. However, even if we incorporate the background and compare \(\epsilon_{S,90^{2}}/\sqrt{\epsilon_{B,90^{2}}}\), the efficiency ratio is \({\cal O}(10)\) times larger for the dimmer \(\eta_{*}=50\) Mpc case. The ability to catch dimmer signals indicates that the network uses more information than the overall temperature to identify the PHS.
Although it is difficult to know exactly how the CNN identifies the PHS, the network seems to more accurately identify PHS with a distinct rim structure, rather than just utilizing the fact that there are two hotspots (Fig. 5). One indication that the CNN utilizes the rim structure of the \(\eta_{*}=50\) Mpc signal is that the signal capture rate for that benchmark is insensitive to whether or not the PHS lie on the last scattering surface. We perform the same CNN analysis with the signal hotspots centered on the last scattering surface (\(\eta_{\rm HS}=\eta_{\rm rec}\) in Eq. (21)) and summarize the results in the right column of Fig. 8. For hotspots with a temperature profile peaked at the center, as we show in the \(\eta_{*}=160\) and 100 Mpc plots in Fig. 4, the central PHS temperature takes its maximum value when \(\eta_{\rm HS}=\eta_{\rm rec}\) (orange). It is then reasonable to have a larger average signal capture rate when the hotspots are centered on the last scattering surface. However, as we illustrate in the upper left plot in Fig. 4, the "shell" of the \(\eta_{*}=50\) Mpc signal in 3D always projects into a rim with a fixed temperature (at angle \(\approx 0.008\) rad), regardless of whether the hotspot sits at \(\eta_{\rm rec}\), \(\eta_{\rm rec}+\eta_{*}\), or \(\eta_{\rm rec}-\eta_{*}\). Therefore, if the CNN identifies the \(\eta_{*}=50\) Mpc signal by the rim structure, \(\epsilon_{S,90^{2}}\) should remain the same even when the PHS are on the last scattering surface. This is indeed what we see in the bottom plots in Fig. 8. Further study of which features the CNN uses to identify the \(\eta_{*}=50\,\)Mpc case can be found in Appendix C.
### Application of the Trained Network to Larger Sky Maps
After training the CNN to identify PHS in images with \(90^{2}\) pixels, we look for signals on a larger sky map by applying the same network analysis repeatedly across the larger map. In this way we can analyze, in principle, arbitrarily large maps. A benefit of such a larger map search is that it avoids the loss of sensitivity to signals where a PHS is partially cut out by the boundary of a \(90^{2}\)-pixels region. Such a PHS would be lost had we simply partitioned the sky into non-overlapping \(90^{2}\)-pixels regions.
For a concrete application, we study maps with \(720^{2}\) pixels using the following steps:
(i) we apply the trained network to the upper left corner of the map, obtaining the network output; (ii) we shift the \(90^{2}\)-pixel "window" to the right by 5 pixels and get the network output again; (iii) we repeat the process until we hit the right-hand side of the large map, then return to the left edge but slide the window down by 5 pixels; (iv) we continue with these steps until the entire larger map is covered. Steps (i)-(iv) result in what we call a "probability map". Starting with an original \(720^{2}\) image and scanning in steps of 5 pixels, the probability map has \(126^{2}\) entries, with each entry giving the probability of having a signal in a \(90^{2}\)-pixel region centered at that pixel. We have tried different step sizes and find that a 5 pixel step size yields nearly identical results to a 1 pixel step size for the following analysis, so we use the 5 pixel step size for improved computational speed.
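A sketch of this scanning step:

```python
# Slide the trained 90x90 classifier across a 720x720 map in 5-pixel steps.
import torch

@torch.no_grad()
def probability_map(model, big_map, win=90, step=5):
    """big_map: (720, 720) float tensor; returns a grid of network outputs
    (127x127 here; the text quotes 126^2, the difference being edge handling)."""
    n = (big_map.shape[0] - win) // step + 1
    out = torch.zeros(n, n)
    for i in range(n):
        for j in range(n):
            patch = big_map[i*step:i*step+win, j*step:j*step+win]
            out[i, j] = model(patch[None, None]).item()  # add batch/channel dims
    return out
```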
We then translate the scan output into an upper bound on the number of PHS, \(N_{\rm PHS}\), within \(\pm\eta_{*}\) of the last scattering surface. As an example, let us take \(\eta_{*}=50\) Mpc and \(g=1\). From Table 1, we see \(\epsilon_{S,720^{2}}=54.6\%\) while \(\epsilon_{B,720^{2}}=1.4\%\). Assuming that only a fraction \(f_{\rm sky}=60\%\) is used for the search, the total number of signals for this benchmark is \(Sig=\epsilon_{S,720^{2}}\,N_{\rm PHS}\,f_{\rm sky}\), while the number of background events is \(Bg=25\,\epsilon_{B,720^{2}}\,f_{\rm sky}\), where the factor of 25 is the number of \(720^{2}\) patches needed to cover the full sky. From the number of signal and background events, we form the log-likelihood ratio [49; 50] and then solve for \(N_{\rm PHS}\) for the desired signal significance. When
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline & \(\eta_{*}=50\) Mpc & \(\eta_{*}=100\) Mpc & \(\eta_{*}=160\) Mpc \\ \hline \(\epsilon_{B,720^{2}}\) & 1.4 \% & 11 \% & 6.6 \% \\ \hline \(\epsilon_{S,720^{2}}\), \(g=1\) & 54.6 \% & 0.8 \% & 0.5 \% \\ \hline \(\epsilon_{S,720^{2}}\), \(g=2\) & 84.0 \% & 34 \% & 34.6 \% \\ \hline \(\epsilon_{S,720^{2}}\), \(g=3\) & 98.6 \% & 76.8 \% & 71.2 \% \\ \hline \end{tabular}
\end{table}
Table 1: CNN results from scanning 500 randomly generated CMB or CMB+PHS maps using the network trained in Sec. 4.1. The image size is \(720^{2}\) pixels, and we shift the \(90^{2}\)-pixel search window in 5-pixel steps. The fake rate is the average number of fake signals from a \(720^{2}\)-pixel map with CMB only. The signal capture rate is the chance of identifying each input PHS signal. Comparing results obtained from the same study but with different sets of 500 images, we find the efficiency numbers vary by \(\sim 0.1-1\%\).
Figure 9: _Left_: PHS signals that are implanted on the CMB map. _Right_: Probability map from scanning the same \(720^{2}\) image plus the CMB with the CNN search, shifting the \(90^{2}\)-pixels region in steps of 1 pixel. The true and fake signals show up as clusters in the processed image. We further suppress the number of fake signals in the following analysis by applying cuts on the network output of each pixel and the pixel number in each cluster. We find that shifting the search window in steps of 5 pixels produces results similar to steps of 1 pixel and therefore use 5 pixel steps for the rest of the analysis.
calculating the \(2\sigma\) exclusion bound, we require
\[\sigma_{exc}\equiv\sqrt{-2\,\ln\left(\frac{L(Sig\!+\!Bg|Bg)}{L(Bg|Bg)}\right)} \geq 2,\quad\text{ with }\;\;L(x|n)=\frac{x^{n}}{n!}e^{-x}\,. \tag{4.3}\]
Note that this is the expected bound, as we are taking the simulated CMB background to be the number of observed events (\(n\) in Eq. (4.3)). The resulting values of \(N_{\text{PHS}}\) are given in the upper panel of Table 2. It is also interesting to determine how many PHS would be needed for discovery at each benchmark point. We calculate the expected discovery reach using
\[\sigma_{dis}\equiv\sqrt{-2\,\ln\left(\frac{L(Bg|Sig\!+\!Bg)}{L(Sig\!+\!Bg|Sig\! +\!Bg)}\right)}\geq 5\,. \tag{4.4}\]
The results are collected in Table 3.
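This counting exercise reduces to a few lines of arithmetic. The sketch below (our own, with illustrative helper names) uses the \(\eta_{*}=50\) Mpc, \(g=1\) efficiencies from Table 1 and scans \(N_{\rm PHS}\) until the statistic of Eq. (4.3) crosses 2; the Poisson log-likelihood difference simplifies to \(n\ln(x/n)-x+n\).

```python
import numpy as np

def significance(x, n):
    """sqrt(-2 ln[L(x|n)/L(n|n)]) for the Poisson likelihood L(x|n) = x^n e^{-x}/n!.
    Eq. (4.3) is significance(Sig+Bg, Bg); Eq. (4.4) is significance(Bg, Sig+Bg)."""
    if n == 0:
        return np.sqrt(2.0 * x)
    return np.sqrt(-2.0 * (n * np.log(x / n) - x + n))

eps_S, eps_B, f_sky = 0.546, 0.014, 0.60     # Table 1, eta_* = 50 Mpc, g = 1
Bg = 25 * eps_B * f_sky                      # 25 patches of 720^2 cover the sky

for N_PHS in range(1, 10000):
    Sig = eps_S * N_PHS * f_sky
    if significance(Sig + Bg, Bg) >= 2.0:
        print("2-sigma exclusion at N_PHS =", N_PHS)   # prints 8, cf. Table 2
        break
```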
We can further obtain the minimum mass \(M_{0}\) of the heavy particle corresponding to \(\sigma_{exc}\) and \(\sigma_{dis}\) using Eq. (2.7) and \(\Delta\eta=2\eta_{*}\).7 In Tables 2 and 3, we show the bounds (or reach) on the number of PHS and \(M_{0}/H_{I}\). Due to the energy injection from the dynamics of the inflaton, we can probe scalar particles with masses up to \(\approx 260H_{I}\). In the bottom right tables, we show that the mass bounds correspond to up to \(\approx 2.6\) times the mass-changing rate caused by the inflaton rolling (\(\sqrt{g\dot{\phi}_{0}}\)), which dominates the exponential suppression in Eq. (2.7). We also plot the \(2\sigma\) lower bound on \(M_{0}/H_{I}\) in Fig. 10. Since \(N_{\text{PHS}}\) depends on \(M_{0}\) exponentially, a slightly lower scalar mass than the \(2\sigma\) bound leads to a \(5\sigma\) discovery of the PHS.
Footnote 7: One subtlety in solving the mass bound is that when simulating the PHS signals, we require both hot spots to be within \(\pm\eta_{*}\) around the last scattering surface. Hence, the simulation excludes PHS with one of the hot spots outside of the shell region that would be harder to see by the CNN. However, when solving the upper bound on the PHS density using Eq. (2.7), we take into account the signals that are partially outside of the shell region, leading to an over-estimate of the signal efficiency and a stronger upper bound on the number density. From checking the hot spot distribution numerically, we find that \(\approx 17\%\) of the PHS in our examples can be partially outside of the \(\pm\eta_{*}\) region. Fortunately, since the size of \(M_{0}\) only depends on the number density bound logarithmically, the error only changes the \(M_{0}\) bound by up to \(1\%\). This is acceptable for the accuracy we want for the concept study.
These bounds are significantly improved compared to the previous analysis in Ref. [25]; this is not surprising given that the analysis in Ref. [25] was very simplistic, utilizing only a single temperature cut to separate signal from background. Using the CNN, we can now obtain meaningful bounds for \(g=1\), \(2\) - cases for which the PHS were rather invisible before. For hotter signals, e.g. \(g=3\), the CNN analysis beats the past result by \(\Delta M_{0}\approx 60H_{I}\). This is a notable improvement given that the PHS density is exponentially sensitive to the scalar mass (squared).
Finally, to show that the CNN search of localized objects gives a better probe of heavy particle production than the measurement of CMB temperature power spectra, we plot the corrections to the \(\Lambda\)CDM \(D_{\ell}^{\text{TT}}\) spectrum in Appendix B, including the same number of PHS as in Table 2. For example, for \(g=1,\;\eta_{*}=160\) Mpc, we see from Table 2 that the \(2\sigma\) bound on
\(N_{\rm PHS}\) from our CNN analysis is 1162 hotspot pairs. Injecting 1162 hotspot pairs into the sky,8 we find a correction to \({\cal D}_{\ell}^{\rm TT}\) of \(\Delta\chi^{2}=0.3\), well within the \(1\sigma\) band of the Planck 2018 temperature power spectrum. Repeating this exercise with the other benchmarks in Table 2 yields \(\Delta\chi^{2}\) values that are even smaller.
Footnote 8: For simplicity, we restrict all hotspots to the last scattering surface. This somewhat overemphasizes the PHS correction to the power spectrum, as scenarios with both particles fixed to the last scattering surface are, on average, brighter than when \(\eta_{HS}\) varies.
### Comparison with a Matched Filter Analysis
Matched filter analysis is a standard tool for identifying localized signals on a CMB map. Given a 2D power spectrum of the CMB, \(P(k)\), we can obtain a filtered map \(\psi(\vec{r})\) in position space from a convolution between the original image (signal plus background) \(\zeta(\vec{k})\) and a
\begin{table}
\begin{tabular}{|c|c|c|c|}
\hline
Number of PHS & \(\eta_{*}=50\) & \(\eta_{*}=100\) & \(\eta_{*}=160\) \\ \hline
\(g=1\) & 8 & 840 & 1162 \\ \hline
\(g=2\) & 5 & 20 & 17 \\ \hline
\(g=3\) & 4 & 9 & 8 \\ \hline
\end{tabular}
\quad
\begin{tabular}{|c|c|c|c|}
\hline
\(M_{0}/(g\dot{\phi}_{0})^{1/2}\) & \(\eta_{*}=50\) & \(\eta_{*}=100\) & \(\eta_{*}=160\) \\ \hline
\(g=1\) & 2.5 & 2.0 & 2.0 \\ \hline
\(g=2\) & 2.6 & 2.4 & 2.4 \\ \hline
\(g=3\) & 2.6 & 2.5 & 2.4 \\ \hline
\end{tabular}
\end{table}
Table 2: _Upper: \(2\sigma\) upper bound on the number of PHS in the whole CMB sky with both hotspot centers located within the \(\eta_{\rm rec}\pm\eta_{*}\) window around the last scattering surface. In the calculation we assume a sky fraction \(f_{\rm sky}=60\%\). Lower left: lower bounds on the bare mass of the heavy scalar field in units of the Hubble scale during inflation. Lower right: lower bounds on the bare mass in units of the mass-changing rate, \((g\dot{\phi}_{0})^{1/2}\), owing to the inflaton coupling._
\begin{table}
\begin{tabular}{|c|c|c|c|}
\hline
Number of PHS & \(\eta_{*}=50\) & \(\eta_{*}=100\) & \(\eta_{*}=160\) \\ \hline
\(g=1\) & 16 & 2047 & 2757 \\ \hline
\(g=2\) & 10 & 48 & 40 \\ \hline
\(g=3\) & 9 & 21 & 19 \\ \hline
\end{tabular}
\quad
\begin{tabular}{|c|c|c|c|}
\hline
\(M_{0}/(g\dot{\phi}_{0})^{1/2}\) & \(\eta_{*}=50\) & \(\eta_{*}=100\) & \(\eta_{*}=160\) \\ \hline
\(g=1\) & 2.4 & 2.0 & 1.9 \\ \hline
\(g=2\) & 2.5 & 2.3 & 2.3 \\ \hline
\(g=3\) & 2.6 & 2.4 & 2.4 \\ \hline
\end{tabular}
\end{table}
Table 3: Same as Table 2 but for the \(5\sigma\) discovery reach.
signal filter \(h(\vec{k})\) (the Fourier transform of a profile \(h(\vec{r})\) in position space),
\[\psi(\vec{r})=\int\frac{d^{2}\vec{k}}{(2\pi)^{2}}\left(\frac{\zeta(\vec{k})h( \vec{k})}{P(k)}\right)\,e^{i\vec{k}\cdot\vec{r}}. \tag{4.5}\]
If the signal is spherically symmetric, the filter simplifies to \(h(\vec{k})=h(k)\). From the filtered map \(\psi(\vec{r})\) one can construct an optimal likelihood ratio test between the Gaussian null hypothesis and the existence of the signal (see e.g. [27]), making the matched filter ideal for picking out single (or more generally, non-overlapping) localized signals.
As we have seen, while the individual hotspots are spherically symmetric, they often overlap (at least for the range of parameters we are interested in), leading to a net signal in the sky that is no longer spherical. Additionally, the random separation between the initial heavy particles means the resulting PHS are not uniform. The unusual shape and variability among signals make the PHS less suitable for a vanilla matched filter analysis. While it may be possible to design a complicated and large bank of matched filters to cover the space of possible signal templates, the CNN analysis can effectively learn a set of flexible filters to enhance the signal over background even with varying and non-spherical signal shapes.
Even if the matched filter analysis defined in Eq. (4.5) is not optimal for the full pairwise hotspot signal, it is still instructive to compare a few examples of the matched filter analysis versus the CNN. For this comparison, we consider PHS that lie only on the last scattering surface. The combined signal from the PHS will still be non-spherical, but restricting all PHS to the last scattering surface does take away some of the variability among signals.9 While each hotspot in a pair will "pollute" the other - meaning that it appears as a background
Figure 10: Bound on the heavy scalar mass for \(\eta_{*}=50\) Mpc, \(\eta_{*}=100\) Mpc, and \(\eta_{*}=160\) Mpc. In the region above the ‘% Backreaction’ line, the backreaction to the inflationary dynamics due to particle production is smaller than a percent (see Ref. [25] for a more detailed discussion). The light blue lines show various contours of \(N_{\rm PHS}\). We notice that the projected CNN search is able to cover most of the parameter space up to the target \(N_{\rm PHS}=1\) contour.
that is different from the CMB fluctuations - each of the two hotspots can still be picked up effectively by the single spot template \(h(k)\).
We perform the comparison using \(90^{2}\) pixel images with one PHS injection. We use QuickLens to generate the CMB maps, which follow periodic boundary conditions and thereby ensure well-separated \(k\)-modes in the 2D power spectrum \(P(k)\) of the CMB image. The CNN results for this signal set have already been shown in Sec. 4.1 and can be found in the right-hand panels of Fig. 8; the background rejection is above 99% for all benchmark points, while the signal capture rate varies from a few percent to 100% depending on \(\eta_{*}\) and \(g\).
For the matched filter analysis, we obtain \(P(k)\) from the average of the discrete Fourier transform of 500 simulated images. We also apply discrete Fourier transform on the profile of a single hotspot in the PHS, and use it as \(h(k)\) in the convolution. Carrying out the integral in Eq. (4.5), we obtain the processed maps \(\psi(\vec{r})\). An example of the signal processing is shown in Fig. 11, where the plot on the left is the PHS signal (\(\eta_{*}=160\) Mpc and \(g=2\)), the middle is the signal plus background, and the right plot is the output image \(\psi(\vec{r})\). We see that the filter can indeed pick up the signal hidden inside the background.
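As a concrete illustration of Eq. (4.5), a filtered map can be built with a few FFTs. In the sketch below, `images` (a stack of simulated CMB-only maps), `cmb_plus_phs`, and `hotspot` (the single-spot profile \(h(\vec{r})\)) are assumed input arrays, and regularizing the \(k=0\) mode is our own practical choice rather than part of the analysis described here.

```python
import numpy as np

# Estimate P(k) from the average power of the simulated CMB-only images.
P_k = np.mean(np.abs(np.fft.fft2(images, axes=(-2, -1)))**2, axis=0)
P_k[0, 0] = P_k.max()                       # regularize the k = 0 (mean) mode

def matched_filter(data, hotspot, P_k):
    """psi(r) = IFFT[ zeta(k) h(k) / P(k) ], Eq. (4.5), on a periodic image."""
    zeta_k = np.fft.fft2(data)
    h_k = np.fft.fft2(hotspot)              # single-hotspot template
    return np.real(np.fft.ifft2(zeta_k * h_k / P_k))

psi = matched_filter(cmb_plus_phs, hotspot, P_k)
print("maximum filter response psi_max =", psi.max())
```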
As one way to quantify the matched filter results, in Fig. 12 we show the distribution of the largest \(\psi(\vec{r})\) value in each of the 500 maps generated with (blue) and without (red) PHS signals, for \(\{\eta_{*},g\}=\{160\,{\rm Mpc},2\}\) (left) and \(\{100\,{\rm Mpc},2\}\) (right). From this perspective, the matched filter clearly separates the signal and background for the two cases. We also perform the same analysis for the \(\eta_{*}=50\) Mpc signals (which have much lower temperatures). In this case, the overlap between signal and background in the \(\psi\) distribution is large, and a simple \(\psi\) cut is not the optimal way to separate the signal and background. For this reason, we only consider the \(\eta_{*}=100\) and 160 Mpc examples in the following discussion.
To provide a rough numerical comparison between the matched filter and the CNN analysis, we apply a \(\psi_{\rm max}\) cut in each of the matched filter histograms in Fig. 12. We choose the \(\psi_{\rm max}\) cut value to equal the background rejection rate in the CNN analysis, then compare
Figure 11: Example images from the matched filter analysis. _Left:_ PHS with \(\eta_{*}=160\) Mpc and \(g=2\). _Middle:_ Signal plus the background. _Right:_ Filtered map from the convolution integral Eq. (4.5).
signal capture rates in the two analyses. For the \(\eta_{*}=160\) Mpc example, the CNN signal capture rate is about 5% and 74% for \(g=1\) and 2, while the matched filter analysis performs slightly better, with capture rates of 8% and 98%, respectively. For \(\eta_{*}=100\) Mpc, the CNN signal capture rates are \(\sim 10\%\) and \(\sim 69\%\) for \(g=1\) and 2, while the matched filter rates are slightly lower, 4% and 50%.
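The cut-based comparison can be summarized in a few lines, assuming arrays `psi_max_bg` and `psi_max_sig` (our names) that hold the maximum filtered-map value of each of the 500 CMB-only and CMB+PHS maps.

```python
import numpy as np

bg_rejection = 0.99                           # matched to the CNN working point
cut = np.quantile(psi_max_bg, bg_rejection)   # background rejection fixes the cut
capture = np.mean(psi_max_sig > cut)          # fraction of signal maps above cut
print(f"matched-filter signal capture rate: {capture:.2%}")
```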
In summary, we find that the CNN performs very close to the matched filter analysis, suggesting that it is near optimal. The advantage of the CNN, as we have discussed, is that it can learn to interpolate between all signal shapes that appear in our model.10
Footnote 10: We believe the small differences between the CNN and matched filter signal rates are due to the simplicity of the analysis – where \(\psi_{\rm max}\) is used as a proxy for the matched filter performance.
## 5 Discussion and Conclusion
In this work, we show that Convolutional Neural Networks (CNN) provide a powerful tool to identify pairwise hotspots (PHS) on the CMB sky. These PHS can originate from superheavy particle production during inflation. We improve the previous analysis of Ref. [25] by more accurately modeling the distribution of PHS on the CMB sky and by developing a CNN-based signal search strategy.
To accurately model the PHS distribution, we include the possibility that PHS are distributed along the line-of-sight direction, rather than fixed to the last scattering surface. As a result, the average inter-spot separation within a PHS, when projected onto the CMB, is smaller than in Ref. [25]. For PHS with small values of \(\eta_{*}\), such as \(\eta_{*}=50\) Mpc, the two
Figure 12: Maximum pixel distribution in filtered maps, where the value \(\psi\) of the pixels on the filtered map is defined in Eq. (4.5). We use 500 CMB-only and 500 CMB+PHS maps and plot the distribution of the maximum \(\psi\) of each filtered map to show the separation between the CMB and CMB+PHS results. We apply min-max feature scaling to \(\psi_{max}\), so that the smallest value is zero and the largest value is 1.
hotspots in a PHS significantly overlap with each other, and the resulting PHS look like a single object, but with a distinct angular profile (Fig. 5).
For the signal search, we construct a CNN to identify PHS from within the CMB, the standard fluctuations of which act as backgrounds for the signal. The network is trained on \(90^{2}\) pixel images with and without PHS injected in them (both with hotspots distributed in 3D, and with hotspots fixed on the last scattering surface). During training we choose a coupling \(g=3\), but the trained CNN can still identify PHS for smaller values of \(g\) with a significant signal capture rate and small background fake rate. We find that the CNN actually performs better for the smaller \(\eta_{*}\) benchmark, even though the hotspots are dimmer. We believe this is due to the distinctive ring structure the PHS have when \(\eta_{*}=50\,\mathrm{Mpc}\), as evidenced by comparing PHS signals distributed in 2D versus in 3D, and by studies testing the CNN on 'dot' and 'ring' test signals (Appendix C).
After developing the CNN for \(90^{2}\) pixel images, we apply it to larger \(720^{2}\) pixel maps, sliding \(90^{2}\) 'templates' in 5 pixel steps across the larger images to generate a probability map. In the probability map, each pixel is evaluated by the network multiple times. As a final step, we filter the probability map, only retaining clusters, i.e., groups of positive network outcomes, of a certain size. The benefit of the sliding template search is that it is less sensitive to the exact position of the hotspot within the \(90^{2}\) pixel region. Applied in this manner, we find that the CNN can efficiently discern the presence of hotspots, even if the signal temperature is much smaller than the CMB temperature fluctuations. In particular, the CNN can identify even \(\mathcal{O}(10)\) PHS on the CMB sky for \(g=1\) and \(\eta_{*}=50\) Mpc, a signal that has a temperature \(\approx 20\) times colder than the average CMB temperature fluctuations. Translated into model parameters, for the benchmark models we study using mock CMB maps, we project that a CNN search can set a lower bound on the mass of heavy scalars \(M_{0}/H_{I}\gtrsim 110-260\), with the precise value depending on the time of particle production and the coupling to the inflaton. These numbers are a significant improvement over the simplistic analysis in Ref. [25] that used a single temperature cut to separate signal from the background.
Compared to the standard matched filter analysis, the CNN is more versatile in identifying non-rotationally symmetric signals with varying shapes and temperatures that arise in the context of PHS. We performed a simplified comparison between the CNN and matched filter analysis by considering PHS with a fixed profile and located on the last scattering surface, showing that the matched filter analysis can provide comparable signal capture and fake rates to the CNN search for PHS with \(\eta_{*}=160\) Mpc and \(100\) Mpc. For dimmer PHS (\(\eta_{*}=50\) Mpc), more analysis is required to separate the signal and background in the filtered map. We leave a more detailed comparison to the matched filter method with a bank of filters to cover the signal space to future work.
Several future directions remain to be explored. It would be interesting to apply our methodology to actual Planck CMB maps to search for PHS. In the absence of a detection, we can still set a lower bound on the masses of ultra-heavy particles which are otherwise very difficult to discover or constrain. This, however, requires a subtraction of the astrophysical foregrounds and knowing if the CNN can distinguish PHS from the compact objects in the
foreground. Since the distortion of the curvature perturbation from particle production also modifies structure formation at late times, it would also be interesting to see if the current or future Large Scale Structure (LSS) surveys can identify the resulting signals localized in position space. A neural network like the one used here can learn to incorporate the non-linear physics of structure formation if trained on suitable simulations. Related to localized PHS signatures, similar types of cosmological signals from topological defects [51] or bubble collisions [28; 29; 30] can also arise and these may also be identified by a CNN search. From a more theoretical perspective, it would also be useful to write down a complete inflationary model that incorporates inflaton coupling to heavy fields and leads to particle production as described here. We leave these directions for future work.
We thank Raphael Flauger, Daniel Green, Matthew Johnson, Kin-Wang Ng, Bryan Ostdiek, LianTao Wang, Yiming Zhong for useful conversations. TK, AM, and YT are supported by the U.S. National Science Foundation (NSF) grant PHY-2112540. JK is supported in part by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2021R1C1C1005076), and in part by the international cooperation program managed by the National Research Foundation of Korea (No. 2022K2A9A2A15000153, FY2022). SK is supported in part by the NSF grant PHY-1915314 and the U.S. Department of Energy (DOE) contract DE-AC02-05CH11231. MM is supported by the U.S. Department of Energy, Office of Science, under Award Number DE-SC0022342.
## Appendix A Sensitivity to the \(\Lambda\)CDM Parameters
Our analysis uses the \(\Lambda\)CDM parameters in Eq. (10) to simulate the CMB. As the \(\Lambda\)CDM parameters come with uncertainties, we should check how sensitive the signal capture rate is to the variation of the parameters. In Table 4, we show the background rejection and signal capture rate using the same trained network as for Fig. 8 (left) with \(g=3\) and \(\eta_{*}=160\) Mpc, but on CMB maps simulated with variations of the \(\Lambda\)CDM parameters. As we see, when changing \(\{A_{s},\Omega_{b},\Omega_{\rm cdm},n_{s}\}\) one by one by twice the \(1\sigma\) uncertainty reported in [39], the signal capture rate only changes by \(\mathcal{O}(\text{few}\,\%)\), comparable to the variations in our CNN analysis due to finite sampling. The consistent search results show the robustness of the network's ability to identify PHS against the uncertainty of the \(\Lambda\)CDM parameters.
## Appendix B PHS Corrections to the CMB Power Spectrum
Here we show the corrections on the CMB power spectrum when the number of PHS in the full sky saturates the bounds in Table 2. We show examples with the coupling \(g=1\) and horizon sizes \(\eta_{*}=100\) Mpc (\(N_{\rm PHS}=840\)) and \(160\) Mpc (\(N_{\rm PHS}=1162\)), assuming the centers of all the hotspots are located on the last scattering surface. Notice that the latter
assumption of fixing \(\eta_{\rm HS}=\eta_{\rm rec}\) makes the average PHS temperature higher compared to the main analysis that allows \(\eta_{\rm HS}\) to vary. However, the assumption simplifies the power spectrum calculation and gives a more conservative result by exaggerating the PHS correction to the power spectrum. We also check results for different \(g\) and \(\eta_{*}\), but, following Table 2, with much smaller \(N_{\rm PHS}\). The corrections to the power spectrum for the other benchmarks are even smaller.
To see how the excesses appear on the power spectrum, we utilize the Hierarchical Equal Area isoLatitude Pixelization, HEALPix [40], based on the \(C_{\ell}^{\rm TT}\) spectrum computed from the CLASS package using the same \(\Lambda\)CDM parameters in Eq. (28). HEALPix pixelates a sphere
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
 & \(\omega_{b}\) & \(\omega_{\rm cdm}\) & \(10^{9}A_{s}\) & \(n_{s}\) & \(\tau_{re}\) & Bg rejection & Sig capture \\ \hline\hline
Planck18 & 0.0224 & 0.120 & 2.10 & 0.966 & 0.0543 & 99.8\% & 74.0\% \\ \hline
Case 1 & & \(+0.004\) & & & & 99.8\% & 72.3\% \\ \hline
Case 2 & & & \(+0.07\) & & & 99.2\% & 74.1\% \\ \hline
Case 3 & & & & \(+0.01\) & & 99.6\% & 69.9\% \\ \hline
Case 4 & \(+0.0003\) & & & & & 99.8\% & 73.4\% \\ \hline
Case 5 & & & & & \(+0.014\) & 99.2\% & 74.4\% \\ \hline
Case 6 & \(+0.0003\) & \(-0.004\) & \(+0.05\) & \(-0.01\) & \(-0.014\) & 99.8\% & 72.4\% \\ \hline
\end{tabular}
\end{table}
Table 4: The response of the signal capture and background rejection rates to varying \(\Lambda\)CDM parameters; each case is labeled by its difference from the Planck 2018 values. The variation of the rates is comparable to the fluctuations in our CNN analysis due to finite sampling and is therefore insignificant. For this test, we used \(g=2\) and \(\eta_{*}=160\) Mpc for the PHS signal.
Figure 13: CMB temperature power spectrum using best fit \(\Lambda\)CDM input parameters in Eq. (28) with (red lines) and without (blue lines) PHS signals implemented on the full sky using a resolution parameter \(N_{\rm side}=2048\). Here, we assume that all PHS signals are on the last scattering surface. The differences between the two distributions are shown in green lines, and the gray shaded regions denote \(1\sigma\) uncertainty, taken from the Planck 2018 data.
into equal areas, where the lowest resolution consists of 12 baseline pixels. The resolution is increased by dividing each pixel into four partitions, so that the total pixel count is \(N_{\rm pixels}=12N_{\rm side}^{2}\), where \(N_{\rm side}\) is a power of 2. We choose the resolution parameter \(N_{\rm side}=2048\). Since the total number of pixels on a sphere characterizes the total number of independent \(\ell\) modes in \(C_{\ell}^{\rm TT}\), given by \(\sum_{\ell=0}^{\ell_{\rm max}}(2\ell+1)=(\ell_{\rm max}+1)^{2}\), our benchmark resolution \(N_{\rm side}=2048\) corresponds to the maximum multipole number \(\ell_{\rm max}\simeq 3500\).
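For reference, the map synthesis described here can be sketched with healpy, the Python interface to HEALPix; `cl_tt` is assumed to be the TT spectrum computed by CLASS, and adding the PHS temperature profiles is left schematic.

```python
import numpy as np
import healpy as hp

nside = 2048
print("N_pixels =", hp.nside2npix(nside))   # 12 * 2048**2 = 50,331,648
sky = hp.synfast(cl_tt, nside)              # Gaussian CMB realization from C_ell
# ... the PHS temperature profiles would be added to `sky` here ...
cl_rec = hp.anafast(sky, lmax=3500)         # recovered TT spectrum
ell = np.arange(cl_rec.size)
D_ell = ell * (ell + 1) * cl_rec / (2 * np.pi)
```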
Figure 13 shows \(\mathcal{D}_{\ell}^{\rm TT}\) spectra for the \(\Lambda\)CDM model (blue) and the \(\Lambda\)CDM+PHS (red) with \(\eta_{*}=100\) Mpc and \(\eta_{*}=160\) Mpc. The difference between the red and blue spectra is shown on the lower panel (green), with the \(1\sigma\) error bar (gray) taken from the Planck 2018 result [39]. For both scenarios, the excesses are well below the error bar indicating that the power spectrum analysis will not be able to resolve them. We also show \(\Delta\chi^{2}\) to quantify the deviations with respect to the \(\Lambda\)CDM spectrum using the same Planck 2018 binning intervals in \(\ell\). The total \(\Delta\chi^{2}\) for both cases is negligible compared to the number of parameters we have.
## Appendix C Shape Analysis for the \(\eta_{*}=50\) Mpc Signal
In our earlier results, we found that the CNN's performance for \(\eta_{*}=50\) Mpc PHS exceeds the other benchmarks, despite the fact that the hotspots at \(\eta_{*}=50\) Mpc are much cooler. We surmise that the result is due to the distinct shape of the profile, a rim structure with a central peak. As a simple test of this hypothesis, we formed a signal set of PHS decomposed into two separate features, an inner peak and an outer rim. We then ran each piece through a network trained on the complete shape of the \(\eta_{*}=50\) Mpc spots.
Figure 14: In the left panel we show the trimmed inner piece of a hotspot signal, while in the right we show the output after 500 CMB + inner hotspot images are run through a network trained on full (untrimmed) \(\eta_{*}=50\,{\rm Mpc},g=3\) hotspots.
We ran 500 CMB + deconstructed PHS test samples through the network, using a variety of \(g\) values but always with both hotspots located on the last scattering surface. The results, along with sample images of the deconstructed signals, are shown in Figs. 14 and 15. Comparing the right hand panels in Figs. 14 and 15, we see that the network is much more efficient at capturing the ring portion, e.g., 88% capture for \(g=3\) compared to 27% for the central spot. From this test we conclude that the ring shape is crucial to the CNN's performance at low \(\eta_{*}\) (note that the signal capture for the ring nearly matches the capture rate for the full signal, Fig. 8).
|
2307.09058 | Physical interpretation of neural network-based nonlinear eddy viscosity
models | Neural network-based turbulence modeling has gained significant success in
improving turbulence predictions by incorporating high-fidelity data. However,
the interpretability of the learned model is often not fully analyzed, which
has been one of the main criticisms of neural network-based turbulence modeling.
Therefore, it is increasingly demanding to provide physical interpretation of
the trained model, which is of significant interest for guiding the development
of interpretable and unified turbulence models. The present work aims to
interpret the predictive improvement of turbulence flows based on the behavior
of the learned model, represented with tensor basis neural networks. The
ensemble Kalman method is used for model learning from sparse observation data
due to its ease of implementation and high training efficiency. Two cases,
i.e., flow over the S809 airfoil and flow in a square duct, are used to
demonstrate the physical interpretation of the ensemble-based turbulence
modeling. For the flow over the S809 airfoil, our results show that the
ensemble Kalman method learns an optimal linear eddy viscosity model, which
improves the prediction of the aerodynamic lift by reducing the eddy viscosity
in the upstream boundary layer and promoting the early onset of flow
separation. For the square duct case, the method provides a nonlinear eddy
viscosity model, which predicts well secondary flows by capturing the imbalance
of the Reynolds normal stresses. The flexibility of the ensemble-based method
is highlighted to capture characteristics of the flow separation and secondary
flow by adjusting the nonlinearity of the turbulence model. | Xin-Lei Zhang, Heng Xiao, Solkeun Jee, Guowei He | 2023-07-18T08:21:08Z | http://arxiv.org/abs/2307.09058v1 | # Physical interpretation of neural network-based nonlinear eddy viscosity models
###### Abstract
Neural network-based turbulence modeling has gained significant success in improving turbulence predictions by incorporating high-fidelity data. However, the interpretability of the learned model is often not fully analyzed, which has been one of the main criticisms of neural network-based turbulence modeling. Therefore, it is increasingly demanding to provide physical interpretation of the trained model, which is of significant interest for guiding the development of interpretable and unified turbulence models. The present work aims to interpret the predictive improvement of turbulence flows based on the behavior of the learned model, represented with tensor basis neural networks. The ensemble Kalman method is used for model learning from sparse observation data due to its ease of implementation and high training efficiency. Two cases, i.e., flow over the S809 airfoil and flow in a square duct, are used to demonstrate the physical interpretation of the ensemble-based turbulence modeling. For the flow over the S809 airfoil, our results show that the ensemble Kalman method learns an optimal linear eddy viscosity model, which improves the prediction of the aerodynamic lift by reducing the eddy viscosity in the upstream boundary layer and promoting the early onset of flow separation. For the square duct case, the method provides a nonlinear eddy viscosity model, which predicts well secondary flows by capturing the imbalance of the Reynolds normal stresses. The flexibility of the ensemble-based method is highlighted to capture characteristics of the flow separation and secondary flow by adjusting the nonlinearity of the turbulence model.
keywords: Machine learning, turbulence modeling, ensemble Kalman inversion, physical interpretability
## 1 Introduction
Data-driven turbulence modeling has emerged as an important approach for predicting turbulent flows [1], which constructs functional mappings from the mean velocity to the Reynolds stress by incorporating observation data. Over the past few years, this paradigm of turbulence modeling has been pursued from various aspects, including the choice of training data, the development of training strategies, and the representative form of the Reynolds stress. As for the training data, both Reynolds stress and velocity data have been used for learning turbulence models. Velocity data have become advocated for model learning as they are relatively straightforward to obtain in practical applications compared to Reynolds stress data [2]. Regarding the training strategies, the conventional _a priori_ approach [3; 4; 5] trains neural network-based models without involving the RANS solver, which has been pointed out [6] to have inconsistency issues in posterior tests. For this reason, model-consistent training [7; 8; 9; 2; 10; 11] has been proposed to improve the predictive abilities of learned models by coupling the neural network and the RANS equation during the training process. Besides the two research lines mentioned above, the representative form of the turbulence closure has also been investigated to empower the model with generalizability across different classes of flows. It is one critical step toward the ultimate goal of discovering unified turbulence models from data.
Various strategies have been proposed to represent the Reynolds stress, such as neural-network-based multiplicative correction [12], eigen perturbation method [13; 14; 15], symbolic expression [16], tensor basis neural network [4] and so on. Specifically, the neural-network-based multiplicative correction is introduced to modify turbulent production terms in turbulence transport equations, which can improve the velocity prediction of separated flows but is still under the Boussinesq assumption. The eigen perturbation method is proposed to present the Reynolds stress based on the eigen decomposition of the Reynolds stress tensor. The obtained eigenfunctions have physical interpretations to indicate the magnitude, shape, and orientation of the Reynolds stress tensor. This representation is a general form to represent the Reynolds stress but requires careful selection of the input features to ensure the Galilean invariance. To overcome these limitations, the nonlinear eddy viscosity model is often used as the base model, which is beyond the Boussinesq assumption and regards scalar invariants associated with velocity gradients as model inputs. Different techniques, including symbolic expression and neural networks, have been introduced to represent the Reynolds stress based on the nonlinear eddy viscosity model. In this work, we focus on the neural network-based representation, i.e., tensor basis neural network [4].
The tensor basis neural network is able to represent the anisotropy of the Reynolds stress flexibly due to its great expressive power. Neural networks have expressive power that increases exponentially with the depth of the network. Hence, it has the potential to achieve a universal or at least unified model to represent various flow characteristics. That is, one model form is applicable to multiple classes of flows, such as attached flows, separated flows, and corner flows, possibly with internal switching or branching. While such universality is not the objective of this work, it is appealing to have such possibilities in the future. However, the tensor basis neural network has intrinsic drawbacks due to the weak equilibrium assumption and the black-box feature. On the one hand, although the tensor basis neural network is the most general
nonlinear eddy viscosity model, it is still a local model under the weak equilibrium assumption. That is, the Reynolds stress anisotropy only depends on the local velocity gradient. To address this issue, the vector-cloud neural network [17] has been proposed to enforce the nonlocal dependence in the representative form. On the other hand, the trained neural network is still a black box and has encountered an interpretability crisis for neural network-based turbulence modeling. Therefore, it is of significant necessity to interpret the physical mechanism behind the learned neural network and guide the development of turbulence closures.
In this work, we aim to physically interpret the behavior of the learned turbulence model in terms of predictive improvement. Neural networks can represent complex functional relationships between physical quantities but have poor interpretability on the learned model behavior. In contrast, symbolic models are often assumed as interpretable since they can provide the causes and effects of the model behavior in the _a priori_ sense. It is noted that when the learned symbolic model provides a complicated expression that is highly composited or has many high-order terms, which would also be difficult to interpret. Some post-hoc approaches, such as the Shapley additive explanations (SHAP) method [18], have been proposed to interpret the black-box neural network models. These methods can indicate the importance value of each input feature on the neural network output [19] in the _a posteriori_ sense. However, they cannot provide physical insights into the mechanism of the learned model for improving the RANS prediction.
In this work, we investigate the physical interpretability of the learned turbulence model, represented with tensor basis neural networks [4]. The ensemble Kalman method is adopted to learn turbulence models from sparse observation data, including the lift force and velocity. We show that the behavior of the learned neural network is physically interpretable to improve flow predictions on two canonical flows, i.e., separated flow in the S809 airfoil and secondary flow in a square duct. The ensemble method can adjust the nonlinearity of the learned model to capture the different flow characteristics. Moreover, the capability of the ensemble Kalman method is shown in learning turbulence models from very sparse observation data. In addition, the normalization strategy is investigated to avoid feature clustering due to the stagnation point of airfoil flows. We note that the interpretability in this work refers to the model behavior of the trained neural network. It is different from the interpretability of neural networks in the machine learning community, which aims to present the features of neural networks in an understandable term, e.g., indicating the importance of input features with specific contribution values [18].
The rest of the paper is outlined as follows. The ensemble-based modeling methodology is elaborated in Section 2. The case setups and the training results are presented in Sections 3 and 4, respectively. Finally, the paper is concluded in Section 5.
## 2 Methodology
For incompressible turbulent flows, the mean flow can be described by the RANS equation as
\[\begin{split}\nabla\cdot\mathbf{u}&=0\\ \mathbf{u}\cdot\nabla\mathbf{u}&=-\nabla p+\nu\nabla^{2}\mathbf{u }-\nabla\cdot\mathbf{\tau},\end{split} \tag{1}\]
where \(p\) is the mean pressure normalized by the flow density, \(\mathbf{u}\) is the velocity vector, \(\nu\) represents the molecular viscosity, and \(\mathbf{\tau}\) indicates the Reynolds stress1 to be modeled. Here we aim to construct neural-network-based turbulence models by incorporating available observations, such as lift force and velocity measurements. In the following, we introduce the Reynolds stress representation and the ensemble-based training method adopted in this work.
Footnote 1: Here we followed Pope’s convention [20] of defining Reynolds stress as the covariance of the velocity fluctuations i.e., \(\tau_{ij}=\left\langle u^{\prime}_{i}u^{\prime}_{j}\right\rangle\). We note that in the literature (e.g., [21]) it is more common to call \(-\left\langle u^{\prime}_{i}u^{\prime}_{j}\right\rangle\) the Reynolds stress because of its role in the RANS momentum equations.
### Neural-network-based turbulence closure
The tensor basis neural network [4] is used to represent the Reynolds stress due to the flexibility to represent the anisotropy of Reynolds stress. In the tensor basis neural network, the Reynolds stress \(\mathbf{\tau}\) is decomposed into a deviatoric part and an isotropic part, as
\[\begin{split}\mathbf{\tau}&=2k\sum_{\ell=1}^{10}g^{( \ell)}\mathbf{T}^{(\ell)}+\frac{2k}{3}\mathbf{I},\\ \text{with}\quad g^{(\ell)}&=g^{(\ell)}\left(\theta _{1},\dots,\theta_{5}\right),\end{split} \tag{2}\]
where \(k\) is the turbulent kinetic energy, \(\mathbf{T}\) is the tensor basis, \(g^{(\ell)}\) is the coefficient of the tensor basis to be determined, \(\mathbf{\theta}\) is the set of scalar invariants, and \(\mathbf{I}\) is the identity matrix. The \(g\) functions are represented with neural networks in this work, which approximate functional mappings from the scalar invariants \(\mathbf{\theta}\) to the basis coefficients. There are ten independent tensor bases based on the Cayley-Hamilton theory [22] and five scalar invariants for incompressible flows. In the 2D scenario, only two scalar invariants and three tensor bases remain [20]. Further, the third tensor basis can be incorporated into the pressure term for incompressible flows, leaving only two scalar invariants. The first four tensor bases can be written as
\[\begin{split}\mathbf{T}^{(1)}&=\hat{\mathbf{S}}, \qquad\mathbf{T}^{(2)}=\hat{\mathbf{S}}\hat{\mathbf{W}}-\hat{\mathbf{W}}\hat{ \mathbf{S}},\\ \mathbf{T}^{(3)}&=\hat{\mathbf{S}}^{2}-\frac{1}{3} \{\hat{\mathbf{S}}^{2}\}\mathbf{I},\qquad\mathbf{T}^{(4)}=\hat{\mathbf{W}}^{2 }-\frac{1}{3}\{\hat{\mathbf{W}}^{2}\}\mathbf{I}.\end{split} \tag{3}\]
In the formula above, \(\{\cdot\}\) denotes the trace operator, and the \(\hat{\mathbf{S}}\) and \(\hat{\mathbf{W}}\) are the normalized strain rate and the rotation rate based on the turbulence time scale \(\tau_{s}\), i.e.,
\[\begin{split}\hat{\mathbf{S}}&=\tau_{s}\mathbf{S} \quad\hat{\mathbf{W}}=\tau_{s}\mathbf{W}\\ \text{with}&\mathbf{S}&=\frac{1}{2}( \nabla\mathbf{u}+\nabla\mathbf{u}^{\top})\quad\text{and}\quad\mathbf{W}=\frac{1}{2}( \nabla\mathbf{u}-\nabla\mathbf{u}^{\top}).\end{split} \tag{4}\]
The time scale \(\tau_{s}\) can be estimated with the turbulent kinetic energy \(k\) and the dissipation rate \(\varepsilon\) or the specific dissipation rate \(\omega\). It is noted that the time scale becomes zero as we approach the wall. Hence one can bound the time scale with the Kolmogorov scale [23] as
\[\tau_{s}=\max\left(\frac{k}{\varepsilon},C_{\tau}\sqrt{\frac{\nu}{\varepsilon} }\right), \tag{5}\]
where \(C_{\tau}\) is constant and set as 6 in this work.
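Putting Eqs. (2)-(5) together, a minimal per-cell sketch of the closure assembly might read as follows; this is our own illustration, not the authors' implementation, with `g` holding the network-predicted coefficients of the four tensor bases.

```python
import numpy as np

def reynolds_stress(grad_u, k, eps, g, nu, C_tau=6.0):
    """Assemble tau from the first four tensor bases, Eqs. (2)-(5)."""
    S = 0.5 * (grad_u + grad_u.T)                    # strain-rate tensor
    W = 0.5 * (grad_u - grad_u.T)                    # rotation-rate tensor
    tau_s = max(k / eps, C_tau * np.sqrt(nu / eps))  # bounded time scale, Eq. (5)
    S_hat, W_hat = tau_s * S, tau_s * W
    I = np.eye(3)
    T = [S_hat,                                               # T^(1)
         S_hat @ W_hat - W_hat @ S_hat,                       # T^(2)
         S_hat @ S_hat - np.trace(S_hat @ S_hat) / 3.0 * I,   # T^(3)
         W_hat @ W_hat - np.trace(W_hat @ W_hat) / 3.0 * I]   # T^(4)
    b = sum(g_l * T_l for g_l, T_l in zip(g, T))     # anisotropy from NN outputs
    return 2.0 * k * b + 2.0 * k / 3.0 * I           # Eq. (2)
```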
### Normalization of input features
The input features of the neural networks should be scaled within \([-1,1]\) to accelerate the training convergence. The min-max normalization is able to confine the input features within the range of \([0,1]\) (see e.g., Ref. [2]). A normalized feature \(\hat{\theta}\) can be formulated as \(\hat{\theta}=(\theta-\theta_{\min})/(\theta_{\max}-\theta_{\min})\), where the subscript'min' and'max' indicate the minimum and maximum value of a given feature \(\theta\). However, when there exist singular points with extremely large magnitudes in computational domains, this normalization strategy can lead to severe feature clustering. For instance, the velocity gradient near a stagnation point can have an extremely large value. Using the global maximum value to normalize entire input features will lead to most feature values clustering around 0, which would significantly affect the training performance.
In this work, the scalar invariants \(\hat{\mathbf{\theta}}\) are normalized with the local time scale \(\tau_{s}\) [e.g., 3; 15; 24] based on
\[\begin{split}\hat{\theta}_{1}&=\{\tilde{\mathbf{S} }^{2}\},\qquad\hat{\theta}_{2}=\{\tilde{\mathbf{W}}^{2}\},\\ \tilde{\mathbf{S}}&=\frac{\mathbf{S}}{\|\mathbf{S} \|+1/\tau_{s}},\quad\text{and}\qquad\tilde{\mathbf{W}}=\frac{\mathbf{W}}{\| \mathbf{W}\|+1/\tau_{s}}.\end{split} \tag{6}\]
With this specific normalization, the scalar invariants can be scaled within \([-1,1]\) to avoid feature clustering along certain directions. The normalized scalar invariants \(\mathbf{\theta}\) are used as the neural network inputs, and the coefficients \(g\) of the tensor bases are regarded as the outputs. Further, the neural network outputs \(g\) are combined with the tensor bases \(\mathbf{T}\) to form the anisotropic part of the Reynolds stress. The obtained Reynolds stress is used to predict the velocity and pressure fields by solving the RANS equations. Moreover, the constructed Reynolds stress \(\mathbf{\tau}\) is used to compute the turbulence production term in the turbulent kinetic energy and dissipation rate transport equations. Further, the neural network weights are optimized by incorporating observation data based on the ensemble Kalman method, which will be illustrated in the following subsection.
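A sketch of the feature normalization in Eq. (6), together with a small TensorFlow network mapping \((\theta_{1},\theta_{2})\) to the basis coefficients, is given below; the layer widths are illustrative assumptions, not the trained architecture.

```python
import numpy as np
import tensorflow as tf

def scalar_invariants(S, W, tau_s):
    """Normalized inputs theta_1, theta_2 of Eq. (6), bounded within [-1, 1]."""
    S_t = S / (np.linalg.norm(S) + 1.0 / tau_s)      # Frobenius-norm scaling
    W_t = W / (np.linalg.norm(W) + 1.0 / tau_s)
    return np.trace(S_t @ S_t), np.trace(W_t @ W_t)

g_net = tf.keras.Sequential([
    tf.keras.Input(shape=(2,)),                      # (theta_1, theta_2)
    tf.keras.layers.Dense(10, activation="tanh"),
    tf.keras.layers.Dense(10, activation="tanh"),
    tf.keras.layers.Dense(4),                        # g^(1)..g^(4); 2 for the airfoil
])
```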
### Model-consistent training with ensemble Kalman method
Model-consistent training couples a neural network and a CFD solver during the training process. By doing this, it can ensure consistency between the training and prediction environments, thereby alleviating the ill-conditioning of the RANS model operator [25]. Moreover, this strategy can leverage sparse observation data, e.g., velocity measurements, to train the neural network-based model. This is in contrast to _a priori_ training, where the model is often trained with full-field Reynolds stress data and has poor generalizability due to the inconsistency issue [10]. Model-consistent training amounts to finding the optimal weights of the neural networks that lead to the best fit with the sparse observation data.
Various training methods can be used to perform the model-consistent training, including the adjoint method [7], the ensemble method [10], and the genetic programming method [8]. We use the ensemble Kalman method for model training due to its non-derivative nature and good training efficiency. The ensemble method is a statistical inference method that uses an ensemble of samples to guide the optimization [26], which has been used for the physical modeling of subsurface flows [27] and turbulent flows with high Reynolds numbers [11; 24]. We use this method to train the turbulence model represented with the tensor basis neural network. The update scheme of the ensemble Kalman method can be formulated as
\[\begin{split}&\mathbf{w}_{j}^{i+1}=\mathbf{w}_{j}^{i}+\mathsf{K}(\mathsf{ y}_{j}-\mathsf{H}\mathbf{w}_{j}^{i})\\ &\text{with}\quad\mathsf{K}=\mathsf{PH}^{\top}(\mathsf{H}\mathsf{ PH}^{\top}+\mathsf{R})^{-1}.\end{split} \tag{7}\]
Herein \(\mathsf{H}\) is the local gradient of the model prediction \(\mathcal{H}[\mathbf{w}]\) with respect to the weights of the neural networks \(\mathbf{w}\), \(\mathsf{P}\) is the model error covariance, \(\mathsf{R}\) is the observation error covariance, \(\mathsf{y}\) is the observation data, and \(i\) and \(j\) represent the index of the optimization iteration and the sample, respectively. Explicit computation of the model operator \(\mathsf{H}\) is typically avoided by reformulating the Kalman gain matrix as
\[\mathsf{K}=\mathsf{S}_{w}\mathsf{S}_{y}^{\top}(\mathsf{S}_{y}\mathsf{S}_{y}^{ \top}+\mathsf{R})^{-1}.\]
The square-root matrices \(\mathsf{S}_{w}\) and \(\mathsf{S}_{y}\) are defined as
\[\mathsf{S}_{w}^{i} =\frac{1}{\sqrt{N_{e}-1}}\left[\mathbf{w}_{1}^{i}-\overline{\mathbf{w}}^ {i},\mathbf{w}_{2}^{i}-\overline{\mathbf{w}}^{i},\cdots,\mathbf{w}_{N_{e}}^{i}-\overline{ \mathbf{w}}^{i}\right], \tag{8a}\] \[\mathsf{S}_{y}^{i} =\frac{1}{\sqrt{N_{e}-1}}\left[\mathcal{H}[\mathbf{w}_{1}^{i}]- \mathcal{H}[\overline{\mathbf{w}}^{i}],\mathcal{H}[\mathbf{w}_{2}^{i}]-\mathcal{H}[ \overline{\mathbf{w}}^{i}],\cdots,\mathcal{H}[\mathbf{w}_{N_{e}}^{i}]-\mathcal{H}[ \overline{\mathbf{w}}^{i}]\right],\] (8b) \[\overline{\mathbf{w}}^{i} =\frac{1}{N_{e}}\sum_{j=1}^{N_{e}}\mathbf{w}_{j}^{i}, \tag{8c}\]
which are estimated from the samples at every iteration. In this work, we use the ensemble-based Kalman update scheme for learning turbulence models in a model-consistent manner. As pointed out in Ref. [10], in scenarios having large data sets, e.g., time-dependent three-dimensional flow fields, the present algorithm would be computationally expensive and would need to incorporate reduced-order techniques such as the truncated singular value decomposition [28].
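A compact sketch of the update in Eqs. (7)-(8) follows, with weights stored column-wise; `forward(w)` is an assumed interface that runs the RANS solver with network weights \(\mathbf{w}\) and returns the predicted observables, and the ensemble mean of the predictions is used in place of \(\mathcal{H}[\overline{\mathbf{w}}]\) in Eq. (8b), a common implementation choice.

```python
import numpy as np

def enkf_update(W, forward, y, R):
    """One ensemble Kalman step. W: (n_weights, N_e); y: observations;
    R: observation-error covariance (assumed a full matrix here)."""
    n_e = W.shape[1]
    HW = np.column_stack([forward(W[:, j]) for j in range(n_e)])
    Sw = (W - W.mean(axis=1, keepdims=True)) / np.sqrt(n_e - 1)    # Eq. (8a)
    Sy = (HW - HW.mean(axis=1, keepdims=True)) / np.sqrt(n_e - 1)  # ~ Eq. (8b)
    K = Sw @ Sy.T @ np.linalg.inv(Sy @ Sy.T + R)                   # Kalman gain
    y_pert = y[:, None] + np.random.multivariate_normal(           # per-sample
        np.zeros(y.size), R, size=n_e).T                           # observations
    return W + K @ (y_pert - HW)
```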
Note that the ensemble Kalman method can train the neural network-based model with multiple observation data sets, including measurements at various flow conditions. Specifically, we can incorporate the observation data at different flow conditions sequentially. This is achieved by training the neural networks with each observation set over several inner loops. The maximum iteration number of the inner loop is set as 3 in this work based on our sensitivity study. Moreover, the observation data sets are shuffled randomly before training, which allows escaping from local minima, similar to the stochastic gradient descent method [29]. Further, the Kalman update scheme is used to incorporate the observation data in the shuffled order until the entire data set is traversed. After that, the observation data are reshuffled and continue to be incorporated with the ensemble Kalman method. The practical implementation of the ensemble method is presented in A. One could also simply augment the observation vector with data from different flow conditions. However, the optimization may then drop into local minima and lead to unsatisfactory predictive accuracy in certain cases, since the ensemble method aims to reduce the L2 norm of the total data misfit. By shuffling the training data, the method is able to find the global minimum and provide more accurate turbulence models based on our numerical tests.
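The shuffled, sequential schedule described above might then be driven as in the short sketch below, reusing `enkf_update` from the previous sketch; `datasets` and `n_epochs` are illustrative names for the collection of observation sets and the outer training budget.

```python
import random

max_inner = 3                                 # inner-loop cap used in this work
for epoch in range(n_epochs):
    random.shuffle(datasets)                  # e.g. lift data at each flow condition
    for y, R, forward in datasets:
        for _ in range(max_inner):
            W = enkf_update(W, forward, y, R)
```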
## 3 Case setup
We use two cases to demonstrate the physical interpretation of the ensemble-based turbulence modeling, i.e., the flow over the S809 airfoil and the flow in a square duct. The two cases represent canonical separated flows and secondary flows, respectively. Both are challenging for conventional linear eddy viscosity models. The distinct flow characteristics are able to examine the flexibility of the ensemble-based method in learning interpretable models from partial observation. The details of the case setup are described in the following subsections.
### Flow over S809 airfoil
Flow over the S809 airfoil has been widely used for numerical validation of turbulence models as well as their data-driven counterparts [12; 7]. Such flows are challenging for linear eddy viscosity models at large angles of attack due to the flow separation. Conventional RANS models cannot accurately predict the massive flow separation, which further leads to the overestimation of the lift force beyond the stall angle [30]. Here we aim to interpret the behavior of the neural network-based model learned from lift force measurements with the ensemble Kalman method.
The Reynolds number is \(Re_{c}=2\times 10^{6}\) based on the inflow velocity and chord length. The angle of attack \(\alpha\) varies from \(1^{\circ}\) to \(18^{\circ}\). At large angles of attack, conventional turbulence models underestimate the separation zones, which leads to large discrepancies in the predictions of the lift force [30]. An unstructured mesh with around 78000 cells is used to discretize the computational domain. The mesh grid from Ref. [7] is adopted in this work, as shown in Fig. 1. The no-slip condition is employed on the airfoil surface. The height of the first cell in the normal direction corresponds to \(y^{+}\approx 1\).
For the flow around the S809 airfoil, the available measurement data is the lift force, which is an integral-type data source. Such limited observations increase the ill-posedness of the inverse problem [31; 32; 33]. That is, different model functions can provide similar lift forces. Further, the learned model could have poor predictive accuracy and robustness due to the ill-posedness issue. To alleviate this issue, we use observations at two angles of attack, i.e., \(8.2^{\circ}\) and \(14.24^{\circ}\), to train the model. The former corresponds to attached flow, while the latter corresponds to separated flow. Learning from both the attached and separated flows can provide a model with better predictive ability.
As for the setup of the ensemble-based learning algorithm, the number of samples is taken as 50. The initial relative variance of the samples is set as 0.1, which is used to draw the random samples. The measurements of the lift force [34] are used as training data. The relative observation error is set as 0.01. The first two scalar invariants \(\theta_{1}\) and \(\theta_{2}\), and the first two tensor coefficients \(g^{(1)}\) and \(g^{(2)}\), are used as the inputs and the outputs of the neural network, respectively. We use the \(k\)-\(\omega\) model [21] as the baseline model, which is more suitable for complex boundary layer flows with adverse pressure gradients than the standard \(k\)-\(\varepsilon\) model [35].
### Flow in a square duct
The secondary flow in a square duct is mainly driven by the imbalance of the Reynolds normal stresses \(\tau_{yy}-\tau_{zz}\)[36]. The linear eddy viscosity model is not able to predict the secondary flow since it cannot well estimate the anisotropy of the Reynolds stress. We use this case to demonstrate the flexibility of the ensemble
Figure 1: Computational domain and mesh grid for computations of flows over the S809 airfoil
method in building interpretable nonlinear models from sparse observation data of secondary flows.
The Reynolds number based on the bulk velocity and half of the duct height is \(Re_{h}=3500\) for this case. Only one quadrant of the physical domain is simulated, considering the symmetry of the flow with respect to the centerlines along the \(y\)- and \(z\)-axes. A mesh with \(50\times 50\) cells is used to discretize the domain. The no-slip condition is imposed on the wall, and the symmetry condition is imposed at the symmetry boundaries.
As for the setup of the ensemble-based learning algorithm, the number of samples is set as 50. The initial variance of the weights is set as 0.1. The DNS data [37] are used to train the neural network-based model. The velocity profiles at \(y/H=0.25,0.5,0.75,1.0\) are regarded as the observation data. The total number of observation data points is 200. The relative observation error is set as 0.01. For this case, we use the first two scalar invariants \(\theta_{1}\) and \(\theta_{2}\) as the input features, and the first four tensor coefficients \(g^{(1-4)}\) as the model outputs. Compared to the linear eddy viscosity model, the tensor bases \(\mathbf{T}^{(2)}\), \(\mathbf{T}^{(3)}\), and \(\mathbf{T}^{(4)}\) are introduced to capture the anisotropy of the Reynolds normal stresses, thereby producing the secondary flow. The \(k\)-\(\varepsilon\) model [35] is used as the baseline model in this case, since it is often taken as the basis of nonlinear eddy viscosity models for secondary flows [38; 39].
In this work, the open source CFD library OpenFOAM [40] is used to solve the RANS equations with turbulence models. Specifically, the built-in solver _simpleFOAM_ is used to solve the RANS equations, given the Reynolds stress fields. The Reynolds stresses are constructed with the neural networks, and the scalar invariants from the RANS computation are taken as inputs of the networks. The weights of the neural network are updated with the observation data based on the ensemble Kalman method. The TensorFlow [41] library is used to construct the neural network, and the DAFI code [42] is used to implement the ensemble Kalman method. The test cases and the weights of the learned neural network are publicly available [43] for reproducibility.
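One iteration of the solver coupling could look like the sketch below; the helper routines (`set_network_weights`, `assemble_reynolds_stress`, `write_turbulence_field`, `read_observables`) are placeholders we introduce for illustration, and in practice this orchestration is handled by DAFI.

```python
import subprocess

def forward(weights, case_dir):
    """Evaluate one ensemble member: weights -> tau -> RANS -> observables."""
    set_network_weights(g_net, weights)              # load candidate weights
    tau = assemble_reynolds_stress(case_dir, g_net)  # Eqs. (2)-(6) on the mesh
    write_turbulence_field(case_dir, tau)            # hand tau to OpenFOAM
    subprocess.run(["simpleFoam", "-case", case_dir], check=True)
    return read_observables(case_dir)                # e.g. velocities or C_l
```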
Figure 2: Computational domain and mesh grid for the fully-developed square duct case.
## 4 Results
### Flows over S809 airfoil
#### 4.1.1 Training performance
The learned model improves the prediction of the lift force compared to the baseline \(k\)-\(\omega\) model. The lift predicted with the learned and baseline models at the two chosen angles of attack is listed in Table 2. The baseline model predicts the aerodynamic lift \(C_{l}=1.25\) at the angle of attack \(14.24^{\circ}\), which deviates significantly from the experimental observation \(C_{l}=1.05\) [34] due to the massive flow separation. In contrast, the learned model provides \(C_{l}=1.07\), in good agreement with the experimental data. At \(\alpha=8.2^{\circ}\), the boundary layer is attached, and the baseline model provides a good prediction with \(C_{l}=0.97\). The learned model predicts \(C_{l}=1.00\), which deviates slightly from the observation \(C_{l}=0.95\). That is because the learning method decreases the data misfit at the two flow conditions simultaneously: the significant decrease of the lift force at \(\alpha=14.24^{\circ}\) is achieved at the cost of a slight discrepancy at \(\alpha=8.2^{\circ}\). In general, the learned model provides predictions close to the experimental measurements, which is not surprising since the experimental data are used to train the model function.
The prediction of the pressure coefficient \(C_{p}\) is improved with the learned model compared to the baseline model. Figure 3 shows the predicted \(C_{p}\) with comparison among the learned model, the baseline \(k\)-\(\omega\) model, and the experimental data. It can be seen that the baseline \(k\)-\(\omega\) model can predict well the pressure
\begin{table}
\begin{tabular}{c|c|c}
\hline
Cases & S809 airfoil & Square duct \\ \hline
mesh counts & \(\approx 78000\) & 2500 \\
Reynolds number & \(Re_{c}=2\times 10^{6}\) & \(Re_{h}=3500\) \\
data & \(C_{l}\) (at \(\alpha=8.2^{\circ}\) and \(14.24^{\circ}\)) & \(\mathbf{u}\) (at \(y/H=0.25,0.5,0.75,1\)) \\
baseline model & \(k\)–\(\omega\) & \(k\)–\(\varepsilon\) \\
initial relative variance & 0.1 & 0.1 \\
relative observation error & 0.01 & 0.01 \\
sample size & 50 & 50 \\ \hline
\end{tabular}
\end{table}
Table 1: Computational parameters used in the flow around the S809 airfoil and the flow in a square duct
\begin{table}
\begin{tabular}{c|c|c|c|c}
\hline
 & \(\alpha\) & baseline \(k\)–\(\omega\) & learned model & experiment [34] \\ \hline
\(C_{l}\) & \(14.24^{\circ}\) & 1.25 & 1.07 & 1.05 \\
\(C_{l}\) & \(8.2^{\circ}\) & 0.97 & 1.00 & 0.95 \\ \hline
\end{tabular}
\end{table}
Table 2: Summary of the predictions of the aerodynamic lift \(C_{l}\) with the learned and baseline \(k\)–\(\omega\) models compared to the experimental data for the S809 airfoil
distribution on the surface of the S809 airfoil at \(\alpha=8.2^{\circ}\). However, at \(\alpha=14.24^{\circ}\) the baseline model underestimates the suction pressure on the upper surface of the airfoil. In contrast, the learned model with the ensemble method is able to predict \(C_{p}\) in better agreement with the experimental data for both angles of attack. The results demonstrate that the learned model can leverage integral data, i.e., lift force \(C_{l}\), to improve the prediction of wall pressure distribution.
#### 4.1.2 Physical interpretation of the model behavior
The learned model can accurately predict the aerodynamic lift and the wall pressure distribution beyond the stall angle compared to the baseline model. Such improvements can be interpreted by analyzing the model behavior in terms of the friction coefficient, flow separation, and modeled quantities. Therefore, we further provide comparisons between the learned model and the baseline model in the following.
The friction coefficient \(C_{f}\) on the airfoil is investigated to interpret the reduction of \(C_{l}\) with the learned model at \(\alpha=14.24^{\circ}\). Figure 4 shows \(C_{f}\) from the baseline and the learned models. The learned model leads to an early onset of flow separation, while the baseline model delays the separation. The early separation leads to an enlarged re-circulation region and is therefore responsible for the modification of \(C_{p}\) on the suction side, as shown in Fig. 3. It is also observed that the friction coefficient is reduced on the pressure side. That is because the upstream shift of the separation point on the upper surface changes the flow around the airfoil globally, which also affects the flow on the lower surface.
The enlarged re-circulation region with the learned model can be clearly seen in Figure 5, which presents streamlines around the airfoil. The baseline \(k\)-\(\omega\) model predicts a relatively small separation bubble compared to that with the learned model, which is consistent with the lift prediction listed in Table 2. The learned
Figure 3: Wall pressure coefficient \(C_{p}\) at \(\alpha=8.2^{\circ}\) and \(14.24^{\circ}\) with the learned model and the baseline \(k\)–\(\omega\) model compared to the experimental data [34] for the S809 airfoil
model produces a sufficiently large separation region, which leads to the improvement of \(C_{l}\).
We investigate the model function \(g\) to interpret the reason for the early onset of the flow separation. The learned model for the S809 airfoil is almost a linear eddy viscosity model, as can be seen in Figure 6, which presents the learned \(g\) functions versus \(\theta_{1}\) at the fixed planes \(\theta_{2}/\theta_{\max}=0.25\) and \(0.75\). The magnitude of the learned \(g^{(1)}\) function decreases to around \(0.05\) and \(0.075\) at the planes \(\theta_{2}/\theta_{\max}=0.25\) and \(0.75\), respectively, while the magnitude for the baseline model is constant at \(0.09\). In contrast, the learned \(g^{(2)}\) function is almost zero, with an order of magnitude of \(10^{-6}\), similar to the baseline model, i.e., \(g^{(2)}=0\). Therefore, the learned model can be considered a linear eddy viscosity model, which is capable of capturing the flow separation on the S809 airfoil.
Recall that the Reynolds stress anisotropy is the linear combination of the learned function \(g(\mathbf{\theta})\) and the tensor bases \(\mathbf{T}\), i.e., \(\mathbf{b}=\sum g^{(\ell)}\mathbf{T}^{(\ell)}\). Fig. 7 presents contour plots of the magnitude of each tensor component. The first term is linear in the strain rate (see Eqs. (2) and (3)). For the airfoil case, the nonlinear term is almost zero, as shown in Fig. 7(b), which further confirms that the learned model can be considered a linear model under the Boussinesq assumption. Such linear models can achieve good predictions of the lift force for the 2D airfoil case. This is consistent with the work of Singh et al. [30; 12],
Figure 4: The comparison of the friction coefficient between the learned model and the baseline model at the angle of attack of \(14.24^{\circ}\). The round circles in the right panel indicate the separation locations.
Figure 5: Predicted separation bubbles at angles of attack \(14.24^{\circ}\) with the learned model and the \(k\)–\(\omega\) model for the S809 airfoil case.
where a multiplicative correction is added in the turbulence transport equation to modify the eddy viscosity and improve the lift prediction beyond the stall angle. The linear eddy viscosity assumption is sufficient in this study of the S809 airfoil.
Since the learned model can be regarded as a linear eddy viscosity model, we further investigate the effects of the learned eddy viscosity on the model prediction. Figure 8 shows the predicted eddy viscosity with the learned model compared to the baseline \(k\)-\(\omega\) model. The eddy viscosity is computed based on the \(g^{(1)}\) function, which can be formulated as
\[\nu_{t}=-\frac{g^{(1)}k}{C_{\mu}\omega}.\]
The model constant is \(C_{\mu}=0.09\). It can be seen clearly that the eddy viscosity is reduced, particularly around the upstream boundary layer and the separated region. The eddy viscosity can transfer the energy
Figure 6: Plots of the learned mapping between the scalar invariants \(\mathbf{\theta}\) and the tensor coefficient \(\mathbf{g}\), compared to the baseline for the S809 airfoil case. For the learned model, the plots indicate the learned function at \(\theta_{2}/\theta_{\max}=0.25\) and \(0.75\).
Figure 7: Learned tensor components for the S809 airfoil case. The arrow indicates the direction of incoming flow (\(14.24^{\circ}\)).
from the outer flow to the boundary layer, which is able to restrain momentum reduction and further flow separation. The reduced eddy viscosity weakens the energy transfer from the outer flow and makes the boundary layer less resistant to the adverse pressure gradient. As such, the reduced eddy viscosity upstream can induce a relatively early onset of the flow separation and, further, a large separation region. We note that reduced eddy viscosity in the upstream boundary layer is also observed [44] in the model learned with the adjoint-based method [12], which further confirms the physical interpretation of the improved model prediction. In addition, the eddy viscosity at the lower surface is reduced as well compared to the baseline \(k\)-\(\omega\) model. This explains the friction coefficient reduction on the pressure side presented in Fig. 4, because the wall shear stress (or the velocity gradient) is sensitive to small changes near the wall, where the eddy viscosity is slightly reduced.
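As a small post-processing sketch, the normalized eddy viscosity shown in Fig. 8 can be recovered from the learned \(g^{(1)}\) field as follows (a hypothetical helper of our own, not part of the published code):

```python
def normalized_eddy_viscosity(g1, k, omega, nu, C_mu=0.09):
    """nu_t = -g^(1) k / (C_mu omega), normalized by the molecular viscosity."""
    nu_t = -g1 * k / (C_mu * omega)
    return nu_t / nu

# Sanity check: for the baseline model g1 = -0.09 everywhere, so nu_t
# reduces to the standard k-omega eddy viscosity k / omega.
```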
The physically interpretable model behavior is accompanied by good predictive ability. Figure 9 presents the predicted lift force at angles of attack from \(1^{\circ}\) to \(18^{\circ}\). The results show that the learned model generalizes well to other flow conditions at different angles of attack. It can be seen that the baseline model has significant discrepancies in the lift coefficients for angles of attack larger than around \(7.5^{\circ}\). In contrast, the learned model improves the prediction of \(C_{l}\) across the range of angles \(\alpha\). The reason for the improved prediction at other angles is the appropriate estimation of flow separation, as shown at the top of Figure 9. Specifically, at a small angle of attack, e.g., \(\alpha=1^{\circ}\), the baseline and learned models produce similar attached flow around the airfoil, and hence both predict the lift force in good agreement with the experiment. However, at the large angle of attack \(\alpha=11^{\circ}\), the baseline model still predicts attached flow, which leads to an overestimation of \(C_{l}\) compared to the experimental data. In contrast, the learned model captures the flow separation near the trailing edge, which reduces the lift force and provides good agreement with the lift force data. Additionally, at \(\alpha=18^{\circ}\), the baseline model predicts flow separation but still underestimates the separation bubble size, which leads to a lift force larger than the experimental measurement. The learned model leads to an early onset of the flow separation and provides
Figure 8: Comparison of the eddy viscosity between the learned and baseline \(k\)–\(\omega\) models for the S809 airfoil. Note that the eddy viscosity is normalized by the molecular viscosity as \(\nu_{t}/\nu\). The arrow indicates the direction of incoming flow (\(14.24^{\circ}\)).
a larger separation bubble compared to the baseline model, thereby improving the lift force prediction. The predictive performance of the learned model at the additional angles \(\alpha=11^{\circ}\) and \(18^{\circ}\) can be found in Appendix B. Further generalization tests with different geometries are beyond the scope of the present work and will be conducted in the near future. We note that the adjoint-based learning method has been used for the S809 airfoil, demonstrating that the learned model can generalize well to different configurations such as the S805 and S814 airfoils [12]. Here we use the ensemble Kalman method, which is comparable to the adjoint method in model learning, as demonstrated in Ref. [10]. Hence the current learned model could be expected to generalize to other cases as the adjoint-learned model does.
The S809 airfoil case has been used in various works [12; 7; 44; 11], including Singh et al. (2017) [12], where the learned model suppresses the turbulent production in the upstream boundary layer, leading to early flow separation. The difference between the present work and previous studies mainly lies in two aspects. First, the current modeling framework and training method are different from the previous works. Specifically, a nonlinear eddy viscosity model is used in this study since it is flexible enough to capture both separated and secondary flows, while previous studies including Singh et al. (2017) [12] use a linear eddy viscosity model, which is not able to predict secondary flows. Moreover, the ensemble Kalman method is used to train the neural networks in this work, while previous studies [12; 7; 44] often use the adjoint-based method for model inference. Second, the case in this work highlights the consistency of the learned model behaviors independent of the training techniques, in addition to demonstrating, as previous studies did, the capability of data-driven methods to improve predictions. That is, different
Figure 9: Tests on various angles of attack with comparison among the baseline model, the learned model, and the experiment [34] for the S809 airfoil case. The training cases are also indicated in the plot.
model representations and training methods lead to similar predictive improvements and model behaviors.
### Flow in a square duct
#### 4.2.1 Training performance
The ensemble Kalman method can learn a neural-network-based turbulence model with improved velocity predictions for the square duct case. This can be seen in Figure 10, which presents vector and contour plots of the velocities. The vector plots are shown in the first column of Figure 10, where the isolines indicate the levels \(u_{y}=0.5,1.0\), and \(1.2\). They show that the baseline model cannot predict the in-plane secondary flow, while the learned model estimates the in-plane velocity vectors in a pattern similar to the DNS data. The contour plots of \(u_{x}\) and \(u_{y}\) are presented in the last two columns of Figure 10. The plots of \(u_{z}\) are omitted for brevity since it is symmetric to the vertical velocity \(u_{y}\). The axial velocity \(u_{x}\) predicted by both the baseline and the learned models agrees well with the DNS data. The vertical velocity \(u_{y}\) is not captured at all by the baseline model, while the learned model captures patterns similar to the DNS results.
Figure 10: Velocity \(u_{x}\) and \(u_{y}\) predicted from the learned models (center row) and baseline model (bottom row), compared with the ground truth (top row), for the square duct case. The velocity vectors are plotted along with contours of the streamwise velocity \(u_{x}\).
The learned model improves the prediction of the in-plane velocity by capturing the Reynolds stress imbalance and the Reynolds shear stress. This is supported by Figure 11, where the Reynolds stress components and the imbalance of the Reynolds normal stresses are presented. The in-plane velocity is driven by the Reynolds stress imbalance \(\tau_{yy}-\tau_{zz}\) and the Reynolds shear stress \(\tau_{yz}\) based on the axial vorticity transport equation [36]:
\[u_{y}\frac{\partial\omega_{x}}{\partial y}+u_{z}\frac{\partial\omega_{x}}{ \partial z}-\nu\nabla^{2}\omega_{x}+\frac{\partial^{2}}{\partial y\partial z} \left(\mathbf{\tau}_{zz}-\mathbf{\tau}_{yy}\right)+\left(\frac{\partial^{2}}{\partial y \partial y}-\frac{\partial^{2}}{\partial z\partial z}\right)\mathbf{\tau}_{yz}=0. \tag{9}\]
For this reason, capturing the in-plane velocity requires estimating the Reynolds normal stress imbalance \(\tau_{yy}-\tau_{zz}\) and the Reynolds shear stress \(\tau_{yz}\) well. From Figure 11, the learned model shows significant improvements in the prediction of \(\tau_{yz}\) and \(\tau_{yy}-\tau_{zz}\) compared to the baseline. Although the model still has discrepancies with the DNS data near the duct center, the noticeable improvement in \(\tau_{yz}\) and \(\tau_{yy}-\tau_{zz}\) allows good agreement with the DNS data in the in-plane velocity prediction. Specifically, the baseline model estimates almost zero for both the Reynolds shear stress \(\tau_{yz}\) and the imbalance of the Reynolds normal stresses \(\tau_{yy}-\tau_{zz}\) in the entire computational domain. In contrast, the learned model predicts them in close agreement with the DNS, which significantly improves the in-plane velocity as shown in Fig. 10. For the Reynolds normal stresses \(\tau_{xx}\) and \(\tau_{yy}\) and the Reynolds shear stress \(\tau_{xy}\), the baseline model and the learned model give similar predictions, since the in-plane velocity cannot guide the training of these Reynolds stress components.
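To make this mechanism concrete, the two Reynolds-stress source terms in Eq. (9) can be evaluated from the predicted stress fields as in the sketch below (our own illustration, assuming a uniform grid and NumPy finite differences); both terms vanish identically for the baseline model, which is why it produces no secondary flow:

```python
import numpy as np

def vorticity_sources(tau_yy, tau_zz, tau_yz, dy, dz):
    """Source terms of Eq. (9): S1 = d^2/(dy dz)(tau_zz - tau_yy) and
    S2 = (d^2/dy^2 - d^2/dz^2) tau_yz, on a uniform (y, z) grid."""
    imbalance = tau_zz - tau_yy
    S1 = np.gradient(np.gradient(imbalance, dy, axis=0), dz, axis=1)
    d2_dy2 = np.gradient(np.gradient(tau_yz, dy, axis=0), dy, axis=0)
    d2_dz2 = np.gradient(np.gradient(tau_yz, dz, axis=1), dz, axis=1)
    return S1, d2_dy2 - d2_dz2
```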
The profiles of the velocity and the Reynolds stresses at \(y/h=0.25,0.5,0.75,1\) are provided in Fig. 12. It can be seen that the streamwise velocity \(u_{x}\) is similar between the baseline model and the learned model, and both agree well with the DNS data. As for the in-plane velocity \(u_{y}\), the baseline model is not able to predict the in-plane velocity and provides \(u_{y}=0\) over the entire domain. In contrast, the learned model significantly improves the prediction of \(u_{y}\), in better agreement with the DNS data. The plots of the Reynolds stresses show that the learned model provides better predictions of the imbalance of the Reynolds normal stresses \(\tau_{yy}-\tau_{zz}\) than the baseline model. As for the Reynolds shear stress \(\tau_{yz}\), both the learned and baseline models have noticeable discrepancies from the DNS data. Also, it is observed that the learned model has larger discrepancies near the diagonal line of the computational domain compared to the baseline model, which is consistent with the plots in Fig. 11. Additional results for the velocity and the Reynolds stresses at \(y/h=0.2,0.4,0.6,0.8\) are presented in Appendix B.
#### 4.2.2 Physical interpretation of model behavior
The behavior of the learned model can be interpreted based on the learned tensor coefficients \(g\). In the secondary flow, the axial velocity \(u_{x}\) is orders of magnitude larger than the in-plane velocity. Moreover, only four Reynolds stress components, i.e., the Reynolds shear stresses \(\tau_{xy}\), \(\tau_{xz}\), and \(\tau_{yz}\) and the Reynolds normal stress imbalance \(\tau_{yy}-\tau_{zz}\), affect the velocity [45]. The former two components affect the axial velocity, and the latter two, \(\tau_{yz}\) and \(\tau_{yy}-\tau_{zz}\), affect the in-plane velocity. It can be further derived [45] that only the
coefficient \(g^{(1)}\) and the combination \(g^{(2)}-0.5g^{(3)}+0.5g^{(4)}\) can be learned with velocity data when only the first four tensor bases are considered. Moreover, there is only one independent scalar invariant since \(\theta_{1}\approx-\theta_{2}\)[2]. Therefore, we investigate the functional mapping from the scalar invariant \(\theta_{1}\) to the coefficient \(g^{(1)}\) and the combination \(g^{(2)}-0.5g^{(3)}+0.5g^{(4)}\).
The coefficient \(g^{(1)}\) and the combination of \(g^{(2-4)}\) are shown in Figure 13. The learned \(g^{(1)}\) function is shown in Figure 13(a). Note that the coefficient \(g^{(1)}\) is equivalent to \(-C_{\mu}\) of the \(k\)-\(\varepsilon\) model; the difference is that here the coefficient depends on the local scalar invariants rather than being a constant, i.e., \(-0.09\). The learned \(g^{(1)}\) function varies slightly from \(-0.087\) to \(-0.078\). For the small scalar invariants located around the duct center, the magnitude of the \(g^{(1)}\) function is less than \(0.08\). As the scalar invariant increases, the magnitude increases to around \(0.087\), which is slightly less than the baseline value (i.e., \(0.09\)). The baseline model provides a combination of \(g^{(2-4)}\) of almost zero, which cannot capture the in-plane velocity. In contrast, the learned model increases the magnitude of the
Figure 11: Reynolds normal stresses \(\tau_{yy}\) and \(\tau_{zz}\), Reynolds shear stresses \(\tau_{yz}\), and imbalance of Reynolds normal stresses \(\tau_{yy}-\tau_{zz}\) predicted from the learned model (center row) and the baseline model (bottom row), compared with the ground truth DNS (top row), for the square duct case.
combination to the range of roughly \([0.0025,0.01]\). This leads to nonlinear functional mappings between the Reynolds stress and the strain rate. Such nonlinear models capture the Reynolds shear stress \(\tau_{yz}\) and the Reynolds normal stress imbalance \(\tau_{yy}-\tau_{zz}\), which further improve the prediction of the in-plane velocity.
The ensemble-based model-consistent training is flexible enough to provide interpretable models based on sparse observations. This is supported by the results for the tensor components shown in Figure 14. They show that the linear tensor component \(g^{(1)}\mathbf{T}^{(1)}\) from the learned model is larger than the nonlinear components, i.e., \(g^{(2)}\mathbf{T}^{(2)}\), \(g^{(3)}\mathbf{T}^{(3)}\), and \(g^{(4)}\mathbf{T}^{(4)}\), but of a comparable order of magnitude. This is in contrast to the S809 airfoil case, where the linear tensor component is larger than the nonlinear components by several orders of magnitude, as shown in Fig. 7. The relatively large magnitude of the nonlinear tensors in this case is due to the secondary
Figure 12: Prediction of velocity and Reynolds stress along profiles at \(y/H=0.25,0.5,0.75,1\) with comparison among the learned model, the baseline model, and the DNS data, for the square duct case
flow characteristics that are driven by the imbalance of the Reynolds normal stress. The linear tensor \(g^{(1)}\mathbf{T}^{(1)}\) cannot capture the anisotropy of the Reynolds stress, and the nonlinear components play dominant roles in predicting in-plane velocities. Hence, for the square duct case, the ensemble-based training leads to a nonlinear model with considerable magnitude for the nonlinear terms.
In general, the ensemble-based method provides an interpretable turbulence model with appropriate nonlinearity according to the limited observation data. For instance, in the scenario of separated flows over airfoils, optimizing the linear eddy viscosity is able to remedy the deficiency under adverse pressure gradients, as shown in Figure 7. In contrast, for the secondary flow, the nonlinear terms are required to accurately estimate the imbalance of the Reynolds normal stresses, as shown in Figure 14, which is the driving force for the axial vorticity. We emphasize that the available observations are often sparse in practical applications, e.g., the lift force and sparse velocity measurements used in this work. Such severe ill-posedness poses challenges to the training method in learning dominant physical mechanisms across various flow characteristics. Hence, the flexibility of ensemble-based training is demonstrated in discovering interpretable models from sparse data.
The square duct case has been used in Zhang et al. (2022) [10] as a proof of concept for the ensemble-based learning method. In contrast, the current study aims to demonstrate the flexibility of the ensemble method in capturing separated and secondary flows by adjusting the nonlinearity of the turbulence model. Specifically, it is observed here that the ensemble method can learn a linear eddy viscosity model for the separated flow and a nonlinear eddy viscosity model for the secondary flow, which differs from the previous work [10]. Moreover, here we
Figure 13: Comparison of the model function \(g^{(1)}\) and the combination \(g^{(2)}-0.5g^{(3)}+0.5g^{(4)}\) between the learned and the baseline models for the square duct case
use sparse DNS data to train neural network models, which shows the capability of the ensemble Kalman method to handle sparse data in realistic applications. In contrast, in the previous work [10], full-field flow data approximated with the quadratic model of Shih (1993) [38] is used as synthetic truth, which is not identical to the DNS data.
## 5 Conclusions
This work investigates the physical interpretation of neural-network-based turbulence modeling with the ensemble Kalman method. The observation data, including aerodynamic lift and velocity measurements, are used to train the turbulence model represented with a tensor-basis neural network. The method is applied to the flow around the S809 airfoil and the flow in a square duct. Both cases show that the learned model significantly improves the flow predictions, and the model improvement can be interpreted from a physical viewpoint. In the S809 airfoil, the learned model reduces the eddy viscosity around the upstream boundary layer and captures the appropriate onset of the flow separation, which improves the prediction of the lift force compared to the baseline \(k\)-\(\omega\) model. The learned model can be well generalized to different angles of attack. In the square duct case, the learned model produces a nonlinear eddy viscosity model, which captures the imbalance of the Reynolds normal stress and the in-plane velocity. The ensemble Kalman method can provide appropriate turbulence models based on limited observation data. For the flow over the S809 airfoil, the training method provides an optimized linear eddy viscosity model based on the lift force measurements, which is able to capture the flow separation. In contrast, for the flow in a square duct, the training method provides a nonlinear eddy viscosity model to estimate the anisotropy of Reynolds stress and capture the in-plane secondary flows.
## Appendix A Practical implementation
The practical implementation of the ensemble-based turbulence modeling framework is detailed in this appendix. Given the observation error \(\mathsf{R}\), the data set \(\mathsf{y}\), and the sample variance \(\sigma\), the training procedure
Figure 14: Learned tensor components for the square duct case
is summarized briefly below.
1. Pre-training: To obtain the initial weight \(\mathbf{w}^{0}\) of the neural network, we pre-train the network to be equivalent to a linear eddy viscosity model such that \(g^{(1)}=-0.09\) and \(g^{(2-10)}=0\). The obtained weights \(\mathbf{w}^{0}\) are set as the initial value for model training [2].
2. Initial sampling: We assume that the weights are independent and identically distributed (i.i.d.) Gaussian random variables with mean \(\mathbf{w}^{0}\) and variance \(\sigma^{2}\). As such, we draw random samples of the weights through the formula \(\mathbf{w}_{j}=\mathbf{w}^{0}+\mathbf{\epsilon}_{j}\), where \(\mathbf{\epsilon}\sim\mathcal{N}(0,\sigma^{2})\).
3. Feature extraction: The velocity field \(\mathbf{u}\) and turbulence time scale \(\tau_{s}\) are used to compute the scalar invariants \(\mathbf{\theta}\) and the tensor bases \(\mathbf{T}\) based on the equations (3) and (6). The scalar invariants are normalized and then adopted as the inputs of the neural network. Further, the tensor bases are employed to construct the Reynolds stress by combining with the outputs of the neural network as illustrated in step 4.
4. Evaluation of Reynolds stress: The input features \(\mathbf{\theta}\) are propagated to the basis coefficient \(\mathbf{g}\) with each realization of the weights \(\mathbf{w}\). Then the Reynolds stress can be constructed by combining the coefficient \(g\) and the tensor basis \(\mathbf{T}\) based on Eq. (2).
5. Propagation to mean flow fields: The mean velocity is obtained by solving the RANS equations for each constructed Reynolds stress. Moreover, the turbulence kinetic energy and the dissipation rate are obtained by solving the turbulence transport equations.
6. Update weights of neural networks: The iterative ensemble Kalman method is used to update the weights of the neural network based on Eq. (7). In the scenario of multiple observations, e.g., the S809 airfoil case in this work, the data sets are randomly shuffled and then incorporated sequentially. Specifically, in the S809 airfoil case, the data from the two flow conditions are shuffled to generate a data set with random ordering. The observation data are then incorporated sequentially in the shuffled order, and the observations are reshuffled once the entire data set has been traversed. In addition, for each data set, the Kalman update is iterated in an inner loop, with the maximum number of inner iterations set to 3 based on our sensitivity study. Based on our numerical tests, the random data ordering helps escape local minima [29] that provide good predictions for one case but inferior results for other cases.
If the ensemble variance is smaller than the observation error or the maximum number of iterations is reached, the training is considered converged; otherwise, the procedure continues from Step 3 until the convergence criterion is met. A minimal sketch of the ensemble Kalman update at the core of this loop is given below.
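Since Eq. (7) is not reproduced in this appendix, the sketch uses a generic stochastic ensemble Kalman update consistent with the steps above; the variable names, the perturbed-observation form, and the forward map `H` (one RANS solve per weight sample) are our assumptions:

```python
import numpy as np

def initial_ensemble(w0, sigma, n_samples):
    """Step 2: i.i.d. Gaussian samples around the pre-trained weights."""
    return w0[:, None] + sigma * np.random.randn(len(w0), n_samples)

def enkf_update(W, H, y, R):
    """One Kalman update of the weight ensemble W (n_weights, n_samples).
    H maps a weight vector to predicted observations (steps 3-5)."""
    Hw = np.stack([H(W[:, j]) for j in range(W.shape[1])], axis=1)
    dW = W - W.mean(axis=1, keepdims=True)
    dH = Hw - Hw.mean(axis=1, keepdims=True)
    n = W.shape[1] - 1
    K = (dW @ dH.T / n) @ np.linalg.inv(dH @ dH.T / n + R)  # Kalman gain
    Y = y[:, None] + np.random.multivariate_normal(
        np.zeros(len(y)), R, W.shape[1]).T                  # perturbed data
    return W + K @ (Y - Hw)
```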
## Appendix B Predictive performance for unseen data
We show additional prediction results of the learned model on unseen data for both the S809 airfoil case and the square duct case in this section.
In the S809 airfoil case, the lift force measurements at angles of attack of \(8.2^{\circ}\) and \(14.24^{\circ}\) are used for training, which improves the predictions of the aerodynamic lift at unseen angles of attack. Here we show that the wall pressure prediction is also improved with the learned model at unseen angles of attack. The wall pressure predictions with the learned model at angles of attack of \(11^{\circ}\) and \(18^{\circ}\) are shown in Figure 15, compared to the experimental data and the prediction of the baseline \(k\)-\(\omega\) model. It can be seen that the baseline model underestimates the surface pressure on the suction side of the airfoil, which leads to large discrepancies in the predicted aerodynamic lift, as presented in Fig. 9. In contrast, the learned model significantly improves the prediction of the wall pressure distribution, and eventually the lift coefficient, at both angles compared to the baseline \(k\)-\(\omega\) model.
In the square duct case, the velocities along the profiles \(y/h=0.25,0.5,0.75\), and \(1.0\) are used for training and lead to local predictive improvements in both the velocity and the Reynolds stress. Here we provide the model prediction at four unseen locations, i.e., \(y/h=0.2,0.4,0.6\), and \(0.8\). The results are shown in Fig. 16, with a comparison to the DNS data and the baseline \(k\)-\(\varepsilon\) model. At these unobserved locations, the learned model predicts the velocity component \(u_{y}\) and the imbalance of the Reynolds normal stresses \(\tau_{yy}-\tau_{zz}\) well. The learned model also yields a non-zero shear component \(\tau_{yz}\), while the baseline \(k\)-\(\varepsilon\) model yields zero shear, which is qualitatively incorrect based on the DNS data.
Figure 15: Prediction of wall pressure coefficient \(C_{p}\) at \(\alpha=11^{\circ}\) and \(18^{\circ}\) with the learned model and the baseline \(k\)-\(\omega\) model compared to the experimental data [34] for the S809 airfoil case
## Acknowledgment
XLZ and GH are supported by the NSFC Basic Science Center Program for "Multiscale Problems in Nonlinear Mechanics" (No. 11988102). XLZ also acknowledges support from the National Natural Science Foundation of China (No. 12102435) and the China Postdoctoral Science Foundation (No. 2021M690154). HX acknowledges the support from the National Research Foundation of Korea (No. NRF-2021H1D3A2A01096296) during his sabbatical visit to Gwangju Institute of Science and Technology, where this work was performed.
Figure B.16: Prediction of velocity and Reynolds stress along profiles at \(y/H=0.2,0.4,0.6,0.8\) with comparison among the learned model, the baseline model, and the DNS data, for the square duct case
2304.05440 | PixelRNN: In-pixel Recurrent Neural Networks for End-to-end-optimized
Perception with Neural Sensors | Conventional image sensors digitize high-resolution images at fast frame
rates, producing a large amount of data that needs to be transmitted off the
sensor for further processing. This is challenging for perception systems
operating on edge devices, because communication is power inefficient and
induces latency. Fueled by innovations in stacked image sensor fabrication,
emerging sensor-processors offer programmability and minimal processing
capabilities directly on the sensor. We exploit these capabilities by
developing an efficient recurrent neural network architecture, PixelRNN, that
encodes spatio-temporal features on the sensor using purely binary operations.
PixelRNN reduces the amount of data to be transmitted off the sensor by a
factor of 64x compared to conventional systems while offering competitive
accuracy for hand gesture recognition and lip reading tasks. We experimentally
validate PixelRNN using a prototype implementation on the SCAMP-5
sensor-processor platform. | Haley M. So, Laurie Bose, Piotr Dudek, Gordon Wetzstein | 2023-04-11T18:16:47Z | http://arxiv.org/abs/2304.05440v1 | PixelRNN: In-pixel Recurrent Neural Networks for End-to-end-optimized Perception with Neural Sensors
###### Abstract
Conventional image sensors digitize high-resolution images at fast frame rates, producing a large amount of data that needs to be transmitted off the sensor for further processing. This is challenging for perception systems operating on edge devices, because communication is power inefficient and induces latency. Fueled by innovations in stacked image sensor fabrication, emerging sensor-processors offer programmability and minimal processing capabilities directly on the sensor. We exploit these capabilities by developing an efficient recurrent neural network architecture, PixelRNN, that encodes spatio-temporal features on the sensor using purely binary operations. PixelRNN reduces the amount of data to be transmitted off the sensor by a factor of \(64\times\) compared to conventional systems while offering competitive accuracy for hand gesture recognition and lip reading tasks. We experimentally validate PixelRNN using a prototype implementation on the SCAMP-5 sensor-processor platform.
## 1 Introduction
Increasingly, cameras on edge devices are being used for enabling computer vision perception tasks rather than for capturing images that look beautiful to the human eye. Applications include various tasks in virtual and augmented reality displays, wearable computing systems, drones, robotics, and the internet of things, among many others. For such edge devices, low-power operation is crucial, making it challenging to deploy large neural network architectures which traditionally leverage modern graphics processing units for inference.
A plethora of approaches have been developed in the "TinyML" community to address these challenges. Broadly speaking, these efforts focus on developing smaller [25] or more efficient network architectures, often by pruning or quantizing larger models [10]. Platforms like TensorFlow Lite Micro enable application developers to deploy their models directly to power-efficient microcontrollers which process data closer to the sensor. Specialized artificial intelligence (AI) accelerators, such as Movidius' Myriad vision processing unit, further reduce the power consumption. While these approaches can optimize the processing component of a perception system, they do not reduce the large amount of digitized sensor data that needs to be transmitted to the processor in the first place, via power-hungry interfaces such as MIPI-CSI, and stored in the memory. This omission is highly significant as data transmission and memory access are among the biggest power sinks in imaging systems [20]. This raises the question of how to design perception systems where sensing, data communication, and processing components are optimized end to end.
Efficient perception systems could be designed such that important task-specific image and video features are encoded directly on the imaging sensor using in-pixel processing, resulting in the sensor's output being significantly reduced to only these sparse features. This form of in-pixel feature encoding mechanism could significantly reduce the required bandwidth, thus reducing power consumption of data communication, memory management, and downstream processing. Event sensors [19] and emerging focal-plane sensor-processors [53] are promising hardware platforms for such perception systems because they can directly extract either temporal information or spatial features, respectively, on the sensor. These features can be transmitted off the sensor using low-power parallel communication interfaces supporting low bandwidths.
Our work is motivated by the limitations of existing feature extraction methods demonstrated on these emerging sensor platforms. Rather than extracting simple temporal gradients [19] or spatial-only features via convolutional neural networks (CNNs) [7, 6], we propose in-pixel recurrent neural networks (RNNs) that efficiently extract spatio-temporal features on sensor-processors for bandwidth-efficient perception systems. RNNs are state-of-the-art network architectures for processing sequences, such as video in computer vision tasks [31]. Inspired by the emerging paradigm of neural sensors [39], our in-pixel RNN
framework, dubbed PixelRNN, comprises a light-weight in-pixel spatio-temporal feature encoder. This in-pixel network is jointly optimized with a task-specific downstream network. We demonstrate that our architecture outperforms event sensors and CNN-based sensor-processors on perception tasks, including hand gesture recognition and lip reading, while drastically reducing the required bandwidth compared to traditional sensor-based approaches. Moreover, we demonstrate that PixelRNN offers better performance and lower memory requirements than larger RNN architectures in the low-precision settings of in-pixel processing.
Our work's contributions include
* the design and implementation of in-pixel recurrent neural networks for sensor-processors, enabling bandwidth-efficient perception on edge devices;
* the demonstration that our on-sensor spatio-temporal feature encoding maintains high performance while significantly reducing sensor-to-processor communication bandwidth on several tasks, including hand gesture recognition and lip reading;
* the experimental demonstration of the benefits of in-pixel RNNs using a prototype implementation on the SCAMP-5 sensor-processor.
## 2 Related Work
Performing feature extraction on power and memory constrained computing systems requires the union of multiple fields: machine learning, specialized hardware, and network compression techniques.
Machine Learning on the Edge.Edge computing devices are often subject to severe power and memory constraints, leading to various avenues of research and development. On the hardware side, approaches include custom application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), or other energy efficient AI accelerators. However, this does not address the issue of data transmission from imaging sensors, which is one of the main sources of power consumption [20]. To circumvent the memory constraints, network compression techniques are introduced. They fall into roughly five categories [10]: 1. parameter reduction by pruning redundancy [4, 46, 26, 57]; 2. low-rank parameter factorization [15, 28, 49]; 3. carefully designing structured convolutional filters [16, 48, 52]; 4. creating smaller models [22, 1, 8]; 5. parameter quantization [13, 42, 27, 50, 35, 58, 30]. In this work, we move compute directly onto the sensor and also apply ideas and techniques mentioned above.
Beyond Frame-based Sensing.Event-based cameras have been gaining popularity [19], as their readout is asynchronous and often sparse, triggered by pixel value changes above a certain threshold. However, these sensors are not programmable and perform data compression with a simple, fixed function. Another emerging class of sensors comprises focal-plane sensor-processors, also known as pixel processor arrays. Along with supporting traditional sensing capabilities, these sensors have a processing element embedded in each pixel. While conventional vision systems have separate hardware for sensing and computing, sensor-processors perform both tasks "in pixel," enabling efficient, low-latency, and low-power computation. Recently, sensor-processors with some programmability have emerged [9, 40, 37, 43, 55, 18]. Further advances in 3D fabrication techniques, including wafer-level hybrid bonding and stacked CMOS image sensors, set the stage for rapid development of increasingly capable programmable sensors.
In-pixel Perception.In the past few years, there has been a surge of advances in neural networks for vision tasks as well as an increasing desire to perform these tasks on constrained mobile and wearable computing systems. Sensor-processors are a natural fit for such systems as they can perform sophisticated visual computational tasks at significantly lower power than traditional hardware. Some early chips [36, 44] were based on implementing convolution kernels in a recurrent dynamical "Cellular Neural Network" model [12, 45]. In 2019, Bose et al. created "A Camera that CNNs", one of the first works to implement a deep convolutional neural network on the sensor [6]. Since then, there have been a number of other works on CNNs on programmable sensors [47, 14, 51, 21, 7, 32, 34, 33]. These works extract features in the spatial domain but miss a huge opportunity by failing to exploit temporal information. Purely CNN-based approaches do not capitalize on the temporal redundancy and information in the sequence of frames. Our work introduces light-weight extraction of spatio-temporal features, better utilizing the structure of the visual data, all while maintaining low bandwidth and high accuracy.
## 3 In-pixel Recurrent Neural Networks
Emerging sensor-processors with in-pixel processing enable the joint, end-to-end optimization of on-sensor networks and downstream networks running off-sensor. In this section, we describe a new on-sensor recurrent spatio-temporal feature encoder that significantly improves upon existing temporal- or spatial-only feature encoders for video processing, as shown in the next section. The proposed pipeline is illustrated in Figure 1.
### In-Pixel CNN-based Feature Encoder
Convolutional neural networks are among the most common network architectures in computer vision. They are
written as
\[\text{CNN}\left(\mathbf{x}\right)=\left(\phi_{n-1}\circ\phi_{n-2}\circ\ldots\circ\phi_{0}\right)\left(\mathbf{x}\right),\qquad\phi_{i}:\mathbf{x}_{i}\mapsto\psi\left(\mathbf{w}_{i}*\mathbf{x}_{i}+\mathbf{b}_{i}\right),\tag{1}\]
where \(\mathbf{w}_{i}*\mathbf{x}_{i}:\mathbb{R}^{N_{i}\times M_{i}\times C_{i}}\mapsto\mathbb{R}^{N_{i+1}\times M_{i+1}\times C_{i+1}}\) describes the multi-channel convolution of CNN layer \(i\) and \(\mathbf{b}_{i}\) is a vector containing bias values. Here, the input \(\mathbf{x}_{i}\) has \(C_{i}\) channels and a resolution of \(N_{i}\times M_{i}\) pixels, and the output of layer \(i\) is further processed by the nonlinear activation function \(\psi\).
The SCAMP-5 system used in this work lacks native multiplication operations at the pixel level. Due to this limitation, works storing network weights \(\mathbf{w}_{i}\) in pixel typically restrict themselves to using binary, \(\{-1,1\}\), or ternary \(\{-1,0,1\}\) values. This reduces all multiplications to sums or differences, which are highly efficient native operations.
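The following sketch (ours, not SCAMP-5 code) illustrates this point for a single-channel layer: with weights in \(\{-1,1\}\), the "convolution" (a cross-correlation, as is conventional in deep learning) needs only additions and subtractions:

```python
import numpy as np

def binary_conv2d(x, w):
    """x: (N, M) input plane; w: (k, k) weights in {-1, +1}; 'valid' output."""
    N, M = x.shape
    k = w.shape[0]
    out = np.zeros((N - k + 1, M - k + 1))
    for i in range(k):
        for j in range(k):
            patch = x[i:i + N - k + 1, j:j + M - k + 1]
            out += patch if w[i, j] > 0 else -patch  # add or subtract only
    return out
```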
### In-pixel Spatio-temporal Feature Encoding
Recurrent neural networks (RNNs) are state-of-the-art network architectures for video processing. Whereas a CNN considers each image in isolation, an RNN extracts spatio-temporal features to process video sequences more effectively. Network architectures for sensor-processors must satisfy two key criteria. First, they should be small and use low-precision weights. Second, they should consist largely of local operations, as the processors embedded within each pixel can only communicate with their direct neighbors (e.g., [9]).
To satisfy these unique constraints, we devise an RNN architecture that combines ideas from convolutional gated recurrent units (GRUs) [2] and minimal gated units [56]. The resulting simple, yet effective PixelRNN architecture, is written as
\[\mathbf{f}_{t}=\psi_{f}\left(\mathbf{w}_{f}*\text{CNN}\left(\mathbf{x}_{t}\right)+\mathbf{u}_{f}*\mathbf{h}_{t-1}\right),\tag{2}\]
\[\mathbf{h}_{t}=\mathbf{f}_{t}\odot\mathbf{h}_{t-1},\tag{3}\]
\[\mathbf{o}_{t}=\psi_{o}\left(\mathbf{w}_{o}*\text{CNN}\left(\mathbf{x}_{t}\right)+\mathbf{u}_{o}*\mathbf{h}_{t-1}\right),\tag{4}\]
where \(\mathbf{w}_{f}\), \(\mathbf{u}_{f}\), \(\mathbf{w}_{o}\), \(\mathbf{u}_{o}\) are small convolution kernels and \(\psi_{f}\) is either the sign (when working with binary constraints) or the sigmoid function (when working with full precision). We include an optional nonlinear activation function \(\psi_{o}\) and an output layer \(\mathbf{o}_{t}\) representing the values that are actually transmitted off sensor to the downstream network running on a separate processor. For low-bandwidth operation, the output layer \(o\) is only computed, and values transmitted off the sensor, every \(K\) frames. The output layer can optionally be omitted, in which case the hidden state \(\mathbf{h}_{t}\) is streamed off the sensor every \(K\) frames.
PixelRNN uses what is commonly known as a "forget gate", \(\mathbf{f}_{t}\), and a hidden state \(\mathbf{h}_{t}\), which are updated at each time step \(t\) from the input \(\mathbf{x}_{t}\). RNNs use forget gates to implement a "memory" mechanism that discards redundant spatial features over time. PixelRNN's forget gate is also motivated by this intuition, but our update mechanism in Eq. 3 is tailored to working with binary constraints using values \(\{-1,1\}\). In this case, Eq. 3 flips the sign of \(\mathbf{h}_{t-1}\) element-wise rather than decaying it over time. This mechanism works very well in practice when \(\mathbf{h}_{t}\) is re-initialized to all ones every 16 time steps.
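A minimal NumPy/SciPy sketch of this recurrence is given below; `correlate2d` stands in for the in-pixel convolutions of Eqs. (2)-(4), and the caller is responsible for re-initializing \(\mathbf{h}\) to all ones every 16 steps:

```python
import numpy as np
from scipy.signal import correlate2d

sign = lambda a: np.where(a >= 0, 1.0, -1.0)
conv = lambda x, w: correlate2d(x, w, mode='same')

def pixelrnn_step(x_feat, h_prev, w_f, u_f, w_o, u_o, emit=False):
    """x_feat = CNN(x_t); states and weights take values in {-1, +1}."""
    f = sign(conv(x_feat, w_f) + conv(h_prev, u_f))   # forget gate, Eq. (2)
    h = f * h_prev                                    # sign-flip memory, Eq. (3)
    o = conv(x_feat, w_o) + conv(h_prev, u_o) if emit else None  # Eq. (4)
    return h, o
```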
PixelRNN's architecture includes alternative spatial- or temporal-only feature extractors as special cases. For example, it is intuitive to see that it models a conventional CNN by omitting the recurrent units. We specifically write out the output gate in our definition to make it intuitive how PixelRNN also approximates a difference camera as a special case, which effectively implements a temporal-only feature
Figure 1: The perception pipeline of PixelRNN can be broken down into an on-sensor encoder and a task-specific decoder. On the left is the camera equipped with a sensor–processor, which offers processing and memory at the pixel level. The captured light is directly processed by a CNN that extracts spatial features, which are further processed by a convolutional recurrent neural network with built-in memory and temporal feature extraction. Here we show our PixelRNN variant on the right, \(\star\) being the convolution operator, \(\odot\) element-wise multiplication, and \(\psi\) a nonlinear function. Instead of sending out full \(256\times 256\) values at every time step, our encoder compresses by \(64\times\). While we show this pipeline for a lip reading task, the decoder can be designed for any perception task.
encoder. In this case, \(\mathbf{h}_{t}=\mathbf{x}_{t}\), \(\mathbf{w}_{o}=1\), \(\mathbf{u}_{o}=-1\), and \(\psi_{o}\left(x\right)=\begin{cases}-1&\text{for }x<-\delta\\ 0&\text{for }-\delta\leq x\leq\delta\\ 1&\text{for }\delta<x\end{cases}\) for some threshold \(\delta\). The image formation model of event cameras [19] is asynchronous and a difference camera represents only a crude approximation, but it serves as a pedagogically useful temporal-only encoder in the context of this discussion.
### Learning Quantized In-pixel Parameters
PixelRNN uses binary weights to reduce all multiplications to sums. To learn these parameters efficiently, we parameterize each of these values \(w\) using a continuous value \(\tilde{w}\) and a quantization function \(q\) such that
\[w=q\left(\tilde{w}\right),\quad q:\mathbb{R}\rightarrow\mathcal{Q}. \tag{5}\]
Here, \(q\) maps a continuous value to the closest discrete value in the feasible set \(\mathcal{Q}\), i.e., \(\{-1,1\}\).
One can employ surrogate gradient methods [3, 54], continuous relaxation of categorical variables using Gumbel-Softmax [29, 38], or other approaches to approximately differentiate through \(q\). For the binary weights we use \(w=q(\tilde{w})=\text{sign}(\tilde{w})\), and we found that approximating the gradient of the sign function with the derivative of \(\text{tanh}(mx)\) produces very good results:
\[\frac{\partial q}{\partial\tilde{w}}\approx m\cdot(1-\tanh^{2}(m\tilde{w})) \tag{6}\]
where \(m>0\) controls the steepness of the \(\tanh\) function, which is used as a differentiable proxy for \(q\approx\tanh(m\tilde{w})\) in the backward pass. The larger \(m\) is, the more it resembles the sign function and the more the gradient resembles the delta function.
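In an automatic differentiation framework, this surrogate can be written as a custom function; below is a PyTorch sketch (the steepness value \(m=5\) is an illustrative choice of ours, not the paper's):

```python
import torch

class BinaryQuantize(torch.autograd.Function):
    """Forward: w = sign(w_tilde); backward: tanh-derivative surrogate (Eq. 6)."""
    @staticmethod
    def forward(ctx, w_tilde, m=5.0):
        ctx.save_for_backward(w_tilde)
        ctx.m = m
        # map to {-1, +1}; torch.sign would return 0 at exactly 0
        return torch.where(w_tilde >= 0,
                           torch.ones_like(w_tilde),
                           -torch.ones_like(w_tilde))

    @staticmethod
    def backward(ctx, grad_output):
        (w_tilde,) = ctx.saved_tensors
        surrogate = ctx.m * (1.0 - torch.tanh(ctx.m * w_tilde) ** 2)
        return grad_output * surrogate, None  # no gradient for m
```

Calling `BinaryQuantize.apply(w_tilde)` then yields binary weights in the forward pass while gradients flow to the continuous parameters \(\tilde{w}\).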
### Implementation Details
We implement our per-frame CNN feature encoder with either 1 or 2 layers and process images at a resolution of \(64\times 64\) pixels by downsampling the raw sensor images before feeding them into the CNN. In all experiments, our PixelRNN transmits data off the sensor every 16 frames. Thus, we achieve a reduction in bandwidth by a factor of 64\(\times\) compared to the raw data. In all of our experiments, we set the function \(\psi_{o}\) to the identity function.
Additional implementation details are found in the supplement and source code will be made public for enhanced reproducibility.
## 4 Experiments
Evaluating Feature Encoders.As discussed in the previous section, RNNs require a CNN-based feature encoder as part of their architecture. In-pixel CNNs have been described in prior work [7, 32, 34], albeit not in the context of video processing with RNNs.
Table 1 summarizes the simulation performance of re-implementations of various CNN architectures on image classification using the MNIST and CIFAR-10 datasets, and on hand gesture recognition from individual images.
Bose et al. [6, 7] describe a 2-layer CNN with ternary weights. The two works share the same architecture but differ drastically in the sensor-processor implementation. Liu et al. [32, 34] describe 1- and 3-layer CNNs with binary weights using a different architecture than Bose while relying on similar sensor-processor implementation concepts. Our feature encoder is a binary 2-layer variant of Bose et al.'s CNN. Each layer has 16 kernels of size \(5\times 5\), each followed by a non-linearity and \(4\times 4\) maxpooling. The 16 \(16\times 16\) channels are then concatenated into a single \(64\times 64\) image to serve as the input to the next convolutional layer or to the PixelRNN. All of these CNNs perform roughly on par, with some performing better at certain tasks than others. Ours strikes a good balance between accuracy and model size. We do not claim this CNN as a contribution of our work, but include this brief comparison for completeness.
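A functional sketch of one such encoder layer is shown below (our NumPy approximation of the dataflow, with binarization assumed to threshold at zero):

```python
import numpy as np
from scipy.signal import correlate2d

def encoder_layer(x, kernels):
    """x: (64, 64) binary input; kernels: (16, 5, 5) binary weights.
    Returns a (64, 64) plane tiling 16 max-pooled 16x16 feature maps."""
    maps = []
    for w in kernels:
        f = np.maximum(correlate2d(x, w, mode='same'), 0.0)  # conv + ReLU
        p = f.reshape(16, 4, 16, 4).max(axis=(1, 3))         # 4x4 max-pool
        maps.append(np.where(p > 0, 1.0, -1.0))              # binarize
    tiles = np.asarray(maps).reshape(4, 4, 16, 16)           # 4x4 grid
    return np.block([[tiles[i, j] for j in range(4)] for i in range(4)])
```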
Baseline Architectures.We use several baselines for our analysis. The RAW camera mode simply outputs the input frame at every time step and represents the naive imaging approach. The simulated difference camera represents a simple temporal-only feature encoder. We also include several RNN architectures, including long short-term memory (LSTM), gated recurrent unit (GRU), minimal gated unit (MGU), a simple RNN (SRNN), and our PixelRNN. Moreover, we evaluated each of the RNN architectures using 1-layer and 2-layer CNN feature encoders. The output of all RNNs is read from the sensor only once every 16 time steps. All baselines represent options for in-pixel processing, and their respective output is streamed off the sensor. In all cases, a single fully-connected network layer processes this output on a downstream processor to compute the final classification
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline Model Name & MNIST & CIFAR-10 & Hand & \# Model & Size \\ & & & Gesture & Params & (MB) \\ \hline Bose [7, 6] & 95.0\% & 39.8\% & 43.4\% & 257 & 0.05 \\ Liu 2020 [32] & 80.0\% & 32.5\% & 57.4\% & 258 & 0.03 \\ Liu 2022 [34] & **95.1\%** & 32.6\% & 60.2\% & \(2,374\) & 0.30 \\ Our CNN & 90.9\% & **43.1\%** & **68.1\%** & 802 & 0.10 \\ \hline \hline \end{tabular}
\end{table}
Table 1: **Comparing CNN Feature Encoders. We simulated different in-pixel CNNs using image classification on MNIST, CIFAR-10, and the Cambridge hand gesture recognition based on different implementations. All CNN architectures perform roughly on par with our binary 2-layer CNN encoder striking a good balance between accuracy and model size. The model size is computed by multiplying the number of model parameters by the quantization of the values.**
scores. This fully-connected layer is trained end to end for each of the baselines. While simple, it is effective and could be customized for a specific task.
Additional details on these baselines, including formulas and training details, are listed in the supplement. Table 2 shows an overview of these, listing the number of model parameters and the readout bandwidth for each of them.
Datasets.For the hand gesture recognition task, we use the Cambridge Hand Gesture Recognition dataset. This dataset consists of 900 video clips of 9 gesture classes; each class contains 100 videos. For the lip reading task, we use the Tulips1 dataset, a small audiovisual database of 12 subjects saying the first 4 digits in English, introduced in [41].
Accuracy vs. Memory.In Figure S1, we evaluate the accuracy of several baseline architectures on two tasks: hand gesture recognition (left) and lip reading (right). We compare the baselines described above, each with 1- and 2-layer CNN encoders and binary or full 32-bit floating point precision. For the full-precision networks, PixelRNN achieves an accuracy comparable to the best models, but it provides one of the lowest memory footprints. Comparing the networks with binary weights, PixelRNN also offers the best accuracy with a memory footprint comparable to the next best method. Surprisingly, larger architectures, such as GRUs and LSTMs, do not perform well when used with binary weights. This can be explained by the increasing difficulty of reliably training increasingly large networks with binary parameter constraints. Leaner networks, such as SRNN and PixelRNN, can be trained more robustly and reliably in these settings. Note that the memory plotted on the x-axis represents all intermediate features, per pixel, that need to be stored during a single forward pass through the RNN. We do not count the network parameters in this plot, because they do not dominate the memory requirements and can be shared among the pixels.
Constraints of the Experimental Platform.Our hardware platform, SCAMP-5, provides the equivalent of 32 bytes of memory per pixel for storing intermediate features. This limit is illustrated as dashed vertical lines in Figure S1, indicating that only low-precision RNN networks are feasible on this platform. The available memory is computed as follows. SCAMP-5 has 6 analog registers per pixel, each of which we
\begin{table}
\begin{tabular}{c c c} \hline \hline Model Name & \# Model Parameters & Readout Bandwidth \\ \hline \hline \end{tabular}
\end{table}
Table 2: Overview of the baseline architectures, listing the number of model parameters and the readout bandwidth of each.
assume is equivalent to 1 byte: 2 registers store the model weights, 2 are reserved for computation, leaving 2 bytes per pixel for the intermediate features. SCAMP-5 consists of an array of \(256\times 256\) pixel processors; however, our approach operates on a smaller effective image size of \(64\times 64\). This allows us to consider a single "pixel" to comprise a block of \(4\times 4\) pixel elements, increasing the effective memory per pixel to 32 bytes.
Accuracy vs. Bandwidth.We select a readout bandwidth of 4,096 values (every 16 frames) based on the available bandwidth of our hardware platform, the SCAMP-5 sensor. In Figure 3 we evaluate the effect of further reducing this bandwidth on the accuracy for the PixelRNN architecture. Bandwidth is controlled using a max-pooling layer operating at differing sizes from \(1\times 1\) through \(8\times 8\) and then at multiples of 8 up to \(64\times 64\) before inputting the intensity images to PixelRNN. The resulting output bandwidths range between 1 to 4096. We ran each experiment ten times and the best performances of each are plotted for hand gesture recognition and lip reading. We observe that the bandwidth could be further reduced to about 1,000 values every 16 frames without significantly degrading the accuracy on these datasets. However, decreasing the bandwidth beyond this also reduces the accuracy.
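As a quick sanity check on these numbers, the readout bandwidth is simply the pooled feature-map area per 16-frame window; a minimal sketch of this arithmetic (the feature side of 64 and the pooling sizes are taken from the text above):

```python
# Values read out per 16-frame window for a 64x64 feature map after
# k x k max-pooling (matches the 1..4096 range reported above).
def readout_bandwidth(pool_size: int, feature_side: int = 64) -> int:
    side = feature_side // pool_size   # spatial side after pooling
    return side * side                 # one value per surviving location

for k in (1, 2, 4, 8, 16, 32, 64):
    print(f"{k}x{k} pooling -> {readout_bandwidth(k)} values / 16 frames")
# 1x1 -> 4096 (the operating point used here), 8x8 -> 64, 64x64 -> 1
```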
## 5 Experimental Prototype
### Pixel-Level Programmable Sensors
SCAMP-5 [9] is one of the emerging programmable sensors representative of the class of focal-plane sensor-processors (FPSP). Unlike conventional image sensors, each of the \(256\times 256\) pixels is equipped with an arithmetic logic unit, 6 local analog and 13 digital memory registers, control and I/O circuitry, and access to certain registers of the four neighboring pixels. SCAMP-5 operates in single-instruction multiple-data (SIMD) mode with capabilities to address individual pixels, patterns of pixels, or the whole array. It features mixed-mode operation execution that allows for low-power compute prior to A/D conversion. Most importantly, SCAMP-5 is programmable.
### Implementation of PixelRNN on SCAMP-5
The pipeline for our prototype feature extractor is shown in Figure 4. Because of the memory architecture on SCAMP-5, performing multiple convolutions and different updates of gates and states require us to split the focal plane into 16 parallel processors with a smaller effective image size. The input image is binarized, downsampled to \(64\times 64\), and duplicated out to a \(4\times 4\) grid of parallel processor elements (PE) of size \(64\times 64\). Each PE performs a convolution with a \(5\times 5\) kernel, yielding 16 feature maps per convolutional layer. The 16 \(64\times 64\) features undergo a ReLU activation, maxpooling and binarization to 16 \(16\times 16\). These are then concatenated to create a single \(64\times 64\) input to the RNN or another convolutional layer if desired. This process makes use of the image transformation methods for SCAMP-5 introduced by [5]. Our RNN uses both the output of the CNN and the hidden state to update the hidden state and compute an output every 16 time steps. The RNN gates are calculated via convolution and element-wise multiplication. To suit the SCAMP-5 architecture, we limited operations to addition, XOR, and negation, and trained a binary version of PixelRNN, binarizing weights and features to -1 and 1. Instead of multiplications, we now just need addition and subtraction.
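For concreteness, a NumPy/SciPy sketch of one PixelRNN step as described above; this mirrors the gate arithmetic only, not the XOR-based register-level implementation on SCAMP-5, and the `same`-padding convolutions are an assumption about boundary handling:

```python
import numpy as np
from scipy.signal import correlate2d

def binarize(x):
    # Map to {-1, +1}, as used for both weights and features.
    return np.where(x >= 0, 1, -1).astype(np.int8)

def pixelrnn_step(x_t, h_prev, w_f, u_f, w_o, u_o):
    """One step: the gates are convolutions of the CNN feature x_t and the
    previous hidden state h_prev with their {-1,+1} gate kernels."""
    f_t = binarize(correlate2d(x_t, w_f, mode="same")
                   + correlate2d(h_prev, u_f, mode="same"))
    o_t = (correlate2d(x_t, w_o, mode="same")
           + correlate2d(h_prev, u_o, mode="same"))  # read out every 16 steps
    h_t = h_prev * f_t   # element-wise update of the hidden state
    return h_t, o_t
```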
Memory Allocation.SCAMP-5's analog and digital registers are limited in number and present different challenges. Analog registers cannot hold values for long periods before decaying. The decay is exacerbated if one moves information from pixel to pixel such as in shifting an image. We found using analog registers with a routine to refresh their content to a set of quantized values inspired by [7] helped circumvent some of the challenges. This allowed the storage of binary weights for convolutions and the hidden state for prolonged periods of time. The remaining memory registers were used for performing computations and storing intermediate feature maps.
Convolution Operation.A single pixel cannot hold all weights of a single kernel, so the weights are spread across a single analog register plane as shown in Figure 4. To perform a convolution, SCAMP-5 iterates through all 25
Figure 3: **Bandwidth Analysis. We can control the bandwidth of data read off the sensor using increasingly larger max-pooling layers before inputting the intensity images to PixelRNN at the cost of decreased accuracy.**
weights in the \(5\times 5\) kernel, each time multiplying it with the whole image and adding to a running sum. The image is then shifted, the next weight fills the register plane, and the process continues until the feature is computed. We include a detailed diagram in the supplement and more information can be found in [7].
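In software, the same shift-and-accumulate procedure looks as follows; `np.roll` stands in for the register-plane image shifts (with wrap-around instead of the sensor's border behavior, so this is a sketch rather than a bit-exact model):

```python
import numpy as np

def scamp_style_conv(img, kernel):
    """Convolution computed one kernel weight at a time: multiply the
    *whole* (shifted) image by a single weight, add it to a running sum,
    then move on to the next of the 25 weights."""
    k = kernel.shape[0]                 # 5 for the 5x5 kernels used here
    r = k // 2
    acc = np.zeros(img.shape, dtype=np.int32)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            acc += int(kernel[dy + r, dx + r]) * shifted
    return acc
```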
RNN Operation.Figure 5 shows the layout of the sequence of operations in the RNN. Each pixel contains 6 analog registers, named A, B, C, D, E, and F. We refer to register plane A as all the A registers across the entire image sensor. In Figure 5, the \(256\times 256\) pixels are split into a \(4\times 4\) grid of larger processor elements of size \(64\times 64\). In register plane A, we take the output from the CNN and the previous hidden state and duplicate them to two other PEs in plane A. Register plane B holds the corresponding weights \(\textbf{w}_{f}\), \(\textbf{u}_{f}\), \(\textbf{w}_{o}\), \(\textbf{u}_{o}\) for the required convolution operators. 4 convolutions are run simultaneously on one register plane. The outputs in plane B are shifted and added, and a binarization is applied to get \(\textbf{f}_{t}\). This is then used to update the hidden state via element-wise multiplication every time step. Every 16 time
Figure 4: This pipeline shows the sequence of operations from left to right. The input image is downsampled, duplicated, and binarized. Stored convolutional weights perform 16 convolutions, to produce 16 feature maps in the \(4\times 4\) grid of processor elements. A ReLU activation is applied, followed by max-pooling, downsampling, and binarization. This can either be fed to another CNN layer or to the input of the RNN. The RNN takes in the output of the CNN and the previous hidden state \(\textbf{h}_{t-1}\). The hidden state \(\textbf{h}_{t}\) is updated every timestep. The output \(\textbf{o}_{t}\) is read out every 16 frames, yielding 64\(\times\) decrease in bandwidth.
Figure 5: To implement the PixelRNN on SCAMP-5, the image plane is split into a \(4\times 4\) grid of processor elements shown above. Two analog register planes are used, Register planes A and B. Above, we show the sequence of operations from left to right. The input from the CNN and the previous hidden state are duplicated in A. These 4 PEs are convolved \(*\) with the corresponding gate weights stored in plane B. The resulting convolutions in the second column are then added to compute the output \(\textbf{o}_{t}\) and the forget gate \(\textbf{f}_{t}\). Note that an in-place binarization is applied to \(\textbf{f}_{t}\). The hidden state \(\textbf{h}_{t}\) is updated via an element-wise multiplication \(\odot\) of \(\textbf{h}_{t-1}\) and \(\textbf{f}_{t}\).
steps, SCAMP-5 outputs the \(64\times 64\) image corresponding to the output gate \(\mathbf{o}_{t}\). Our spatio-temporal feature encoder distills the salient information while giving a 64\(\times\) decrease in bandwidth.
Accounting for Analog Uncertainty.As with all analog compute, a certain amount of noise should be expected. As such, we treat each of SCAMP-5's analog registers as containing values split across equally spaced discrete intervals. During convolutions, the binary image and binary weights are XNOR-ed. Depending on the result, we either add or subtract an analog value approximately equal to \(10\). As the analog registers have an approximate range of values \(-128\) to \(127\), the interval cannot be increased without risking saturation during convolutions. However, there is uncertainty when it comes to the precision and uniformity of the intervals. Along with decay, this uncertainty, spatial non-uniformity, and noise affect the operations that follow. In the RNN, these effects accumulate for 16 frames, leading to a significant amount of noise. To account for these effects, we trained binary models in simulation with varying amounts of added Gaussian noise in the CNN and the RNN prior to quantization of the features. We also fine-tuned the off-sensor layer on the training set outputs from SCAMP-5.
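A sketch of this noise-aware training in PyTorch; the straight-through gradient is our assumption (the text only states that Gaussian noise is added before quantization), and `sigma` is a free noise-level parameter:

```python
import torch

class NoisyBinarize(torch.autograd.Function):
    """Binarize after injecting Gaussian noise that models the analog
    register uncertainty; backward passes the gradient straight through."""
    @staticmethod
    def forward(ctx, x, sigma):
        noisy = x + sigma * torch.randn_like(x)
        return torch.where(noisy >= 0, torch.ones_like(x), -torch.ones_like(x))

    @staticmethod
    def backward(ctx, grad_out):
        return grad_out, None  # straight-through estimator; no grad for sigma

# Usage inside the simulated CNN/RNN, e.g. f_t = NoisyBinarize.apply(pre_f, 0.1)
```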
Assessment.To test our prototype, we uploaded the video datasets to SCAMP-5 in sequence and saved out the outputs every 16 frames (see supplement for additional details). In our current implementation, it takes roughly 95 ms to run a single frame through the on-sensor encoder. The \(64\times 64\) output region then goes through the off-sensor linear layer decoder. We evaluate the performance using the models trained with and without noise. The results shown in Table 3 highlight the benefits of training with noise, as well as the difficulty that comes with working with analog registers. We see that even running the same train set through SCAMP-5 two separate times does not result in the same performance. Without the noise-trained model, we reached 61.1% on the hand gesture recognition test set. Performance improved to 73.3% when we used the weights from training with noise. Similarly, the performance on lip reading was boosted to 70.0% when using a model trained on noise. While added noise during training helps, the noise characteristics of SCAMP-5 are much more complex. Such issues may be mitigated in future sensor-processors with sufficient digital registers to avoid having to rely upon the use of analog computation. While limited by noise, we demonstrated the first in-pixel spatio-temporal encoder for bandwidth compression.
## 6 Discussion
In the traditional computer vision pipeline, full-frame images are extracted from the camera and are fed into machine learning models for different perception tasks. While completely viable in systems not limited by compute, memory, or power, many edge devices do not offer this luxury. For systems like AR/VR devices, robotics, and wearables, low-power operation is crucial, and even more so if the system requires multiple cameras. The community has already been working on creating smaller, more efficient models, as well as specialized accelerators. However, the communication between camera and processor that consumes nearly 25% of power in these systems [20] has not been optimized. In this work, we demonstrate how running a simple in-pixel spatio-temporal feature extractor can decrease the bandwidth, and hence power associated with readout, by 64\(\times\). Even with highly quantized weights and signals and a very simple decoder, we still maintain good performance on hand gesture recognition and lip reading. We studied different RNN architectures and presented PixelRNN, which performs well in highly quantized settings; we studied just how small we could make the bandwidth before affecting performance (Figure 3); and we implemented a physical prototype with one of the emerging sensors, SCAMP-5, which is paving the way for future sensors.
Limitations and Future Work.One of the biggest challenges of working with SCAMP-5 is accounting for the analog noise, but the platform offers great flexibility to program the data movement between pixels to implement prototypes. While SCAMP-5 offers many exciting capabilities, it is still limited in memory and compute as all circuitry needs to fit in a single pixel. Until recently, adding circuitry or memory to image sensors compromised the fill factor, which worsens the imaging performance and limits achievable image
\begin{table}
\begin{tabular}{c c c c} \hline \hline & Train Set & Train Set & Test Set \\ & Accuracy run 1 & Accuracy run 2 & Accuracy \\ \hline \hline
**Hand Gesture Recognition** & & & \\ Noise-free Model & 100.0\% & 64.0\% & 61.1\% \\ Model trained with noise & 95.3\% & 70.8\% & 73.3\% \\ \hline
**Lip Reading** & & & \\ Noise-free Model & 100.0\% & 78.9\% & 50.0\% \\ Model trained with noise & 98.5\% & 84.85\% & 70.0\% \\ \hline \hline \end{tabular}
\end{table}
Table 3: **Experimental Results.** We run the training sets through the SCAMP-5 implementation twice. The first outputs are used to fine-tune the off-sensor linear layer decoder. In theory, the train set accuracy of the second runs should be close. With the noise accumulated through the analog compute, however, the SCAMP-5 implementation is not deterministic. Adding Gaussian noise during training increases the test-set performance.
resolution. With the new developments in stacked CMOS image sensors, future sensors will be able to host much more compute and memory on the sensor plane, allowing us to design more expressive models and to apply tools like architecture search to optimize where compute happens in a system [17]. Until then, we are limited to light-weight in-pixel models. Noise related to analog compute can also be mitigated by switching over to digital compute. Our work is not only applicable to SCAMP-5 but to all future focal-plane processors.
Conclusion.Emerging image sensors offer programability and compute directly in the pixels. Our work is the first to demonstrate how to capitalize on these capabilities using efficient RNN architectures, decreasing the bandwidth of data that needs to be read off the sensor as well as stored and processed by downstream application processors by a factor of 64\(\times\). We believe our work paves the way for other inference tasks of future artificial intelligence-driven sensor-processors.
## Acknowledgements
This project was in part supported by the National Science Foundation and Samsung.
|
2307.13014 | Graph Neural Networks For Mapping Variables Between Programs -- Extended
Version | Automated program analysis is a pivotal research domain in many areas of
Computer Science -- Formal Methods and Artificial Intelligence, in particular.
Due to the undecidability of the problem of program equivalence, comparing two
programs is highly challenging. Typically, in order to compare two programs, a
relation between both programs' sets of variables is required. Thus, mapping
variables between two programs is useful for a panoply of tasks such as program
equivalence, program analysis, program repair, and clone detection. In this
work, we propose using graph neural networks (GNNs) to map the set of variables
between two programs based on both programs' abstract syntax trees (ASTs). To
demonstrate the strength of variable mappings, we present three use-cases of
these mappings on the task of program repair to fix well-studied and recurrent
bugs among novice programmers in introductory programming assignments (IPAs).
Experimental results on a dataset of 4166 pairs of incorrect/correct programs
show that our approach correctly maps 83% of the evaluation dataset. Moreover,
our experiments show that the current state-of-the-art on program repair,
greatly dependent on the programs' structure, can only repair about 72% of the
incorrect programs. In contrast, our approach, which is solely based on
variable mappings, can repair around 88.5%. | Pedro Orvalho, Jelle Piepenbrock, Mikoláš Janota, Vasco Manquinho | 2023-07-24T16:14:32Z | http://arxiv.org/abs/2307.13014v2 | # Graph Neural Networks For Mapping Variables Between Programs - Extended Version
###### Abstract
Automated program analysis is a pivotal research domain in many areas of Computer Science -- Formal Methods and Artificial Intelligence, in particular. Due to the undecidability of the problem of program equivalence, comparing two programs is highly challenging. Typically, in order to compare two programs, a relation between both programs' sets of variables is required. Thus, mapping variables between two programs is useful for a panoply of tasks such as program equivalence, program analysis, program repair, and clone detection. In this work, we propose using graph neural networks (GNNs) to map the set of variables between two programs based on both programs' abstract syntax trees (ASTs). To demonstrate the strength of variable mappings, we present three use-cases of these mappings on the task of _program repair_ to fix well-studied and recurrent bugs among novice programmers in introductory programming assignments (IPAs). Experimental results on a dataset of 4166 pairs of incorrect/correct programs show that our approach correctly maps 83% of the evaluation dataset. Moreover, our experiments show that the current state-of-the-art on program repair, greatly dependent on the programs' structure, can only repair about 72% of the incorrect programs. In contrast, our approach, which is solely based on variable mappings, can repair around 88.5%.
## 1 Introduction
The problem of program equivalence, i.e., deciding if two programs are equivalent, is undecidable [33, 6]. On that account, the problem of repairing an incorrect program based on a correct implementation is very challenging. In order to compare both programs, i.e., the correct and the faulty implementation, program repair tools first need to find a relation between both programs' sets of variables. Besides _program repair_[1], the task of mapping variables between programs is also important for _program analysis_[41], _program equivalence_[8], _program clustering_[27, 40], _program synthesis_[30], _clone detection_[15], and _plagiarism detection_[34].
Due to a large number of student enrollments every year in programming courses, providing feedback to novice students in _introductory programming assignments_ (IPAs) requires substantial time and effort by the faculty [42]. Hence, there is an increasing need for systems capable of providing automated, comprehensive, and personalized feedback to students in programming assignments [12, 10, 11, 1]. _Semantic program repair_ has become crucial to provide feedback to each novice programmer by checking their IPA submissions using a pre-defined test suite. Semantic program repair frameworks use a correct implementation, provided by the lecturer or submitted by a previously enrolled student, to repair a new incorrect student's submission. However, the current state-of-the-art tools on semantic program repair [10, 1] for IPAs have two main drawbacks: (1) they require a perfect match between the control flow graphs (loops, functions) of both programs, the correct and the incorrect one; and (2) they require a bijective relation between both programs' sets of variables. Hence, if one of these requirements is not satisfied, then these tools cannot fix the incorrect program with the correct one.
For example, consider the two programs presented in Figure 1. These programs are students' submissions for the IPA of printing all the natural numbers from \(1\) to a given number \(n\). The program in Listing 1 is a semantically correct implementation that uses a for-loop to iterate all the natural numbers until \(n\). The program in Listing 2 uses a while-loop and an auxiliary function. This program is semantically incorrect since the student forgot to initialize the variable \(j\), a frequent bug among novice programmers called _missing expression/assignment_[36]. However, in this case, state-of-the-art program repair tools [10, 1] cannot fix the buggy program, since the control flow graphs do not match either due to using different loops (for-loop vs. while-loop) or due to the use of an auxiliary function. Thus, these program repair tools cannot leverage on the correct implementation in Listing 1 to repair the faulty program in Listing 2.
To overcome these limitations, in this paper, we propose a novel graph program representation based on the structural information of the _abstract syntax trees_ (ASTs) of imperative programs to learn how to map the set of variables between two programs using _graph neural networks_ (GNNs). Additionally, we present use-cases of program repair where these variable mappings can be applied to repair common bugs in incorrect students' programs that previous tools are not always capable of handling. For example, consider again the two programs presented in Figure 1. Note that having a mapping between both programs' variables (e.g. [n:l, i:j]) lets us reason about, on the level of expressions, which program fixes one can perform on the
faulty program in Listing 2. In this case, when comparing variable i with variable j, one would find the _missing assignment_, i.e., j = 1.
Another useful application for mapping variables between different programs is fault localization. There is a body of research on fault localization [16; 21; 22; 23], that requires the usage of assertions in order to verify programs. Variable mappings can be helpful in sharing these assertions among different programs. Additionally, several program repair techniques (e.g., SearchRepair[18], Clara[10]) enumerate all possible mappings between two programs' variables during the search for possible fixes, using a correct program [10] or code snippets from a database [18]. Thus, variable mappings can drastically reduce the search space, by pruning all the other solutions that use a different mapping.
In programming courses, unlike in production code, typically, there is a reference implementation for each programming exercise. This comes with the challenge of comparing different names and structures between the reference implementation and a student's program. To deal with this challenging task, we propose to map variables between programs using GNNs. Therefore, we explore three tasks to illustrate the advantages of using variable mappings to repair some frequent bugs without considering the incorrect/correct programs' control flow graphs. Hence, we propose to use our variable mappings to fix bugs of: _wrong comparison operator_, _variable misuse_, and _missing expression_. These bugs are recurrent among novice programmers [36] and have been studied by prior work in the field of automated program repair [3; 31; 38; 4].
Experiments on 4166 pairs of incorrect/correct programs show that our GNN model correctly maps 83% of the evaluation dataset. Furthermore, we also show that previous approaches can only repair about 72% of the dataset, mainly due to control flow mismatches. On the other hand, our approach, solely based on variable mappings, can fix 88.5%.
The main contributions of this work are:
* A novel graph program representation that is agnostic to the names of the variables and for each variable in the program contains a representative variable node that is connected to all the variable's occurrences;
* We propose to use GNNs for mapping variables between programs based on our program representation, ignoring the variables' identifiers;
* Our GNN model and the dataset used for this work's training and evaluation, will be made open-source and publicly available on GitHub: [https://github.com/pmorvalho/ecai23-GNNs-for-mapping-variables-between-programs](https://github.com/pmorvalho/ecai23-GNNs-for-mapping-variables-between-programs).
The structure of the remainder of this paper is as follows. First, Section 2 presents our graph program representations. Next, Section 3 describes the GNNs used in this work. Section 4 introduces typical program repair tasks, as well as our program repair approach using variable mappings. Section 5 presents the experimental evaluation where we show the effectiveness of using GNNs to produce correct variable mappings between programs. Additionally, we compare our program repair approach based on the variable mappings generated by the GNN with state-of-the-art program repair tools. Finally, Section 6 describes related work, and the paper concludes in Section 7.
## 2 Program Representations
We represent programs as directed graphs so the information can propagate in both directions in the GNN. These graphs are based on the programs' _abstract syntax trees_ (ASTs). An AST is described by a set of nodes that correspond to non-terminal symbols in the programming language's grammar and a set of tokens that correspond to terminal symbols [14]. An AST depicts a program's grammatical structure [2]. Figure 2(a) shows the AST for the small code snippet presented in Listing 3.
Regarding our graph program representation, firstly, we create a unique node in the AST for each distinct variable in the program and connect all the variable occurrences in the program to the same unique node. Figure 2(b) shows our graph representation for the small code snippet presented in Listing 3. Observe that our representation uses a single node for each variable in the program, the green nodes a and b. Moreover, we consider five types of edges in our representation: child, sibling, read, write, and chronological edges. _Child edges_ correspond to the typical edges in the AST representation that connect each parent node to its children. Child edges are bidirectional in our representation. In Figure 2(b), the black edges correspond to child edges. _Sibling edges_ connect each child to its sibling successor. These edges denote the order of the arguments for a given node and have been used in other program representations [3]. Sibling edges allow the program representation to differentiate between different arguments when the order of the arguments
Figure 1: Two implementations for the IPA of printing all the natural numbers from 1 to a given number \(n\). The program in Listing 2 is semantically incorrect since the variable j, which is the variable being used to iterate over all the natural numbers until the number l, is not being initialized, i.e., the program has a bug of _missing expression_. The mapping between these programs’ sets of variables is [n:l;i:j].
is important (e.g. a binary operation such as \(\leq\)). For example, consider the node that corresponds to the operation \(\sigma(A_{1},A_{2},\ldots,A_{m})\). The parent node \(\sigma\) is connected to each one of its children by a child edge e.g. \(\sigma\leftrightarrow A_{1},\sigma\leftrightarrow A_{2},\ldots,\sigma\leftrightarrow A_{m}\). Additionally, each child is connected to its successor by a sibling edge e.g. \(A_{1}\to A_{2},A_{2}\to A_{3},\ldots,A_{m-1}\to A_{m}\). In Figure 2(b), the red dashed edges correspond to sibling edges.
Regarding the _write and read edges_, these edges connect the ID nodes with the unique nodes corresponding to some variable. Write edges are connections between an ID node and its variable node. This edge indicates that the variable is being written. Read edges are also connections between an ID node and its variable node, although these edges indicate that the variable is being read. In Figure 2(b), the blue dashed edge corresponds to a write edge while the green dashed edges correspond to read edges. Lastly, _chronological edges_ establish an order between all the ID nodes connected to some variable. These edges denote the order of the ID nodes for a given variable node. For example, in Figure 2(b), the yellow dashed edge corresponds to a chronological edge between the ID nodes of the variable \(\mathtt{a}\). Besides the sibling and the chronological edges, all the other edges are bidirectional in our representation.
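To make the construction concrete, below is a minimal sketch of building these nodes and edges from a C program, assuming a pycparser-style AST; the relation indices follow the mapping used in the ablation study ({0: AST/child, 1: sibling, 2: write, 3: read, 4: chronological}), writes are detected here only for assignment lvalues, and the released implementation may differ in such details:

```python
from pycparser import c_parser, c_ast

def build_graph(src: str):
    ast = c_parser.CParser().parse(src)
    nodes, edges = [], []          # node labels; (src, dst, relation) triples
    var_nodes, last_occ = {}, {}   # one anonymized unique node per variable

    def new_node(label):
        nodes.append(label)
        return len(nodes) - 1

    def visit(node, writing=False):
        nid = new_node(type(node).__name__)
        if isinstance(node, c_ast.ID):                  # variable occurrence
            if node.name not in var_nodes:
                var_nodes[node.name] = new_node("VAR")  # identifier dropped
            v = var_nodes[node.name]
            rel = 2 if writing else 3                   # write vs. read
            edges.extend([(nid, v, rel), (v, nid, rel)])
            if node.name in last_occ:                   # chronological order
                edges.append((last_occ[node.name], nid, 4))
            last_occ[node.name] = nid
        prev = None
        for cname, child in node.children():
            is_write = isinstance(node, c_ast.Assignment) and cname == "lvalue"
            cid = visit(child, writing=is_write)
            edges.extend([(nid, cid, 0), (cid, nid, 0)])  # child, both ways
            if prev is not None:
                edges.append((prev, cid, 1))              # sibling edge
            prev = cid
        return nid

    visit(ast)
    return nodes, edges
```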
_The novelty of our graph representation_ is that we create a unique variable node for each variable in the program and connect each variable's occurrence to its unique node. This lets us map two variables in two programs, even if their number of occurrences is different in each program. Furthermore, the variable's identifier is suppressed after we connect all the variable's occurrences to its unique node. This way, all the variables' identifiers are anonymized. Prior work on representing programs as graphs [3; 38; 4] use different nodes for each variable occurrence and take into consideration the variable identifier in the program representation. Furthermore, to the best of our knowledge, combining all five types of edges (sibling, write, read, chronological, and AST) is also novel. Section 5.3 presents an ablation study on the set of edges to analyze the impact of each type of edge.
## 3 Graph Neural Networks (GNNs)
Graph Neural Networks (GNNs) are a subclass of neural networks designed to operate on graph-structured data [20], which may be citation networks [7], mathematical logic [9] or representations of computer code [3]. Here, we use graph representations of a pair of ASTs, representing two programs for which we want to match variables, as the input. The main operative mechanism is to perform _message passing_ between the nodes, so that information about the global problem can be passed between the local constituents. The content of these messages and the final representation of the nodes is parameterized by neural network operations (matrix multiplications composed with a non-linear function). For the variable matching task, we do the following to train the parameters of the network. After several message passing rounds through the edges defined by the program representations above, we obtain numerical vectors corresponding to each variable node in the two programs. We compute scalar products between each possible combination of variable nodes in the two programs, followed by a softmax function. Since the program samples are obtained by program mutation, the correct mapping of variables is known. Hence, we can compute a cross-entropy loss and minimize it so that the network output corresponds to the labeled variable matching. Note that the network has no information on the name of any object, which means that the task must be solved purely based on the structure of the graph representation. Therefore, our method is invariant to the consistent renaming of variables.
Architecture Details.The specific GNN architecture used in this work is the relational graph convolutional neural network (RGCN), which can handle multiple edges or relation types within one graph [35]. The numerical representation of nodes in the graph is updated in the message passing step according to the following equation:
\[\mathbf{x}^{\prime}_{i}=\mathbf{\Theta}_{\text{root}}\cdot\mathbf{x}_{i}+\sum_{r\in\mathcal{R}}\sum_{j\in\mathcal{N}_{r}(i)}\frac{1}{|\mathcal{N}_{r}(i)|}\mathbf{\Theta}_{r}\cdot\mathbf{x}_{j},\]
where \(\mathbf{\Theta}\) are the trainable parameters, \(\mathcal{R}\) stands for the different edge types that occur in the graph, and \(\mathcal{N}_{r}\) the neighbouring nodes of the current node \(i\) that are connected with the edge type \(r\)[32]. After each step, we apply Layer Normalization [5] followed by a Rectified Linear Unit (ReLU) non-linear function.
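A sketch of such an encoder in PyTorch Geometric, whose RGCNConv layer implements the update rule above with mean normalization per relation; the hidden width of 64, the node-type embedding, and the vocabulary size are assumptions, while the five relations and five steps follow the text:

```python
import torch
from torch_geometric.nn import RGCNConv

class ProgramEncoder(torch.nn.Module):
    def __init__(self, num_node_types, num_relations=5, dim=64, steps=5):
        super().__init__()
        self.embed = torch.nn.Embedding(num_node_types, dim)
        self.convs = torch.nn.ModuleList(
            [RGCNConv(dim, dim, num_relations) for _ in range(steps)])
        self.norms = torch.nn.ModuleList(
            [torch.nn.LayerNorm(dim) for _ in range(steps)])

    def forward(self, node_type, edge_index, edge_type):
        # node_type: long tensor of node-type indices for one program graph
        x = self.embed(node_type)
        for conv, norm in zip(self.convs, self.norms):
            x = torch.relu(norm(conv(x, edge_index, edge_type)))
        return x  # one vector per node; variable-node rows are used below

# Separate parameter sets for the buggy and the correct program, as described:
enc_buggy, enc_correct = ProgramEncoder(50), ProgramEncoder(50)
```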
We use two separate sets of parameters for the message passing phase for the program with the bug and the correct program. Five
Figure 2: AST and our graph representation for the small code snippet presented in Listing 3.
message passing steps are used in this work. After the message passing phase, we obtain numerical vectors representing every node in both graphs. We then calculate dot products \(\vec{a}\cdot\vec{b}\) between the vectors representing variable nodes in the buggy program graph \(a\in A\) and the variable nodes from the correct graph \(b\in B\), where \(A\) and \(B\) are the sets of variable node vectors. A score matrix \(\mathcal{S}\) with dimensions \(|A|\times|B|\) is obtained, to which we apply the softmax function on each row to obtain the matrix \(\mathcal{P}\). The values in each row of \(\mathcal{P}\) can now be interpreted as representing the probability that variable \(a_{i}\) maps to each of the variables \(b_{j}\).
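In code, the scoring and the training objective amount to a few lines; a minimal PyTorch sketch (tensor shapes follow the description above, and `target` holds the known match indices, which are available because the training pairs are generated by mutation):

```python
import torch
import torch.nn.functional as F

def mapping_scores(h_buggy, h_correct):
    """h_buggy: |A| x d variable-node vectors of the buggy program,
    h_correct: |B| x d of the correct one; returns the |A| x |B| matrix S."""
    return h_buggy @ h_correct.t()

def mapping_loss(h_buggy, h_correct, target):
    # cross_entropy applies the row-wise softmax (matrix P) internally;
    # target[i] is the index of the true match for buggy variable i.
    return F.cross_entropy(mapping_scores(h_buggy, h_correct), target)

def predict_mapping(h_buggy, h_correct):
    P = torch.softmax(mapping_scores(h_buggy, h_correct), dim=1)
    return P.argmax(dim=1), P   # most likely match per buggy variable
```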
## 4 Use-Cases: Program Repair
In this section, we propose a few use-cases on how to use variable mappings for program repair. More specifically, to repair bugs of: _wrong comparison operator_, _variable misuse_, and _missing expression_. These bugs are common among novice programmers [36] and have been studied by prior work in the field of automated program repair [3, 31, 38, 4]. The current state-of-the-art semantic program repair tools focused on repairing IPAs, such as Clara[10] and Verifix[1], are only able to fix these bugs if the correct expression in the correct program is located in a similar program structure as the incorrect expression in the incorrect implementation. For example, consider again the two programs presented in Figure 1. If the loop condition was incorrect in the faulty program, Clara and Verifix could not fix it, since the control flow graphs do not match. Thus, these tools would fail due to _structural mismatch_.
The following sections present three program repair tasks that take advantage of variable mappings to repair an incorrect program using a correct implementation for the same IPA without considering the programs' structures. Our main goal is to show the usefulness of variable mappings. We claim that variable mappings are informative enough to repair these three realistic types of bugs. Given a buggy program, we search for and try to repair all three types of bugs. Whenever we find a possible fix, we check if the program is correct using the IPA's test suite.
Bug #1: Wrong Comparison Operator (WCO).Our first use-case concerns faulty programs with the bug of wrong comparison operator (WCO). This is a recurrent bug in students' submissions to IPAs, since novice programmers frequently use the wrong operator, e.g., \(\mathrm{i}<=\mathrm{n}\) instead of \(\mathrm{i}<\mathrm{n}\).
We propose tackling this problem solely based on the variable mapping between the faulty and correct programs, ignoring the programs' structure. First, we rename all the variables in the incorrect program based on the variable mapping by changing all the variables' identifiers in the incorrect program with the corresponding variables' identifiers in the correct implementation. Second, we count the number of times each comparison operation appears with a specific pair of variables/expressions in each program. Then, for each comparison operation in the correct program, we compute the mirrored expression, i.e., swapping the operator by its mirrored operator, and swapping the left-side and right-side of the operation. This way, if the incorrect program has the same correct mirrored expression, we can match it with an expression in the correct program. For example, in the programs shown in Figure 1, both loop conditions would match even if they are mirrored expressions, i.e., \(\mathrm{i}<=\mathrm{n}\) and \(\mathrm{n}>=\mathrm{i}\).
Afterwards, we iterate over all the pairs of variables/expressions that appear in comparison operations of the correct program (plus the mirrored expressions) and compare if the same pair of variables/expressions appear the same number of times in the incorrect program, using the same comparison operator. If this is not the case, we try to fix the program using the correct implementation's operator in each operation of the incorrect program with the same pair of variables/expressions. Once the program is fixed, we rename all the variables based on the reverse variable mapping.
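A sketch of the matching step, assuming comparisons have been extracted (after renaming) as (lhs, op, rhs) string triples; canonicalizing via the mirrored operator makes \(\mathrm{i}<=\mathrm{n}\) and \(\mathrm{n}>=\mathrm{i}\) coincide:

```python
from collections import Counter

MIRROR = {"<": ">", ">": "<", "<=": ">=", ">=": "<=", "==": "==", "!=": "!="}

def canonical(lhs, op, rhs):
    # Order operands deterministically so mirrored expressions coincide.
    return (lhs, op, rhs) if lhs <= rhs else (rhs, MIRROR[op], lhs)

def wco_candidates(correct_cmps, buggy_cmps):
    """Operand pairs present in both programs but under different
    operators; each suggested operator swap is validated on the test suite."""
    cor = Counter(canonical(*c) for c in correct_cmps)
    bug = Counter(canonical(*c) for c in buggy_cmps)
    return [((l, r), op, cop)
            for (l, op, r) in bug
            for (cl, cop, cr) in cor
            if (l, r) == (cl, cr) and op != cop]
```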
Bug #2: Variable Misuse (VM).Our second program repair task concerns buggy programs with variables being misused, i.e., the student uses the wrong variable in some program location. The wrong variable is of the same type as the correct variable that should be used. Hence, this bug does not produce any compilation errors. This type of bug is common among students and experienced programmers [17, 37]. The task of detecting this specific bug has received much attention from the Machine Learning (ML) research community [3, 38, 43].
Once again, we propose to tackle this problem based on the variable mapping between the faulty program and the correct one, ignoring the programs' structure. We start by renaming all the variables in the incorrect program based on the variable mapping. Then we count the number of times each variable appears in both programs. If a variable, \(\mathrm{x}\), appears more times in the incorrect program than in the correct implementation, and if another variable \(\mathrm{y}\) appears more times in the correct program, then we try to replace each occurrence of \(\mathrm{x}\) in the incorrect program with \(\mathrm{y}\). Once the program is fixed, we rename all the variables based on the reverse variable mapping.
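The occurrence-count heuristic is equally compact; a sketch over lists of (renamed) variable occurrences:

```python
from collections import Counter

def vm_candidates(buggy_occurrences, correct_occurrences):
    """Variables over-represented in the buggy program paired with those
    under-represented relative to the correct one; each x -> y replacement
    is then tried and checked against the test suite."""
    bug, cor = Counter(buggy_occurrences), Counter(correct_occurrences)
    overused = [v for v in bug if bug[v] > cor.get(v, 0)]
    underused = [v for v in cor if cor[v] > bug.get(v, 0)]
    return [(x, y) for x in overused for y in underused if x != y]
```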
Bug #3: Missing Expression (ME).The last use-case we will focus on is to repair the bug of _missing expressions/assignments_. This bug is also recurrent in students' implementations of IPAs [36]. Frequently, students forget to initialize some variable or to increment a variable of some loop, resulting in a bug of missing expression. However, unlike the previously mentioned bugs, this one has not received much attention from the ML community since it is more complex to repair this program fault. To search for a possible fix, we start by renaming all the variables in the incorrect program based on the variable mapping. Next, we count the number of times each expression appears in both programs. Expressions that appear more frequently in the correct implementation are considered possible repairs. Then, we try to inject these expressions, one at a time, into the incorrect implementation's code blocks and check the program's correctness. Once the program is fixed, we rename all the variables based on the reverse variable mapping. This task is solely based on the variable mapping between the faulty and the correct programs.
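And a sketch of the injection loop for missing expressions; here `candidates` are statements over-represented in the correct program (counted as for the other bugs), while `insert_points` (line indices of the buggy program's code blocks) and the test oracle `run_tests` are hypothetical helpers assumed to be provided by the surrounding framework:

```python
def repair_missing_expression(lines, candidates, insert_points, run_tests):
    """Try each candidate statement at each insertion point of the renamed
    buggy program; keep the first variant that passes the test suite."""
    for stmt in candidates:
        for i in insert_points:
            patched = lines[:i] + [stmt] + lines[i:]
            if run_tests("\n".join(patched)):
                return patched   # fixed; variables are then renamed back
    return None
```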
## 5 Experiments
Experimental Setup.We trained the Graph Neural Networks on an Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz server with 72 CPUs and 692GB RAM. Networks were trained using NVIDIA GEFORCE GTX 1080 graphics cards with 12GB of memory. All the experiments related to our program repair tasks were conducted on an Intel(R) Xeon(R) Silver computer with 4210R CPUs @ 2.40GHz, using a memory limit of 64GB and a timeout of 60 seconds.
### IPAs Dataset
To evaluate our work, we used C-Pack-IPAs[26], a benchmark of student programs developed during an introductory programming course in the C programming language for ten different IPAs, over two distinct academic years, at Instituto Superior Tecnico. These
IPAs are small imperative programs that deal with integers and input-output operations (see Appendix B).
First, we selected a set of correct submissions, i.e., programs that compiled without any error and satisfied a set of input-output test cases for each IPA. We gathered 238 correct students' submissions from the first year and 78 submissions from the second year. We used the students' submissions from the first year for training and for validating our GNN and the submissions from the second year for evaluating our work.
Since we need to know the real variable mappings between programs (ground truth) to evaluate our representation, we generated a dataset of pairs of correct/incorrect programs to train and evaluate our work with specific bugs. This is a common procedure to evaluate machine learning models in the field of program repair [3, 38, 4, 43, 29]. To generate this dataset, we used MultIPAs [28], a program modifier capable of mutating C programs syntactically, generating semantically equivalent programs, i.e., changing the program's structure but keeping its semantics. There are several program mutations available in MultIPAs: mirroring comparison expressions, swapping the if's then-block with the else-block and negating the test condition, increment/decrement operators mirroring, variable declarations reordering, translating for-loops into equivalent while-loops, and all possible combinations of these program mutations. Hence, MultIPAs has thirty-one different configurations for mutating a program. All these program mutations generate semantically equivalent programs. Afterwards, we also used MultIPAs, to introduce bugs into the programs, such as _wrong comparison operator_ (WCO), _variable misuse_ (VM), _missing expression_ (ME). Hence, we gathered a dataset of pairs of programs and the mappings between their sets of variables (see Appendix A). Each pair corresponds to a real correct student's implementation, and the second program is the student's program after being mutated and with some bug introduced. Thus, this IPA dataset is generated, although based on real programs. The dataset is divided into three different sets: training set, validation set, and evaluation set. The programs generated from _first year_ submissions are divided into a training and validation set based on which students' submissions they derive from. 80% of the students supply the training data, while 20% supply validation data. The evaluation set, which is not used during the machine learning optimization, is chronologically separate: it consists only of _second year_ submissions, to simulate the real-world scenario of new, incoming students. The training set is composed of 3372, 5170, and 2908 pairs of programs from the first academic year for the WCO, VM, and ME bugs, respectively. The validation set, which was used during development to check the generalization of the prediction to unseen data, comprises 1457, 1457, and 1023 pairs of programs from the first year. Note that we subsample from the full spectrum of possible mutations, to keep the training data size small enough to train the network with reasonable time constraints. From each of the 31 combinations of mutations, we use one randomly created sample for each student per exercise. We found that this already introduced enough variation in the training dataset to generalize to unseen data. Finally, the evaluation set is composed of 4166 pairs of programs from the second year (see \(3^{rd}\) row, Table 2). This dataset will be publicly available for reproducibility reasons.
### Training
At training time, since the incorrect program is generated, the mapping between the variables of both programs is known. The network is trained by minimizing the cross entropy loss between the labels (which are categorical integer values indicating the correct mapping) and the values in each corresponding row of the matrix \(\mathcal{P}\). As an optimizer, we used the Adam algorithm with its default settings in PyTorch [19]. The batch size was 1. As there are many different programs generated by the mutation procedures, we took one sample from each mutation for each student. Each network was trained for 20 full passes (epochs) over this dataset while shuffling the order of the training data before each pass. For validation purposes, data corresponding to 20\(\%\) of the students from the first year of the dataset was kept separate and not trained on.
Table 1 shows the percentage of validation data mappings that were exactly correct (accuracy) after 20 epochs of training, using four different GNN models. Each GNN model was trained on programs with the bugs of wrong comparison operator (WCO), variable misuse (VM), missing expression (ME) or all of them (All). Furthermore, each GNN model has its own validation set with programs with a specific type of bug. The GNN model trained on All Bugs was validated using a mix of problems from each bug type. In the following sections, we focus only on this last GNN model (All Bugs).
### Evaluation
Our GNN model was trained on programs with bugs of wrong comparison operator (WCO), variable misuse (VM), and missing expression (ME). We used two evaluation metrics to evaluate the variable mappings produced by the GNN. First, we counted the number of totally correct mappings our GNN was able to generate. We consider a variable mapping totally correct if it correctly maps all the variables
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & \multicolumn{4}{c}{**Buggy Programs**} \\ \cline{2-5}
**Evaluation Metric** & WCO Bug & VM Bug & ME Bug & All Bugs \\ \hline \# Correct Mappings & 87.38\% & 81.87\% & 79.95\% & 82.77\% \\ Avg Overlap Coefficient & 96.99\% & 94.28\% & 94.51\% & 95.05\% \\ \hline \# Programs & 1078 & 1936 & 1152 & 4166 \\ \hline \hline \end{tabular}
\end{table}
Table 2: The number of correct variable mappings generated by our GNN on the evaluation dataset and the average overlap coefficients between the real mappings and our GNN’s variable mappings.
\begin{table}
\end{table}
Table 1: Validation mappings fully correct after 20 training epochs.
between two programs. Secondly, we computed the overlap coefficient between the original variable mappings and the variable mappings generated by our GNN. The overlap coefficient is a similarity metric given by the intersection between the two mappings divided by the length of the variable mapping (see Appendix D).
The first row in Table 2 shows the number of totally correct variable mappings computed by our GNN model. One can see that the GNN maps correctly around 83% of the evaluation dataset. We have also looked into the number of variables in the mappings we were not getting entirely correct. The results showed that programs with more variables (e.g., six or seven variables) are the most difficult for our GNN to map their variables correctly (see Appendix C). For this reason, we have also computed the overlap coefficient between the GNN's variables mappings and the original mappings (ground truth). The second row in Table 2 shows the average of the overlap coefficients between the original variable mappings and the mappings generated by our GNN model. The overlap coefficient [39] measures the intersection (overlap) between two mappings. If the coefficient is \(100\%\), both sets are equal. One set cannot be a subset of the other since both sets have the same number of variables in our case. The opposite is \(0\%\) overlap, meaning there is no intersection between the two mappings. The GNN achieved at least 94% of overlap coefficients, i.e., even if the mappings are not always fully correct, almost 94% of the variables are correctly mapped by the GNN.
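Since both mappings cover the same set of variables, the overlap coefficient reduces to the fraction of agreeing entries; a sketch:

```python
def overlap_coefficient(predicted: dict, truth: dict) -> float:
    agree = sum(1 for v in truth if predicted.get(v) == truth[v])
    return agree / len(truth)

# e.g. overlap_coefficient({"n": "l", "i": "k"}, {"n": "l", "i": "j"}) -> 0.5
```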
Ablation Study.To study the effect of each type of edge in our program representation, we have performed an ablation study on the set of edges. Prior works have done similar ablation studies [3]. Table 3 presents the accuracy of our GNN (i.e., number of correct mappings) on the evaluation dataset after 20 epochs. We can see that the accuracy of our GNN drops from 96% to 53% if we remove the AST edges (index 0), which was expected since these edges provide syntactic information about the program. Removing the sibling edges (index 1) also causes a great impact on the GNN's performance, dropping to 74%. The other edges are also important, and if we remove them, there is a negative impact on the GNN's performance. Lastly, since the AST and sibling edges caused the greatest impact, we evaluated using only these edges on our GNN and got an accuracy of 94.7%. However, the model using all the proposed edges has the highest accuracy of 96.49%.
### Program Repair
This section presents the results of using variable mappings on the three use-cases described in Section 4, i.e., the tasks of repairing bugs of: _wrong comparison operator_ (WCO), _variable misuse_ (VM) and _missing expression_ (ME). For this evaluation, we have also used the two current publicly available program repair tools for fixing introductory programming assignments (IPAs): Clara[10] and Verifix[1]. Furthermore, we have tried to fix each pair of incorrect/correct programs in the evaluation dataset by passing each one of these pairs of programs to every repair method: Verifix, Clara, and our repair approach based on the GNN's variable mappings.
If our repair procedure cannot fix the incorrect program using the most likely variable mapping according to the GNN model, then it generates the next most likely mapping based on the variables' distributions computed by the GNN. Therefore, the repair method iterates over all variable mappings based on the GNN's predictions. Lastly, we have also run the repair approach using as baseline variable mappings generated based on uniform distributions. This case simulates most repair techniques that compute all possible mappings between both programs' variables (e.g., SearchRepair[18]).
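A sketch of this enumeration; ranking complete mappings by their joint probability under the GNN's row-wise distributions \(\mathcal{P}\) is our assumption about how the "next most likely mapping" is generated, and the brute force over all assignments is viable only for the handful of variables typical of IPAs:

```python
import itertools
import numpy as np

def ranked_mappings(P):
    """Yield assignments (one correct-program variable index per buggy
    variable) in decreasing order of joint probability under P (|A| x |B|)."""
    nA, nB = P.shape
    scored = [(float(np.prod(P[np.arange(nA), list(assign)])), assign)
              for assign in itertools.product(range(nB), repeat=nA)]
    for _, assign in sorted(scored, key=lambda t: t[0], reverse=True):
        yield assign
```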
Table 4 presents the number of programs repaired by each different repair method. The first row presents the results for the baseline, which was only able to fix around 50% of the evaluation dataset. In the second row, the interested reader can see that Verifix can only repair about 62% of all programs. Clara, presented in the third row, outperforms Verifix, being able to repair around 72% of the whole dataset. The last row presents the GNN model. This model performs best, repairing 88.5% of the dataset.
The number of executions that resulted in a timeout (60 seconds) is relatively small for Verifix and Clara. Regarding our repair procedure, it either fixes the incorrect program or iterates over all variable mappings until it finds one that fixes the program. Thus, the baseline and the GNN present no failed executions and considerably high rates of executions that end up in timeouts, almost 50% for the baseline and 11.5% in the case of the GNN model. Additionally, Table 4 also presents the failure rate of each technique, i.e., all the computations that ended within 60 seconds and did not succeed in fixing the given incorrect program. Verifix has the highest failure rate, around 35% of the entire evaluation set. Clara also presents a significant failure rate, about 28%. As explained previously, this is the main drawback of these tools. Hence, these results support our claim that it is possible to repair these three realistic bugs solely based on the variable mappings' information without matching the structure of the incorrect/correct programs.
Furthermore, considering all executions, the average number of variable mappings used within 60 seconds is 1.24 variable mappings for the GNN model and 5.6 variable mappings when considering the baseline. The minimum number of mappings generated by both approaches is 1, i.e., both techniques were able to fix at least one incorrect program using the first generated variable mapping. The maximum number of variable mappings generated was 32 (resp. 48) for the GNN (resp. baseline). The maximum number of variable mappings used is high because the repair procedure iterates over all the variable mappings until the program is fixed or the time runs out. Moreover, even if we would only consider using the first variable mapping generated by the GNN model to repair the incorrect programs, we would be able to fix 3377 programs in 60 seconds, corresponding to 81% of the evaluation dataset.
Regarding the time performance of each technique, Figure 3 shows a cactus plot that presents the CPU time spent, in seconds, on repairing each program (\(y\)-axis) against the number of repaired programs (\(x\)-axis) using different repairing techniques. One can clearly see a gap between the different repair methods' time performances. For example, in 10 seconds, the baseline can only repair around 1150 programs, Veri-Fix repairs around 2300, Clara repairs around 2850
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline
**Edges Used** & All & (1,2,3,4) & (0,2,3,4) & (0,1,3,4) & (0,1,2,4) & (0,1,2,3) & (0,1) \\ \hline
**Accuracy** & **96.49\%** & 52.53\% & 73.76\% & 95.45\% & 94.87\% & 96.06\% & 94.74\% \\ \hline \hline \end{tabular}
\end{table}
Table 3: Percentage of variable mappings fully correct on the validation set for different sets of edges used. Each type of edge is represented by an index using the mapping: {0: AST; 1: sibling; 2: write; 3: read; 4: chronological}.
programs, while using the GNN's variable mappings we can repair around 3350 programs, i.e., around 17% more. We are considering the time the GNN takes to generate the variable mappings and the time spent on the repair procedure. However, the time spent by the GNN to generate one variable mapping is almost insignificant. The average time the GNN takes to produce a variable mapping is 0.025 seconds. The minimum (resp. maximum) time spent by the GNN, considering all the executions, is 0.015s (resp. 0.183s).
## 6 Related Work
_Automated program repair_[1, 24, 10, 12, 42] has become crucial to provide feedback to novice programmers by checking their introductory programming assignments (IPAs) submissions using a test suite. In order to repair an incorrect program with a correct reference implementation, Clara[10] requires a perfect match between both programs' control flow graphs and a bijective relation between both programs' variables. Otherwise, Clara returns a structural mismatch error. Verifix[1] aligns the control flow graph (CFG) of an incorrect program with the reference solution's CFG. Then, using that alignment relation and MaxSMT solving, Verifix proposes fixes to the incorrect program. Verifix also requires a compatible control flow graph between the incorrect and the correct program. BugLab[4] is a Python program repair tool that learns how to detect and fix minor semantic bugs. To train BugLab, [4] applied four program mutations and introduced four different bugs to augment their benchmark of Python programs. DeepBug[31] uses rule-based mutations to build a dataset of programs from scratch to train its ML-based program repair tool. Given a program, this tool classifies if the program is buggy or not.
_Mapping variables_ can also be helpful for the task of _code adaptation_, where the repair framework tries to adapt all the variable names in a pasted snippet of code, copied from another program or a Stack Overflow post, to the surrounding preexisting code [25]. AdaptivePaste[25] focused on a task similar to _variable misuse_ (VM) repair: it uses a sequence-to-sequence, multi-decoder transformer trained to learn programming language semantics and adapt variables in the pasted snippet of code. Recently, several systems were proposed to tackle the VM bug with ML models [3, 13, 41]. These tools classify the variable locations as faulty or correct and then replace the faulty ones through an enumerative prediction of each buggy location [3]. However, none of these methods takes program semantics into account, especially the long-range dependencies of variable usages [25].
## 7 Conclusions
This paper tackles the highly challenging problem of mapping variables between programs. We propose the usage of graph neural networks (GNNs) to map the set of variables between two programs using our novel graph representation that is based on both programs' abstract syntax trees. In a dataset of 4166 pairs of incorrect/correct programs, experiments show that our GNN correctly maps 83% of the evaluation dataset. Furthermore, we leverage the variable mappings to perform automatic program repair. While the current state-of-the-art on program repair can only repair about 72% of the evaluation dataset due to structural mismatch errors, our approach, based on variable mappings, is able to fix 88.5%.
In future work, we propose to integrate our variable mappings into other program repair tools to evaluate the impact of using these mappings to repair other types of bugs. Additionally, we will analyze using our mappings to fix an incorrect program using several correct programs.
## Acknowledgements
This work was supported by Portuguese national funds through FCT under projects UIDB/50021/2020, PTDC/CCI-COM/2156/2021, 2022.03537.PTDC and grant SFRH/BD/07724/2020. This work was also supported by European funds through COST Action CA2011; by the European Regional Development Fund under the Czech project AI&Reasoning no. CZ.02.1.01/0.0/0.0/15_003/0000466 (JP), Amazon Research Awards (JP), and by the Ministry of Education, Youth, and Sports within the program ERC CZ under the project POSTMAN no. LL1902. This article is part of the RICAIP project
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline & \multicolumn{3}{c}{**Buggy Programs**} & \multicolumn{3}{c}{**Not Succeeded**} \\ \cline{2-7}
**Repair Method** & WCO Bug & VM Bug & ME Bug & All Bugs & **\% Failed** & **\% Timeouts (60s)** \\ \hline
**Baseline** & 618 (57.33\%) & 1187 (61.31\%) & 287 (24.91\%) & 2092 (50.22\%) & 0 (0.0\%) & **2074 (49.78\%)** \\
**Verifix** & 555 (51.48\%) & 1292 (66.74\%) & 741 (64.32\%) & 2588 (62.12\%) & **1471 (35.31\%)** & 107 (2.57\%) \\
**Clara** & 722 (66.98\%) & 1517 (78.36\%) & 764 (66.32\%) & 3003 (72.08\%) & 1153 (27.68\%) & 10 (0.24\%) \\
**GNN** & **992 (92.02\%)** & **1714 (88.53\%)** & **981 (85.16\%)** & **3687 (88.5\%)** & 0 (0.0\%) & 479 (11.5\%) \\ \hline \hline \end{tabular}
\end{table}
Table 4: The number of programs repaired by each different repair technique: Verifix, Clara, and our repair approach based on our GNN’s variable mappings. The first row shows the results of repairing the programs using variable mappings generated based on uniform distributions (baseline).
Figure 3: Cactus plot - The time spent by each method repairing each program of the evaluation dataset, using a timeout of 60 seconds.
This article is part of the RICAIP project that has received funding from the EU's Horizon 2020 research and innovation program under grant agreement No 857306.
## Appendix A IPAs Dataset Generation
To evaluate our work, we have generated a dataset of pairs of programs based on a benchmark of student programs developed during an introductory programming course in the C programming language for ten different introductory programming assignments (IPAs), over two distinct academic years. We selected only semantically correct submissions, i.e., programs that compiled without any error and satisfied a set of input-output test cases for each IPA.
Afterwards, we generated a dataset of pairs of correct/incorrect programs to train and evaluate our work with specific bugs. The reason to generate programs is that we need to know the real variable mappings between two programs (ground truth) to evaluate our representation. As explained in the paper, we used MultiIPAs [28] to generate this dataset. This tool can mutate our programs syntactically, generating semantically equivalent programs. There are several program mutations available in MultiIPAs, such as: mirroring comparison expressions, swapping the if's then-block with the else-block and negating the test condition, increment/decrement operators mirroring, variable declarations reordering, translating for-loops into equivalent while-loops, and all possible combinations of these program mutations. Hence, MultiIPAs has 31 different configurations for mutating a program. Each program mutation can be applied in more than one place for a given program. Hence, each program mutation can generate several different mutated programs. For example, using the program mutation that reorders variable declarations, each possible reordering generates a different mutated program.
Regarding the generation of buggy programs, we also used MultiIPAs to introduce bugs into the programs, such as _wrong comparison operator_ (WCO), _variable misuse_ (VM) and _missing expression_ (ME). Each bug can be applied in more than one place for a given program. Thus, one program can generate several different buggy programs using the same bug. For example, the bug of variable misuse can be applied to each variable occurrence in the program, and each occurrence generates a distinct buggy program.
Figure 4 presents the generation of our dataset. Firstly, we applied all the available program mutations to each correct student's submission. Then, for each mutated program, we applied all three types of bugs: WCO, VM and ME. Finally, we gathered a dataset of pairs of programs and the mappings between their sets of variables. As Figure 4 shows, each pair of programs, in our generated dataset, corresponds to a correct student's implementation and the student's program after being mutated and with some bug introduced.
## Appendix B Description of IPAs
The set of Introductory Programming Assignments (IPAs) used to train and evaluate the GNN model is part of the C-Pack-IPAs benchmark [26]. In this set of IPAs the students learn how to program with integers, floats, IO operations (mainly printf and scanf), conditionals (if-statements), and simple loops (for and while-loops).
Ipa #1.Write a program that determines and prints the largest of three integers given by the user.
Ipa #2.Write a program that reads two integers 'N, M' and prints the smallest of them in the first row and the largest in the second.
Ipa #3.Write a program that reads two positive integers 'N, M' and prints "yes" if 'M' is a divisor of 'N', otherwise prints "no".
Ipa #4.Write a program that reads three integers and prints them in order on the same line. The smallest number must appear first.
Ipa #5.Write a program that reads a positive integer 'N' and prints the numbers '1..N', one per line.
Ipa #6.Write a program that determines the largest and smallest number of 'N' real numbers given by the user. Consider that 'N' is a value requested from the user. The result must be printed with the command 'printf("min: %f, max: %f\n", min, max)'.
Ipa #7.Write a program that asks the user for a positive integer 'N' and prints the number of divisors of 'N'. Remember that prime numbers have 2 divisors.
Ipa #8.Write a program that calculates and prints the average of 'N' real numbers given by the user. The program should first ask the user for an integer 'N', representing the number of numbers to be entered. The real numbers must be represented by float type. The result must be printed with the command 'printf("%.2f", avg);'.
Ipa #9.Write a program that asks the user for a value 'N' corresponding to a certain period of time in seconds. The program should output this period of time in the format 'HH:MM:SS'.
Ipa #10.Write a program that asks the user for a positive value 'N'. The output should present the number of digits that make up 'N' (on the first line), as well as the sum of the digits of 'N' (on the second line). For example, the number 12345 has 5 digits, and the sum of these digits is 15.
## Appendix C #Correct/Incorrect Mappings vs #Variables
Figure 5 shows a histogram of the number of programs (\(y\)-axis) against the number of variables (\(x\)-axis): programs whose variables our GNN models map entirely correctly (#Correct Mappings) are shown in green, and programs with at least one incorrectly mapped variable (#Incorrect Mappings) are shown in red.
## Appendix D Overlap Coefficient
The overlap or Szymkiewicz-Simpson coefficient measures the overlap between two sets (e.g. mappings). This metric can be calculated by dividing the size of the intersection of two sets by the size of the smaller set, as follows:
\[overlap(A,B)=\frac{|A\cap B|}{min(|A|,|B|)} \tag{1}\]
An overlap of \(100\%\) means that both sets are equal or one of them is a subset of the other. The opposite, \(0\%\) overlap, means there is no intersection between both sets.
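As a concrete illustration, here is a minimal Python sketch of ours (the representation of mappings as sets of variable pairs, and the function name, are our own choices, not the paper's implementation):

```python
def overlap_coefficient(a: set, b: set) -> float:
    """Szymkiewicz-Simpson overlap coefficient between two sets."""
    if not a or not b:
        return 0.0
    return len(a & b) / min(len(a), len(b))

# Two variable mappings, each a set of (variable in P1, variable in P2) pairs.
m1 = {("i", "j"), ("n", "m"), ("sum", "acc")}
m2 = {("i", "j"), ("n", "m")}
print(overlap_coefficient(m1, m2))  # 1.0: m2 is a subset of m1
```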
2304.04010 | Non-asymptotic approximations of Gaussian neural networks via second-order Poincaré inequalities | Alberto Bordino, Stefano Favaro, Sandra Fortini | 2023-04-08T13:52:10Z | http://arxiv.org/abs/2304.04010v1

# Non-asymptotic approximations of Gaussian neural networks via second-order Poincare inequalities
###### Abstract
There is a growing interest on large-width asymptotic properties of Gaussian neural networks (NNs), namely NNs whose weights are initialized according to Gaussian distributions. A well-established result is that, as the width goes to infinity, a Gaussian NN converges in distribution to a Gaussian stochastic process, which provides an asymptotic or qualitative Gaussian approximation of the NN. In this paper, we introduce some non-asymptotic or quantitative Gaussian approximations of Gaussian NNs, quantifying the approximation error with respect to some popular distances for (probability) distributions, e.g. the 1-Wasserstein distance, the total variation distance and the Kolmogorov-Smirnov distance. Our results rely on the use of second-order Gaussian Poincare inequalities, which provide tight estimates of the approximation error, with optimal rates. This is a novel application of second-order Gaussian Poincare inequalities, which are well-known in the probabilistic literature for being a powerful tool to obtain Gaussian approximations of general functionals of Gaussian stochastic processes. A generalization of our results to deep Gaussian NNs is discussed.
## 1 Introduction
There is a growing interest on large-width asymptotic properties of Gaussian neural networks (NNs), namely NNs whose weights or parameters are initialized according to Gaussian distributions (Neal, 1996; Williams, 1997; Der and Lee, 2005; Garriga-Alonso et al., 2018; Lee et al., 2018; Matthews et al., 2018; Novak et al., 2018; Antognini, 2019; Hanin, 2019; Yang, 2019; Aitken and Gur-Ari, 2020; Andreassen and Dyer, 2020; Bracale et al., 2021; Eldan et al., 2021; Basteri and Trevisan, 2022). Let \(\mathcal{N}(\mu,\sigma^{2})\) be a Gaussian distribution with mean \(\mu\) and variance \(\sigma^{2}\), and consider: i) an input \(\mathbf{x}\in\mathbb{R}^{d}\), with \(d\geq 1\); ii) a collection of (random) weights \(\theta=\{w_{i}^{(0)},w,b_{i}^{(0)},b\}_{i\geq 1}\) such that \(w_{i,j}^{(0)}\stackrel{{ d}}{{=}}w_{j}\), with the \(w_{i,j}^{(0)}\)'s being independent and identically distributed as \(\mathcal{N}(0,\sigma_{w}^{2})\), and \(b_{i}^{(0)}\stackrel{{ d}}{{=}}b\), with the \(b_{i}^{(0)}\)'s being independent and identically distributed as \(\mathcal{N}(0,\sigma_{b}^{2})\) for \(\sigma_{w}^{2},\sigma_{b}^{2}>0\); iii) an activation function \(\tau:\mathbb{R}\rightarrow\mathbb{R}\). Then, a (fully connected
feed-forward) Gaussian NN is defined as follows:
\[f_{\mathbf{x}}(n)[\tau,n^{-1/2}]=b+\frac{1}{n^{1/2}}\sum_{j=1}^{n}w_{j}\tau(\langle w _{j}^{(0)},\mathbf{x}\rangle_{\mathbb{R}^{d}}+b_{j}^{(0)}), \tag{1}\]
with \(n^{-1/2}\) being a scaling factor. Neal (1996) characterized the infinitely wide limit of the NN (1), showing that, as \(n\to+\infty\), for any \(\mathbf{x}\in\mathbb{R}^{d}\) the NN \(f_{\mathbf{x}}(n)[\tau,n^{-1/2}]\) converges in distribution to a Gaussian random variable (RV). That is, as a function of \(\mathbf{x}\), the infinitely wide limit of the NN is a Gaussian stochastic process. The proof is an application of the classical Central Limit Theorem (CLT), thus relying on minimal assumptions on \(\tau\) to ensure that \(\mathbb{E}[(g_{j}(\mathbf{x}))^{2}]\) is finite, where \(g_{j}(\mathbf{x})=w_{j}\tau(\langle w_{j}^{(0)},\mathbf{x}\rangle_{\mathbb{R}^{d}}+b_ {j}^{(0)})\). The result of Neal (1996) has been extended to more general matrix input, i.e. \(p>1\) inputs of dimension \(d\), and to deep Gaussian NNs, assuming a "sequential growth" (Der and Lee, 2005) and a "joint growth" (Matthews et al., 2018) of the width over the NN's layers. These results provide asymptotic or qualitative Gaussian approximations of Gaussian NNs, as they do not provide the rate at which the NN converges to the infinitely wide limit.
### Our contribution
In this paper, we consider non-asymptotic or quantitative Gaussian approximations of the NN (1), quantifying the approximation error with respect to some popular distances for (probability) distributions. To introduce our results, let \(d_{W_{1}}\) be the 1-Wasserstein distance and consider a Gaussian NN with a 1-dimensional unitary input, i.e. \(d=1\) and \(x=1\), unit weight variance, i.e. \(\sigma_{w}^{2}=1\), and no biases, i.e. \(b_{i}^{(0)}=b=0\) for any \(i\geq 1\). Under this setting, our result reads as follows: if \(\tau\in C^{2}(\mathbb{R})\) is such that \(\tau\) and its first and second derivatives are bounded above by the polynomial envelope \(a+b|x|^{\gamma}\), for \(a,b,\gamma>0\), and \(N\sim\mathcal{N}(0,\sigma^{2})\) with \(\sigma^{2}\) being the variance of the NN, then for any \(n\geq 1\)
\[d_{W_{1}}(f_{1}(n)[\tau,n^{-1/2}],N)\leq\frac{K_{\sigma^{2}}}{n^{1/2}}, \tag{2}\]
with \(K_{\sigma^{2}}\) being a constant that can be computed explicitly. The polynomial envelope assumption is not new in the study of large-width properties of Gaussian NNs (Matthews et al., 2018; Yang, 2019), and it is critical to achieve the optimal rate \(n^{-1/2}\) in the estimate (2) of the approximation error. In general, we show that an approximation analogous to (2) holds true for the Gaussian NN (1), with the approximation being with respect to the 1-Wasserstein distance, the total variation distance and the Kolmogorov-Smirnov distance. Our results rely on the use of second-order Gaussian Poincare inequalities, or simply second-order Poincare inequalities, first introduced in Chatterjee (2009) and Nourdin et al. (2009) as a powerful tool to obtain Gaussian approximation of general functionals of Gaussian stochastic processes. Here, we make use of some refinements of second-order Poincare inequalities developed in Vidotto (2020), which have the advantage of providing tight estimates of the approximation error, with (presumably) optimal rates. An extension of (2) is presented for Gaussian NNs with \(p>1\) inputs, whereas a generalization of our results to deep Gaussian NNs is discussed with respect to the "sequential growth" and the "joint growth" of the width over the NN's layers.
### Related work
While there exists a vast literature on infinitely wide limits of Gaussian NNs, as well as their corresponding asymptotic approximations, only a few recent works have investigated non-asymptotic approximations of Gaussian NNs. To the best of our knowledge, the work of Eldan et al. (2021) is the first to consider the problem of non-asymptotic approximations of Gaussian NNs, focusing
on NNs with Gaussian distributed weights \(w_{i,j}\)'s and Rademacher distributed weights \(w_{i}\)'s. For such a class of NNs, they established a quantitative CLT in an infinite-dimensional functional space, metrized with the Wasserstein distance, providing rates of convergence to a Gaussian stochastic process. For deep Gaussian NNs (Der and Lee, 2005; Matthews et al., 2018), the work of Basteri and Trevisan (2022) first established a quantitative CLT in the 2-Wasserstein distance, providing the rate at which a deep Gaussian NN converges to its infinitely wide limit. Such a result relies on basic properties of the Wasserstein distance, which allow for quantitatively tracking the hidden layers and yield a proof by induction, with the triangular inequality being applied to obtain independence from the previous layers. See Favaro et al. (2022) for an analogous result in the sup-norm distance. Our work is close to that of Basteri and Trevisan (2022), in the sense that we deal with NNs for which all the weights are initialized according to Gaussian distributions, and we consider their approximations through Gaussian RVs. The novelty of our work lies in the use of second-order Poincare inequalities, which allow reducing the problem to a direct computation of the gradient and Hessian of the NN, and provide estimates of the approximation error with optimal rate, and tight constants, with respect to distances other than the sole Wasserstein distance. This is the first work to make use of second-order Poincare inequalities as a tool to obtain non-asymptotic Gaussian approximations of Gaussian NNs.
### Organization of the paper
The paper is structured as follows. In Section 2 we present an overview on second-order Poincare inequalities, recalling some of the main results of Vidotto (2020) that are critical to prove our non-asymptotic Gaussian approximations of Gaussian NNs. Section 3 contains the non-asymptotic Gaussian approximation of the NN (1), as well as its extension for the NN (1) with \(p>1\) inputs, where Section 4 contains some numerical illustrations of our approximations. In Section 5 we discuss the extension of our results to deep Gaussian NNs, and we present some directions for future research.
## 2 Preliminaries on second-order Poincare inequalities
Let \((\Omega,\mathcal{F},\mathbb{P})\) be a generic probability space on which all the RVs are assumed to be defined. We denote by \(\perp\!\!\!\perp\) the independence between RVs, and we make use of the acronym "iid" to refer to RVs that are independent and identically distributed and by \(\|X\|_{L^{q}}:=(\mathbb{E}[X^{q}])^{1/q}\) the \(L^{q}\) norm of the RV \(X\). In this work, we consider some popular distances between (probability) distributions of real-valued RVs. Let \(X\) and \(Y\) be two RVs in \(\mathbb{R}^{d}\), for some \(d\geq 1\). We denote by \(d_{W_{1}}\) the 1-Wasserstein distance, that is,
\[d_{W_{1}}(X,Y)=\sup_{h\in\mathscr{H}}|\mathbb{E}[h(X)]-\mathbb{E}[h(Y)]|,\]
where \(\mathscr{H}\) is the class of all functions \(h:\mathbb{R}^{d}\to\mathbb{R}\) such that it holds true that \(\|h\|_{\mathrm{Lip}}\;\leq 1\), with \(\|h\|_{\mathrm{Lip}}\;=\sup_{x,y\in\mathbb{R}^{d},x\neq y}|h(x)-h(y)|/\|x-y\|_ {\mathbb{R}^{d}}\). We denote by \(d_{TV}\) the total variation distance, that is,
\[d_{TV}(X,Y)=\sup_{B\in\mathscr{B}(\mathbb{R}^{d})}|\mathbb{P}(X\in B)-\mathbb{P}(Y\in B)|,\]
where \(\mathscr{B}\left(\mathbb{R}^{d}\right)\) is the Borel \(\sigma\)-field of \(\mathbb{R}^{d}\). Finally, we denote by \(d_{KS}\) the Kolmogorov-Smirnov distance, i.e.
\[d_{KS}(X,Y)=\sup_{z_{1},\ldots,z_{d}\in\mathbb{R}}|\mathbb{P}\left(X\in\times_ {i=1}^{d}\left(-\infty,z_{i}\right]\right)-\mathbb{P}\left(Y\in\times_{i=1}^{ d}\left(-\infty,z_{i}\right]\right)|.\]
We recall the following interplays between some of the above distances: i) \(d_{KS}(\cdot,\cdot)\leq d_{TV}(\cdot,\cdot)\); ii) if \(X\) is a real-valued RV and \(N\sim\mathcal{N}(0,1)\) is the standard Gaussian RV then \(d_{KS}(X,N)\leq 2\sqrt{d_{W_{1}}(X,N)}\).
Second-order Poincare inequalities provide a useful tool for Gaussian approximation of general functionals of Gaussian fields (Chatterjee, 2009; Nourdin et al., 2009). See also Nourdin and Peccati (2012) and references therein for a detailed account. For our work, it is useful to recall some results developed in Vidotto (2020), which provide improved versions of the second-order Poincare inequality first introduced in Chatterjee (2009) for random variables and then extended in Nourdin et al. (2009) to general infinite-dimensional Gaussian fields. Let \(N\sim\mathcal{N}(0,1)\). Second-order Poincare inequalities can be seen as an iteration of the so-called Gaussian Poincare inequality, which states that
\[\mathrm{Var}[f(N)]\leq\mathbb{E}[f^{\prime}(N)^{2}] \tag{3}\]
for every differentiable function \(f:\mathbb{R}\to\mathbb{R}\), a result that was first discovered in a work by Nash (1956) and then reproved by Chernoff (1981). The inequality (3) implies that if the \(L^{2}\) norm of the RV \(f^{\prime}(N)\) is small, then so are the fluctuations of the RV \(f(N)\). The first version of a second-order Poincare inequality was obtained in Chatterjee (2009), where it is proved that one can iterate (3) in order to assess the total variation distance between the distribution of \(f(N)\) and the distribution of a Gaussian RV with matching mean and variance. The precise result is stated in the following theorem.
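The inequality (3) is easy to check empirically; the following Monte Carlo sketch of ours (with \(f=\tanh\) as an arbitrary test function and sample size chosen arbitrarily) compares both sides:

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.standard_normal(1_000_000)      # samples of N ~ N(0, 1)

f = np.tanh                              # an arbitrary differentiable test function
df = lambda x: 1.0 - np.tanh(x) ** 2     # its derivative

var_f = f(z).var()                       # Monte Carlo estimate of Var[f(N)]
rhs = np.mean(df(z) ** 2)                # Monte Carlo estimate of E[f'(N)^2]
print(var_f, rhs, var_f <= rhs)          # the Poincare bound (3) holds
```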
**Theorem 2.1** (Chatterjee (2009) - second-order Poincare inequality).: _Let \(X\sim\mathcal{N}\left(0,I_{d\times d}\right)\). Take any \(f\in C^{2}(\mathbb{R}^{d})\), and \(\nabla f\) and \(\nabla^{2}f\) denote the gradient of \(f\) and Hessian of \(f\), respectively. Suppose that \(f(X)\) has a finite fourth moment, and let \(\mu=\mathbb{E}[f(X)]\) and \(\sigma^{2}=\mathrm{Var}[f(X)]\). Let \(N\sim\mathcal{N}(\mu,\sigma^{2})\) then_
\[d_{TV}(f(X),N)\leq\frac{2\sqrt{5}}{\sigma^{2}}\left\{\mathbb{E}\left[\|\nabla f (X)\|_{\mathbb{R}^{d}}^{4}\right]\right\}^{1/4}\left\{\mathbb{E}\left[\|\nabla ^{2}f(X)\|_{op}^{4}\right]\right\}^{1/4}, \tag{4}\]
_where \(\|\cdot\|_{op}\) stands for the operator norm of the Hessian \(\nabla^{2}f(X)\) regarded as a random \(d\times d\) matrix._
Nourdin et al. (2009) pointed out that the Stein-type inequalities that lead to (4) are special instances of a more general class of inequalities, which can be obtained by combining Stein's method and Malliavin calculus on an infinite-dimensional Gaussian space. In particular, Nourdin et al. (2009) obtained a general version of (4), involving functionals of arbitrary infinite-dimensional Gaussian fields. Both (4) and its generalization in Nourdin et al. (2009) are known to give suboptimal rates of convergence. This is because, in general, it is not possible to obtain an explicit computation of the expectation of the operator norm involved in the estimate of total variation distance, which leads to move further away from the distance in distribution and use bounds on the operator norm instead of computing it directly. To overcome this drawback, Vidotto (2020) adapted to the Gaussian setting an approach recently developed in Last et al. (2016) to obtain second-order Poincare inequalities for Gaussian approximation of Poisson functionals, yielding estimates of the approximation error that are (presumably) optimal. The next theorem states Vidotto (2020, Theorem 2.1) for the special case of a function \(f(X)\), with \(f\in C^{2}\left(\mathbb{R}^{d}\right)\) such that its partial derivatives have sub-exponential growth, and \(X\sim\mathcal{N}\left(0,I_{d\times d}\right)\). See Appendix A for an overview of Vidotto (2020, Theorem 2.1).
**Theorem 2.2** (Vidotto (2020) - 1-dimensional second-order Poincare inequality).: _Let \(F=f(X)\), for some \(f\in C^{2}\left(\mathbb{R}^{d}\right)\), and \(X\sim\mathcal{N}\left(0,I_{d\times d}\right)\) such that \(E[F]=0\) and \(E\left[F^{2}\right]=\sigma^{2}\). Let
\(N\sim\mathcal{N}\left(0,\sigma^{2}\right)\), then_
\[d_{M}(F,N)\leq c_{M}\sqrt{\sum_{l,m=1}^{d}\left\{\mathbb{E}\left[\left(\langle \nabla_{l,\cdot}^{2}F,\nabla_{m,\cdot}^{2}F\rangle\right)^{2}\right]\right\}^{1/ 2}\left\{\mathbb{E}\left[\left(\nabla_{l}F\nabla_{m}F\right)^{2}\right]\right\} ^{1/2}}, \tag{5}\]
_where \(\langle\cdot,\cdot\rangle\) is the scalar product, \(M\in\{TV,KS,W_{1}\},c_{TV}=\frac{4}{\sigma^{2}},c_{KS}=\frac{2}{\sigma^{2}},c _{W_{1}}=\sqrt{\frac{8}{\sigma^{2}\pi}}\) and \(\nabla_{i,\cdot}^{2}F\) is the \(i\)-th row of the Hessian matrix of \(F=f(X)\) while \(\nabla_{i}F\) is the \(i\)-th element of the gradient of \(F\)._
The next theorem generalizes Theorem 2.2 to multidimensional functionals. In particular, for any \(p>1\), the next theorem states Vidotto (2020, Theorem 2.3) for the special case of a function \((f_{1}(X),\ldots,f_{p}(X))\), with \(f_{1},\ldots,f_{p}\in C^{2}\left(\mathbb{R}^{d}\right)\) such that its partial derivatives have sub-exponential growth, and \(X\sim\mathcal{N}\left(0,I_{d\times d}\right)\). See Appendix A for a brief overview of Vidotto (2020, Theorem 2.3).
**Theorem 2.3** (Vidotto (2020) - \(p\)-dimensional second-order Poincare inequality).: _For any \(p>1\) let \((F_{1},\ldots,F_{p})=(f_{1}(X),\ldots,f_{p}(X))\), for some \(f_{1},\ldots,f_{p}\in C^{2}(\mathbb{R}^{d})\), and \(X\sim\mathcal{N}\left(0,I_{d\times d}\right)\) such that \(E\left[F_{i}\right]=0\) for \(i=1,\ldots,p\) and \(E\left[F_{i}F_{j}\right]=c_{ij}\) for \(i,j=1,\ldots,p\), with \(C=\{c_{ij}\}_{i,j=1,\ldots,p}\) being a symmetric and positive definite matrix, i.e. a variance-covariance matrix. Let \(N\sim\mathcal{N}(0,C)\), then_
\[d_{W_{1}}(F,N) \tag{6}\] \[\quad\leq 2\sqrt{p}\left\|C^{-1}\right\|_{2}\|C\|_{2}\sqrt{\sum_{i, k=1}^{p}\sum_{l,m=1}^{d}\left\{\mathbb{E}\left[\left(\langle\nabla_{l,\cdot}^{2}F_{i },\nabla_{m,\cdot}^{2}F_{i}\rangle\right)^{2}\right]\right\}^{1/2}\left\{ \mathbb{E}\left[\left(\nabla_{l}F_{k}\nabla_{m}F_{k}\right)^{2}\right]\right\} ^{1/2}}\]
_where \(\left\|\cdot\right\|_{2}\) is the spectral norm of a matrix._
## 3 Main results
In this section, we present the main result of the paper, namely a non-asymptotic Gaussian approximation of the NN (1), quantifying the approximation error with respect to the 1-Wasserstein distance, the total variation distance and the Kolmogorov-Smirnov distance. It is useful to start with the simple setting of a Gaussian NN with a 1-dimensional unitary input, i.e. \(d=1\) and \(x=1\), unit weight variance, i.e. \(\sigma_{w}^{2}=1\), and no biases, i.e. \(b_{i}^{(0)}=b=0\) for any \(i\geq 1\). That is, we consider the NN
\[F:=f_{1}(n)[\tau,n^{-1/2}]=\frac{1}{n^{1/2}}\sum_{j=1}^{n}w_{j}\tau(w_{j}^{(0 )}). \tag{7}\]
By means of a straightforward calculation, one has \(\mathbb{E}[F]=0\) and \(\mathrm{Var}[F]=\mathbb{E}_{Z\sim\mathcal{N}(0,1)}[\tau^{2}(Z)]\). As \(F\) in (7) is a function of independent standard Gaussian RVs, Theorem 2.2 can be applied to approximate \(F\) with a Gaussian RV with the same mean and variance as \(F\), quantifying the approximation error.
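To see this Gaussian behavior numerically, one can sample the NN (7) and compare the empirical moments with those of \(\mathcal{N}(0,\sigma^{2})\); a sketch of ours (the width, activation and sample sizes are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
tau = np.tanh
n, m = 256, 20_000                          # width and number of realizations

w = rng.standard_normal((m, n))             # outer weights w_j
w0 = rng.standard_normal((m, n))            # inner weights w_j^(0)
F = (w * tau(w0)).sum(axis=1) / np.sqrt(n)  # m i.i.d. samples of F in (7)

sigma2 = np.mean(tau(rng.standard_normal(1_000_000)) ** 2)  # E[tau(Z)^2]
print(F.mean(), F.var(), sigma2)            # empirical mean ~ 0, variance ~ sigma2
```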
**Theorem 3.1**.: _Let \(F\) be the NN (7) with \(\tau\in C^{2}(\mathbb{R})\) such that \(|\tau(x)|\leq a+b|x|^{\gamma}\) and \(\left|\frac{d^{l}}{dx^{l}}\tau(x)\right|\leq a+b|x|^{\gamma}\) for \(l=1,2\) and some \(a,b,\gamma\geq 0\). If \(N\sim\mathcal{N}(0,\sigma^{2})\) with \(\sigma^{2}=\mathbb{E}_{Z\sim\mathcal{N}(0,1)}[\tau^{2}(Z)]\), then for any \(n\geq 1\)_
\[d_{M}\left(F,N\right)\leq\frac{c_{M}}{\sqrt{n}}\sqrt{3(1+\sqrt{2})}\cdot\|a+b |Z|^{\gamma}\|_{L_{4}}^{2}, \tag{8}\]
_where \(Z\sim\mathcal{N}(0,1)\), \(M\in\{TV,KS,W_{1}\}\), with corresponding constants \(c_{TV}=4/\sigma^{2}\), \(c_{KS}=2/\sigma^{2}\), and \(c_{W_{1}}=\sqrt{8/\sigma^{2}\pi}\)._
Proof.: To apply Theorem 2.2, we start by computing some first and second order partial derivatives. That is,
\[\begin{cases}\frac{\partial F}{\partial w_{j}}=n^{-1/2}\tau(w_{j}^{(0)})\\ \frac{\partial F}{\partial w_{j}^{(0)}}=n^{-1/2}w_{j}\tau^{\prime}(w_{j}^{(0)})\\ \nabla_{w_{j},w_{i}}^{2}F=0\\ \nabla_{w_{j},w_{i}^{(0)}}^{2}F=n^{-1/2}\tau^{\prime}(w_{j}^{(0)})\delta_{ij}\\ \nabla_{w_{j}^{(0)},w_{i}^{(0)}}^{2}F=n^{-1/2}w_{j}\tau^{\prime\prime}(w_{j}^{(0)})\delta_{ij}\end{cases}\]
with \(i,j=1\ldots n\). Then, by a direct application of Theorem 2.2, we obtain the following preliminary estimate
\[d_{M}\left(F,N\right) \leq c_{M}\Bigg{\{}\sum_{j=1}^{n}2\left\{\mathbb{E}\left[\left( \left\langle\nabla_{w_{j}}^{2}.F,\nabla_{w_{j}^{(0)},\cdot}^{2}F\right\rangle \right)^{2}\right]\mathbb{E}\left[\left(\frac{\partial F}{\partial w_{j}} \frac{\partial F}{\partial w_{j}^{(0)}}\right)^{2}\right]\right\}^{1/2}\] \[\quad+\left\{\mathbb{E}\left[\left(\left\langle\nabla_{w_{j}}^{ 2}.F,\nabla_{w_{j},\cdot}^{2}F\right\rangle\right)^{2}\right]\mathbb{E}\left[ \left(\frac{\partial F}{\partial w_{j}}\frac{\partial F}{\partial w_{j}} \right)^{2}\right]\right\}^{1/2}\] \[\quad+\left\{\mathbb{E}\left[\left(\left\langle\nabla_{w_{j}^{ (0)},\cdot}^{2}.F,\nabla_{w_{j}^{(0)},\cdot}^{2}F\right\rangle\right)^{2} \right]\mathbb{E}\left[\left(\frac{\partial F}{\partial w_{j}^{(0)}}\frac{ \partial F}{\partial w_{j}^{(0)}}\right)^{2}\right]\right\}^{1/2}\Bigg{\}}^{1/2},\]
which can be further developed. In particular, we can write the right-hand side of the previous estimate as
\[c_{M}\bigg\{\sum_{j=1}^{n}\bigg(2\Big\{\mathbb{E}\Big[\Big(\tfrac{1}{n}w_{j}\tau^{\prime}(w_{j}^{(0)})\tau^{\prime\prime}(w_{j}^{(0)})\Big)^{2}\Big]\,\mathbb{E}\Big[\Big(\tfrac{1}{n}w_{j}\tau(w_{j}^{(0)})\tau^{\prime}(w_{j}^{(0)})\Big)^{2}\Big]\Big\}^{1/2}\] \[\quad+\Big\{\mathbb{E}\Big[\Big(\tfrac{1}{\sqrt{n}}\tau^{\prime}(w_{j}^{(0)})\Big)^{4}\Big]\,\mathbb{E}\Big[\Big(\tfrac{1}{\sqrt{n}}\tau(w_{j}^{(0)})\Big)^{4}\Big]\Big\}^{1/2}\] \[\quad+\Big\{\mathbb{E}\Big[\Big(\tfrac{1}{n}\{\tau^{\prime}(w_{j}^{(0)})\}^{2}+\tfrac{1}{n}w_{j}^{2}\{\tau^{\prime\prime}(w_{j}^{(0)})\}^{2}\Big)^{2}\Big]\,\mathbb{E}\Big[\Big(\tfrac{1}{\sqrt{n}}w_{j}\tau^{\prime}(w_{j}^{(0)})\Big)^{4}\Big]\Big\}^{1/2}\bigg)\bigg\}^{1/2}\] \[\stackrel{\mathrm{(iid)}}{=}\frac{c_{M}}{\sqrt{n}}\bigg\{2\Big\{\mathbb{E}\big[(\tau^{\prime}(Z)\tau^{\prime\prime}(Z))^{2}\big]\,\mathbb{E}\big[(\tau(Z)\tau^{\prime}(Z))^{2}\big]\Big\}^{1/2}+\Big\{\mathbb{E}\big[(\tau^{\prime}(Z))^{4}\big]\,\mathbb{E}\big[(\tau(Z))^{4}\big]\Big\}^{1/2}\] \[\quad+\Big\{\big(\mathbb{E}\big[(\tau^{\prime}(Z))^{4}\big]+2\,\mathbb{E}\big[(\tau^{\prime}(Z))^{2}(\tau^{\prime\prime}(Z))^{2}\big]+3\,\mathbb{E}\big[(\tau^{\prime\prime}(Z))^{4}\big]\big)\,3\,\mathbb{E}\big[(\tau^{\prime}(Z))^{4}\big]\Big\}^{1/2}\bigg\}^{1/2},\]
where \(Z\sim\mathcal{N}(0,1)\). Now, since \(\tau\) is polynomially bounded and the square root is an increasing function,
\[d_{M}\left(F,N\right)\leq\frac{c_{M}}{\sqrt{n}}\bigg\{2\Big\{\mathbb{E}\big[(a+b|Z|^{\gamma})^{4}\big]\,\mathbb{E}\big[(a+b|Z|^{\gamma})^{4}\big]\Big\}^{1/2}+\Big\{\mathbb{E}\big[(a+b|Z|^{\gamma})^{4}\big]\,\mathbb{E}\big[(a+b|Z|^{\gamma})^{4}\big]\Big\}^{1/2}\] \[\quad+\Big\{18\,\mathbb{E}\big[(a+b|Z|^{\gamma})^{4}\big]\,\mathbb{E}\big[(a+b|Z|^{\gamma})^{4}\big]\Big\}^{1/2}\bigg\}^{1/2}\] \[=\frac{c_{M}}{\sqrt{n}}\sqrt{3\sqrt{2}+3}\,\Big\{\mathbb{E}\big[(a+b|Z|^{\gamma})^{4}\big]\Big\}^{1/2}=\frac{c_{M}}{\sqrt{n}}\sqrt{3(1+\sqrt{2})}\,\|a+b|Z|^{\gamma}\|_{L_{4}}^{2},\]
where \(Z\sim\mathcal{N}(0,1)\).
The proof of Theorem 3.1 shows how a non-asymptotic approximation of \(F\) can be obtained by a direct application of Theorem 2.2. In particular, the estimate (8) of the approximation error \(d_{M}\left(F,N\right)\) has the optimal rate \(n^{-1/2}\) with respect to the \(1\)-Wasserstein distance, the total variation distance and the Kolmogorov-Smirnov distance. As for the constant, it depends on the variance \(\mathbb{E}_{Z\sim\mathcal{N}(0,1)}[\tau^{2}(Z)]\) of \(F\). Once the activation function \(\tau\) is specified, \(\mathbb{E}_{Z\sim\mathcal{N}(0,1)}[\tau^{2}(Z)]\) can be evaluated by means of an exact or approximate calculation, or a suitable lower bound for it can be provided.
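For instance, the constant in (8) can be evaluated in closed form from the Gaussian absolute moments \(\mathbb{E}|Z|^{p}=2^{p/2}\Gamma((p+1)/2)/\sqrt{\pi}\); a small sketch of ours (function names and the example envelope parameters are our choices):

```python
import math

def abs_moment(p: float) -> float:
    """E|Z|^p for Z ~ N(0,1): 2^{p/2} Gamma((p+1)/2) / sqrt(pi)."""
    return 2 ** (p / 2) * math.gamma((p + 1) / 2) / math.sqrt(math.pi)

def l4_norm_sq(a: float, b: float, gamma: float) -> float:
    """|| a + b|Z|^gamma ||_{L4}^2 = (E[(a + b|Z|^gamma)^4])^{1/2}."""
    fourth = sum(math.comb(4, k) * a ** (4 - k) * b ** k * abs_moment(gamma * k)
                 for k in range(5))          # binomial expansion of (a + b|Z|^g)^4
    return math.sqrt(fourth)

c = math.sqrt(3 * (1 + math.sqrt(2)))        # the universal factor in (8)
print(c * l4_norm_sq(1, 0, 0))               # tau = tanh   (a=1, b=0)
print(c * l4_norm_sq(6, 1, 3))               # tau(x) = x^3 (a=6, b=1, gamma=3)
```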
Now, we extend Theorem 3.1 to the more general case of the Gaussian NN (1), showing that the problem still reduces to an application of Theorem 2.2. In particular, it is convenient to write (1) as follows:
\[F:=\frac{1}{n^{1/2}}\sigma_{w}\sum_{j=1}^{n}w_{j}\tau(\sigma_{w} \langle w_{j}^{(0)},\mathbf{x}\rangle+\sigma_{b}b_{j}^{(0)})+\sigma_{b}b, \tag{9}\]
with \(w_{j}^{(0)}=[w_{j,1}^{(0)},\ldots,w_{j,d}^{(0)}]^{T}\) and \(w_{j}\overset{d}{=}w_{j,i}^{(0)}\overset{\text{iid}}{\sim}\ \mathcal{N}(0,1)\). We set \(\Gamma^{2}=\sigma_{w}^{2}\|\mathbf{x}\|^{2}+\sigma_{b}^{2}\), and for \(n\geq 1\) we consider a collection \((Y_{1},\ldots,Y_{n})\) of independent standard Gaussian RVs. Then, from (9) we can write
\[F\overset{d}{=}\frac{1}{n^{1/2}}\sigma_{w}\sum_{j=1}^{n}w_{j}\tau \left(\Gamma Y_{j}\right)+\sigma_{b}b.\]
As before, straightforward calculations lead to \(\mathbb{E}[F]=0\) and \(\operatorname{Var}[F]=\sigma_{w}^{2}\mathbb{E}_{Z\sim\mathcal{N}(0,1)}\left[\tau^{2}\left(\Gamma Z\right)\right]+\sigma_{b}^{2}\). As \(F\) in (9) is a function of independent standard Gaussian RVs, Theorem 2.2 can be applied to approximate \(F\) with a Gaussian RV with the same mean and variance as \(F\), quantifying the approximation error. This approximation is stated in the next theorem, whose proof is in Appendix B.
**Theorem 3.2**.: _Let \(F\) be the NN (9) with \(\tau\in C^{2}(\mathbb{R})\) such that \(|\tau(x)|\leq a+b|x|^{\gamma}\) and \(\left|\frac{d^{l}}{dx^{l}}\tau(x)\right|\leq a+b|x|^{\gamma}\) for \(l=1,2\) and some \(a,b,\gamma\geq 0\). If \(N\sim\mathcal{N}(0,\sigma^{2})\) with \(\sigma^{2}=\sigma_{w}^{2}\mathbb{E}_{Z\sim\mathcal{N}(0,1)}\left[\tau^{2}\left(\Gamma Z\right)\right]+\sigma_{b}^{2}\) and \(\Gamma=(\sigma_{w}^{2}\|\mathbf{x}\|^{2}+\sigma_{b}^{2})^{1/2}\), then for any \(n\geq 1\)_
\[d_{M}\left(F,N\right)\leq\frac{c_{M}\sqrt{\Gamma^{2}+\Gamma^{4}(2 +\sqrt{3(1+2\Gamma^{2}+3\Gamma^{4})})}\|a+b|\Gamma Z|^{\gamma}\|_{L^{4}}^{2}} {\sqrt{n}}, \tag{10}\]
_where \(Z\sim\mathcal{N}(0,1)\), \(M\in\{TV,KS,W_{1}\}\), with corresponding constants \(c_{TV}=4/\sigma^{2},c_{KS}=2/\sigma^{2},c_{W_{1}}=\sqrt{8/\sigma^{2}\pi}\)._
We observe that Theorem 3.1 can be recovered from Theorem 3.2. In particular, the estimate (8) of the approximation \(d_{M}\left(F,N\right)\) can be recovered from the estimate (10) by setting \(\sigma_{b}=0\), \(\sigma_{w}=1\) and \(\mathbf{x}=1\). As for Theorem 3.1, the constant depends on the variance \(\sigma_{w}^{2}\mathbb{E}_{Z\sim\mathcal{N}(0,1)}\left[\tau^{2}\left(\Gamma Z \right)\right]+\sigma_{b}^{2}\) of \(F\). Therefore, to apply Theorem 3.2 one needs to evaluate the variance of \(F\), by means of an exact or approximate calculation, or to provide a suitable lower bound for it, as we have discussed previously.
We conclude by presenting an extension of Theorem 3.2 to a Gaussian NN with \(p>1\) inputs \([\mathbf{x_{1}},\ldots,\mathbf{x_{p}}]^{T}\), where \(\mathbf{x_{i}}\in\mathbb{R}^{d}\) for \(i=1,\ldots,p\). In particular, we consider the NN \(F:=[F_{1},\ldots,F_{p}]^{T}\) where
\[F_{i}:=\frac{1}{n^{1/2}}\sigma_{w}\sum_{j=1}^{n}w_{j}\tau(\sigma_ {w}\langle w_{j}^{(0)},\mathbf{x_{i}}\rangle+\sigma_{b}b_{j}^{(0)})+\sigma_{b}b, \tag{11}\]
with \(w_{j}^{(0)}=[w_{j,1}^{(0)},\ldots,w_{j,d}^{(0)}]^{T}\) and \(w_{j}\overset{d}{=}w_{j,i}^{(0)}\overset{d}{=}b_{j}^{(0)}\overset{d}{=}b\overset{\text{iid}}{\sim}\mathcal{N}(0,1)\). Since the parameters are jointly distributed according to a multivariate standard Gaussian distribution, Theorem 2.3 can be applied to approximate \(F\) with a Gaussian random vector whose mean and covariance are the same as those of \(F\). The resulting estimate of the approximation error depends on the maximum and the minimum eigenvalues, i.e. \(\lambda_{1}(C)\) and \(\lambda_{p}(C)\) respectively, of the covariance matrix \(C\), whose \((i,k)\)-th entry is given by
\[\mathbb{E}[F_{i}F_{k}]=\sigma_{w}^{2}\mathbb{E}[\tau(Y_{i})\tau(Y_{k})]+\sigma _{b}^{2}, \tag{12}\]
where \(Y\sim\mathcal{N}(0,\sigma_{w}^{2}X^{T}X+\sigma_{b}^{2}\mathbf{1}\mathbf{1}^{T})\), with \(\mathbf{1}\) being the all-one vector of dimension \(p\) and \(X\) being the \(d\times p\) matrix whose columns are the inputs \(\{\mathbf{x_{i}}\}_{i\in[p]}\). This approximation is stated in the next theorem, whose proof is in Appendix C.
**Theorem 3.3**.: _Let \(F=[F_{1},\ldots,F_{p}]^{T}\) with \(F_{i}\) being the NN (11), for \(i=1,\ldots,p\), with \(\tau\in C^{2}(\mathbb{R})\) such that \(|\tau(x)|\leq a+b|x|^{\gamma}\) and \(\left|\frac{d^{l}}{dx^{l}}\tau(x)\right|\leq a+b|x|^{\gamma}\) for \(l=1,2\) and some \(a,b,\gamma\geq 0\). Furthermore, let \(C\) be the covariance matrix of \(F\), whose entries are given in (12), and define \(\Gamma_{i}^{2}=\sigma_{w}^{2}||\mathbf{x_{i}}||^{2}+\sigma_{b}^{2}\) and \(\Gamma_{ik}=\sigma_{w}^{2}\sum_{j=1}^{d}|x_{ij}x_{kj}|+\sigma_{b}^{2}\). If \(N=[N_{1},\cdots,N_{p}]^{T}\sim\mathcal{N}(0,C)\), then for any \(n\geq 1\)_
\[d_{W_{1}}\left(F,N\right)\leq 2\sigma_{w}^{2}\tilde{K}\frac{\lambda_{1}(C)}{ \lambda_{p}(C)}\sqrt{\frac{p}{n}}, \tag{13}\]
_where \(\lambda_{1}(C)\) and \(\lambda_{p}(C)\) are the maximum and the minimum eigenvalues of \(C\), respectively, and where_
\[\tilde{K}=\bigg{\{}\sum_{i,k=1}^{p}(\Gamma_{i}^{2}+\sqrt{3(1+2\Gamma_{i}^{2}+3 \Gamma_{i}^{4})}\Gamma_{ik}^{2}+2\Gamma_{i}^{2}\Gamma_{ik})\|a+b|\Gamma_{i}Z| ^{\gamma}\|_{L^{4}}^{2}\|a+b|\Gamma_{k}Z|^{\gamma}\|_{L^{4}}^{2}\bigg{\}}^{1/2},\]
_with \(Z\sim\mathcal{N}(0,1)\)._
The estimate (13) of the approximation error \(d_{W_{1}}\left(F,N\right)\) depends on the spectral norms of the covariance matrix \(C\) and the precision matrix \(C^{-1}\). Such spectral norms must be computed explicitly for the specific activation \(\tau\) in use, or at least bounded from above, in order to apply Theorem 3.3. This boils down to finding the greatest eigenvalue \(\lambda_{1}\) and the smallest eigenvalue \(\lambda_{p}\) of the matrix \(C\), which can be done for a broad class of activations with classical optimization techniques, or at least bounding \(\lambda_{1}\) from above and \(\lambda_{p}\) from below (Diaconis and Stroock, 1991; Guattery et al., 1999).
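For concreteness, the entries (12) and the extreme eigenvalues of \(C\) can be estimated numerically; a Monte Carlo sketch of ours (inputs, \(\tau\), variances and sample size are arbitrary choices, and the covariance of \(Y\) may be singular, which is still valid for sampling):

```python
import numpy as np

rng = np.random.default_rng(2)
tau = np.tanh
sigma_w2, sigma_b2, p, d = 1.0, 1.0, 4, 3

X = rng.choice([-1.0, 1.0], size=(d, p))         # p inputs of dimension d as columns
K = sigma_w2 * X.T @ X + sigma_b2                # covariance of Y in (12)
Y = rng.multivariate_normal(np.zeros(p), K, size=200_000)

C = sigma_w2 * (tau(Y).T @ tau(Y)) / len(Y) + sigma_b2   # Monte Carlo estimate of (12)
lam = np.linalg.eigvalsh(C)                      # eigenvalues in ascending order
print(lam[-1], lam[0])                           # lambda_1(C) and lambda_p(C) in (13)
```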
## 4 Numerical illustrations
In this section, we present a brief simulation study for two specific choices of the activation function: i) \(\tau(x)=\tanh x\), which is polynomially bounded with parameters \(a=1\) and \(b=0\); ii) \(\tau(x)=x^{3}\), which is polynomially bounded with parameters \(a=6\), \(b=1\) and \(\gamma=3\). Each of the plots below is obtained as follows: for a fixed width of \(n=k^{3}\), with \(k\in\{1,\cdots,16\}\), we simulate 5000 points from a shallow NN as in Theorem 3.1 to produce an estimate of the distance between the NN and a Gaussian RV with mean \(0\) and variance \(\sigma^{2}\), which is estimated by means of a Monte-Carlo approach. Estimates of the KS and TV distances are produced by means of the functions _KolmogorovDist_ and _TotVarDist_ from the package **distrEx** by Ruckdeschel et al. (2006), while those of the 1-Wasserstein distance are obtained using the function _wasserstein1d_ from the package **transport** by Schuhmacher et al. (2022). We repeat this procedure 500 times for every fixed \(n=k^{3}\) with \(k\in\{1,\cdots,16\}\), compute the sample mean (black dots) and the 2.5-th and the 97.5-th sample percentiles (red dashed lines), and compare these estimates with the theoretical bound given by Theorem 3.1 (blue line).
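A Python analogue of this procedure (the study above used R packages; this scipy-based sketch of ours is purely illustrative, not the authors' exact pipeline):

```python
import numpy as np
from scipy.stats import ks_2samp, wasserstein_distance

rng = np.random.default_rng(3)
tau = np.tanh
sigma = np.sqrt(np.mean(tau(rng.standard_normal(1_000_000)) ** 2))  # sqrt of E[tau(Z)^2]

def sample_nn(n: int, m: int = 2000) -> np.ndarray:
    """Draw m independent realizations of the shallow NN of Theorem 3.1."""
    w = rng.standard_normal((m, n))
    w0 = rng.standard_normal((m, n))
    return (w * tau(w0)).sum(axis=1) / np.sqrt(n)

for k in (1, 4, 8, 16):
    n = k ** 3
    F = sample_nn(n)
    G = rng.normal(0.0, sigma, size=2000)        # reference Gaussian sample
    print(n, wasserstein_distance(F, G), ks_2samp(F, G).statistic)
```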
The plots confirm that the distance between a shallow NN and an arbitrary Gaussian RV, with the same mean and variance, is asymptotically bounded from above by \(n^{-1/2}\) and that the approximation gets better and better as \(n\to\infty\). This is evident in the case \(\tau(x)=x^{3}\), where there is a clear decay between \(n=1\) and \(n=1000\). This behaviour does not show up for \(\tau(x)=\tanh x\), since \(\tanh x\sim x\), for \(x\to 0\), and Gaussian RVs are more likely to attain values in a neighbourhood of zero.
Figure 1: Estimates of the Kolmogorov-Smirnov distance for a Shallow NN of varying width \(n=k^{3}\), \(k\in\{1,\cdots,16\}\), with \(\tau(x)=\tanh x\) (left) and \(\tau(x)=x^{3}\) (right). The blue line is the theoretical bound of Theorem 3.1, the black dots are sample means of the Monte-Carlo sample, while the red-dashed lines represent a \(95\%\) sample confidence interval.
Figure 2: Estimates of the Total Variation distance for a Shallow NN of varying width \(n=k^{3}\), \(k\in\{1,\cdots,16\}\), with \(\tau(x)=\tanh x\) (left) and \(\tau(x)=x^{3}\) (right).
Figure 3: Estimates of the 1-Wasserstein distance for a Shallow NN of varying width \(n=k^{3}\), \(k\in\{1,\cdots,16\}\), with \(\tau(x)=\tanh x\) (left) and \(\tau(x)=x^{3}\) (right).
## 5 Discussion
We introduced some non-asymptotic Gaussian approximations of Gaussian NNs, quantifying the approximation error with respect to the 1-Wasserstein distance, the total variation distance and the Kolmogorov-Smirnov distance. As a novelty, our work relies on the use of second-order Poincare inequalities, which lead to estimates of the approximation error with optimal rate and tight constants. This is the first work to make use of second-order Poincare inequalities for non-asymptotic Gaussian approximations of Gaussian NNs. For a Gaussian NN with a single input, the estimate in Theorem 3.2 requires to evaluate or estimate the variance \(\sigma^{2}\) of the NN, whereas for a Gaussian NN with \(p>1\) inputs, the estimate in Theorem 3.3 requires to evaluate or estimate the extreme eigenvalues of the covariance matrix \(C\). Our approach based on second-order Poincare inequalities remains valid in the more general setting of deep Gaussian NNs. Both Theorem 3.2 and Theorem 3.3 can be extended to deep Gaussian NNs, at the cost of more involved algebraic calculations, as well as more involved estimates of the approximation errors. For instance, for an input \(\mathbf{x}\in\mathbb{R}^{d}\) one may consider a deep Gaussian NN with \(L\geq 2\) layers, i.e.
\[f_{\mathbf{x}}^{(L)}(n)[\tau,n^{-1/2}]=\sigma_{b}b^{(L)}+\frac{\sigma_{w}}{n^{1/2}}\sum_{j=1}^{n}w_{j}^{(L)}\tau\big(f_{\mathbf{x},j}^{(L-1)}(n)[\tau,n^{-1/2}]\big), \tag{14}\]
with
\[f_{\mathbf{x},j}^{(1)}(n)[\tau,n^{-1/2}]=\sigma_{w}\langle w_{j}^{(0)},\mathbf{x}\rangle+\sigma_{b}b_{j}^{(0)}, \tag{15}\]
and apply Theorem 2.2 to \(f_{\mathbf{x}}^{(L)}(n)[\tau,n^{-1/2}]\) as defined in (14) and (15). Such an application implies dealing with complicated expressions of the gradient and the Hessian which, however, is a purely algebraic problem.
Related to the choice of the activation, one can also try to relax the hypothesis of polynomial boundedness and use an arbitrary \(\tau\in C^{2}(\mathbb{R})\). There is nothing wrong in doing so, as Corollaries 2.2 and 2.3 still apply, with the only difference that the bound would be less explicit than the one we found here. Furthermore, one could also think about relaxing the hypothesis to include \(C^{1}\) or just continuous activations, like the famous ReLU function (i.e., \(\tau(x)=\max\{0,x\}\)), which is excluded from our analysis. Some results in this direction can be found in Eldan et al. (2021), though using Rademacher weights for the hidden layer. In this regard, we tried to derive a specific bound for the ReLU function by applying Theorem 2.2 to a sequence of smooth approximating functions and then passing to the limit. In particular, we approximated the ReLU function with a sequence of smooth functions, applied Theorem 2.2 to a generic element of the sequence using the 1-Wasserstein distance, and obtained a bound that depends on the approximation parameter. The idea would then have been to take the limit of this bound and hopefully obtain a non-trivial bound, but that was not the case, as the limit exploded. The same outcome was found using the SAU approximating sequence, i.e.
\[H(m,x):=\frac{1}{m\sqrt{2\pi}}\exp\biggl{\{}-\frac{1}{2}m^{2}x^{2}\biggr{\}}+ \frac{x}{2}+\frac{x}{2}\operatorname{erf}\left\{\frac{mx}{\sqrt{2}}\right\},\]
where \(\operatorname{erf}\left(\cdot\right)\) denotes the error function. This fact probably indicates the impossibility of applying the results of Vidotto (2020) in the context of continuous activation functions such as the ReLU function, and the necessity to come up with new results on second-order Poincare inequalities to fill this gap. These results would not be trivial at all, since Theorem A.2 needs each \(F_{1},\ldots,F_{d}\) to be in \(\mathbb{D}^{2,4}\), and so two degrees of smoothness are required. This is not "the fault" of Vidotto (2020), but it is due to the intrinsic character of the equation \(f^{\prime\prime}(x)-xf^{\prime}(x)=h(x)-Eh(Z)\) with \(Z\sim\mathcal{N}(0,1)\) in dimension \(p\geq 2\).
## Acknowledgements
Stefano Favaro is grateful to Professor Larry Goldstein for having introduced him to second-order Poincare inequalities, and to Professor Dario Trevisan for the many stimulating conversations on Gaussian NNs. Stefano Favaro received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme under grant agreement No 817257. Stefano Favaro is also affiliated to IMATI-CNR "Enrico Magenes" (Milan, Italy).
2306.05211 | Boosting-based Construction of BDDs for Linear Threshold Functions and Its Application to Verification of Neural Networks | Yiping Tang, Kohei Hatano, Eiji Takimoto | 2023-06-08T14:09:38Z | http://arxiv.org/abs/2306.05211v1

# Boosting-based Construction of BDDs for Linear Threshold Functions and Its Application to Verification of Neural Networks
###### Abstract
Understanding the characteristics of neural networks is important but difficult due to their complex structures and behaviors. Some previous work proposes to transform neural networks into equivalent Boolean expressions and apply verification techniques for characteristics of interest. This approach is promising since rich results of verification techniques for circuits and other Boolean expressions can be readily applied. The bottleneck is the time complexity of the transformation. More precisely, (i) each neuron of the network, i.e., a linear threshold function, is converted to a Binary Decision Diagram (BDD), and (ii) they are further combined into some final form, such as Boolean circuits. For a linear threshold function with \(n\) variables, an existing method takes \(O(n2^{\frac{n}{2}})\) time to construct an ordered BDD of size \(O(2^{\frac{n}{2}})\) consistent with some variable ordering. However, it is non-trivial to choose a variable ordering producing a small BDD among \(n!\) candidates.
We propose a method to convert a linear threshold function to a specific form of a BDD based on the boosting approach in the machine learning literature. Our method takes \(O(2^{n}\text{poly}(1/\rho))\) time and outputs BDD of size \(O(\frac{n^{2}}{\rho^{4}}\ln\frac{1}{\rho})\), where \(\rho\) is the margin of some consistent linear threshold function. Our method does not need to search for good variable orderings and produces a smaller expression when the margin of the linear threshold function is large. More precisely, our method is based on our new boosting algorithm, which is of independent interest. We also propose a method to combine them into the final Boolean expression representing the neural network. In our experiments on verification tasks of neural networks, our methods produce smaller final Boolean expressions, on which the verification tasks are done more efficiently.
Keywords: Convolutional Neural Network, Binary Decision Diagram, Boosting, Verification.
## 1 Introduction
Interpretability of Neural Networks (NNs) has been relevant since their behaviors are complex to understand. Among many approaches to improve interpretability, some results apply verification techniques of Boolean functions to understand NNs, where NNs are represented as an equivalent Boolean function and then various verification methods are used to check criteria such as robustness [10]Liu, Malon, Xue, and Kruus, Mangal et al.(2019)Mangal, Nori, and Orso, [20]Weng, Zhang, Chen, Yi, Su, Gao, Hsieh, and Daniel, Yu et al.(2019)Yu, Qin, Liu, Zhao, Wang, and Chen, Zheng et al.(2016)Zheng, Song, Leung, and Goodfellow. This
approach is promising in that rich results of Boolean function verification can be readily applied. The bottleneck, however, is to transform a NN into some representation of the equivalent Boolean function.
A structured way of transforming NNs to Boolean function representations is proposed by [14, 15]. They proposed (i) to transform each neuron, i.e., a linear threshold function, into a Binary Decision Diagram (BDD) and then (ii) to combine BDDs into a final Boolean function representations such as Boolean circuits. In particular, the bottleneck is the transformation of a linear threshold function to a BDD. To do this, they use the transformation method of [14]. The method is based on dynamic programming, and its time complexity is \(O(n2^{\frac{n}{2}})\) and the size of resulting BDD is \(O(2^{\frac{n}{2}})\), where \(n\) is the number of the variables. In addition, the method requires a fixed order of \(n\) variables as an input and outputs the minimum BDD consistent with the order. Thus, to obtain the minimum BDD, it takes \(O(n!n2^{\frac{n}{2}})\) time by examining \(n!\) possible orderings. Even if we avoid the exhaustive search of orderings, it is non-trivial to choose a good ordering.
In this paper, we propose an alternative method to obtain a specific form of BDD representation (named Aligned Binary Decision Diagram, ABDD) of a linear threshold function. Our approach is based on _Boosting_, a framework of machine learning which combines base classifiers into a better one. More precisely, our method is a modification of the boosting algorithm of Mansour and McAllester [16]. Given a set of labeled instances of a linear threshold function, their algorithm constructs a BDD that is consistent with the instances in a top-down greedy way. The algorithm can be viewed as a combination of greedy decision tree learning and a process of merging nodes. Given a linear threshold function \(f(x)=\sigma(w\cdot x+b)\) where \(\sigma\) is the step function, we can apply the algorithm of Mansour and McAllester by feeding all \(2^{n}\) possible labeled instances of \(f\) and obtain a BDD representation of \(f\) of the size \(O(\frac{n^{2}}{\rho^{4}}\ln\frac{1}{\rho})\) in time \(O(2^{n}\text{poly}(1/\rho))\), where \(\rho\) is the margin of \(f\), defined as \(\rho=\min_{x\in\{-1,1\}^{n}}|w\cdot x+b|/\|w\|_{1}\). An advantage of the method is that the resulting BDD is small if the linear threshold function has a large margin. Another merit is that the method does not require a variable ordering as an input. However, in our initial investigation, we observe that the algorithm is not efficient enough in practice. Our algorithm, in fact, a boosting algorithm, is obtained by modifying their algorithm so that we only use one variable (base classifier) in each layer. We show that our modification still inherits the same theoretical guarantees as Mansour and McAllester's. Furthermore, surprisingly, the small change makes the merging process more effective and produces much smaller BDDs in practice. Our modification might look easy but is non-trivial in a theoretical sense. To achieve the same theoretical guarantee, we introduce a new information-theoretic criterion to choose variables that is different from the previous work. That is one of our technical contributions.
In our experiments on verification tasks of Convolutional Neural Networks (CNNs), by following the same procedures as [14, 15], we construct smaller BDDs and resulting Boolean representations of CNNs faster than in previous work, thus contributing to more efficient verification.
This paper is organized as follows: Section 2 overviews the preliminaries of binary NN (BNN), BDD, Ordered BDD (OBDD), and Aligned BDD (ABDD). Section 3 and 4 detail our proposed method to construct ABDD. Section 5 details the construction of the Boolean circuit and SDD. Section 6 handles the experimental results with analysis, followed by the conclusion in Section 7.
**Related work.** The work in [Narodytska et al.(2018)Narodytska, Kasiviswanathan, Ryzhyk, Sagiv, and Walsh] proposed a precise Boolean encoding of BNNs that allows easy network verification. However, it only works with small-sized networks. [Shih et al.(2019)Shih, Darwiche, and Choi] leveraged the Angluin-style learning algorithm to convert the BNN (the weights and input are binarized as \(\{-1,1\}\)) and OBDD into Conjunctive Normal Form (CNF) and then used a Boolean Satisfiability (SAT) solver to verify the equivalence of the produced CNF. However, they modified the OBDD several times and utilized limited binary network weights. [14] suggested a method to convert a linear threshold classifier with real-valued weights into an OBDD. However, their approach has time complexity \(O(n!n2^{\frac{n}{2}})\) and OBDD size complexity \(O(2^{\frac{n}{2}})\) via searching over the full set of orderings.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Method & DD type & Size & Time \\ \hline
[14] & OBDD & \(O(2^{\frac{n}{2}})\) & \(O(n!n2^{\frac{n}{2}})\) \\ \hline
[16] & BDD & \(O(\frac{n}{\rho^{4}}\ln\frac{1}{\rho})\) & \(O(2^{n}\text{poly}(1/\rho))\) \\ \hline Ours & ABDD & \(O(\frac{n}{\rho^{4}}\ln\frac{1}{\rho})\) & \(O(2^{n}\text{poly}(1/\rho))\) \\ \hline \hline (cf. [14, 15]Shih, Darwiche, and Choi]) & OBDD & \(O(nW)\) & \(O(nW)\) \\ \hline \end{tabular}
\end{table}
Table 1: Time and size for several methods to convert to DDs from a given linear threshold function (LTF, for short) of margin \(\rho\). The fourth result is only for LTFs with integer weights whose \(L2\)-norm is \(W\).
This search cost increases exponentially as \(n\) becomes larger. Still, [Narodytska et al.(2018)Narodytska, Kasiviswanathan, Ryzhyk, Sagiv, and Walsh, Shih et al.(2019)Shih, Darwiche, and Choi, Chan and Darwiche(2003)] can only handle NN weights of small dimension, and the resulting large Boolean expression was represented as a Sentential Decision Diagram (SDD), which has enormous time complexity. Moreover, [Chorowski and Zurada(2011)] proposed a rule extraction method inspired by the "rule extraction as learning" approach to express a NN as a Reduced Ordered DD (RODD), which has time complexity \(O(n2^{2n})\).
## 2 Preliminaries
### Binary Neural Network
A binary neural network (BNN) is a variant of the standard NN with binary inputs and outputs [Bshouty et al.(1998)Bshouty, Tamon, and Wilson]. In this paper, each neural unit, with a step activation function \(\sigma\), is formulated as follows:
\[\sigma(\sum_{i}x_{i}w_{i}+b)=\begin{cases}1,&\sum_{i}x_{i}w_{i}+b\geq 0\\ -1,&\text{otherwise}\end{cases} \tag{1}\]
where \(x\in\{-1,1\}^{n}\), \(w\in\mathbb{R}^{n}\) and \(b\in\mathbb{R}\) are the input, the weight vector and the bias of this neural unit, respectively.
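A direct transcription of the unit (1) in Python (a sketch; function and variable names are ours):

```python
import numpy as np

def neural_unit(x: np.ndarray, w: np.ndarray, b: float) -> int:
    """Binary neuron (1): step activation with inputs and output in {-1, 1}."""
    return 1 if float(np.dot(x, w) + b) >= 0 else -1

x = np.array([1, -1, 1])
print(neural_unit(x, np.array([0.5, -0.2, 0.1]), -0.3))  # -> 1
```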
### Definition of BDD, OBDD and ABDD
A binary decision diagram (BDD) \(T\) is defined as a tuple \(T=(V,E,l)\) with the following properties: (1) \((V,E)\) is a directed acyclic graph with a root and two leaves, where \(V\) is the set of nodes and \(E\) is the set of edges such that \(E=E_{-}\cup E_{+}\), \(E_{-}\cap E_{+}=\varnothing\). Elements of \(E_{+}\) and \(E_{-}\) are called \(+\)-edges and \(-\)-edges, respectively. Let \(L=\{\text{0-leaf, 1-leaf}\}\subset V\) be the set of leaves. For each \(v\in V\setminus L\), there are two child nodes \(v^{-},v^{+}\in V\) such that \((v,v^{-})\in E_{-}\) and \((v,v^{+})\in E_{+}\). (2) \(l\) is a function from \(V\setminus L\) to \([n]\).
Given an instance \(x\in\{-1,1\}^{n}\) and a BDD \(T\), we define the corresponding path \(P(x)=(v_{0},v_{1},\ldots,v_{k-1},v_{k})\in V^{*}\) over \(T\) from the root to a leaf as follows: (1) \(v_{0}\) is the root. (2) for any \(j=0,\ldots k-1\), we have \((v_{j},v_{j+1})\in E_{+}\Leftrightarrow x_{l(v_{j})}=1\) and \((v_{j},v_{j+1})\in E_{-}\Leftrightarrow x_{l(v_{j})}=-1\). (3) \(v_{k}\) is a leaf node. We say that an instance \(x\in\{-1,1\}^{n}\) reaches node \(u\) in \(T\), if \(P(x)\) contains \(u\). Then, a BDD \(T\) naturally defines the following function \(h_{T}:\{-1,1\}^{n}\rightarrow\{-1,1\}\) such that
\[h_{T}(x)\triangleq\begin{cases}-1,&\text{$x$ reaches $0$-leaf}\\ 1,&\text{$x$ reaches $1$-leaf}.\end{cases} \tag{2}\]
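These path semantics are straightforward to implement; the following sketch (with a hypothetical `Node` class of our own) evaluates \(h_{T}(x)\) by walking from the root along \(+\)- or \(-\)-edges:

```python
class Node:
    """A BDD node: `var` is the index l(v) it tests; `pos`/`neg` are the
    children reached on x[var] == +1 / -1. Leaves carry a `value` instead."""
    def __init__(self, var=None, pos=None, neg=None, value=None):
        self.var, self.pos, self.neg, self.value = var, pos, neg, value

def evaluate(root, x):
    """Follow the path P(x) from the root to a leaf; return h_T(x) in {-1, +1}."""
    v = root
    while v.value is None:            # internal node: test its variable
        v = v.pos if x[v.var] == 1 else v.neg
    return v.value

# Example: a one-variable BDD computing h(x) = x[0].
leaf0, leaf1 = Node(value=-1), Node(value=1)
root = Node(var=0, pos=leaf1, neg=leaf0)
print(evaluate(root, [1]))   # 1
print(evaluate(root, [-1]))  # -1
```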
Given a BDD \(T=(V,E,l)\), we define the depth of a node \(u\in V\) as the length of the longest path from the root to \(u\). An ordered BDD (OBDD) \(T=(V,E,l)\) is a BDD satisfying an additional property: there is a strict total order \(<_{[n]}\) on \([n]\) such that for any path \(P=(v_{0},\ldots,v_{k})\) from the root to a leaf, and any nodes \(v_{i}\) and \(v_{j}\) \((i<j<k)\), \(l(v_{i})<_{[n]}l(v_{j})\). An Aligned BDD (ABDD) \(T=(V,E,l)\) is defined as a BDD satisfying that for any nodes \(u,v\in V\setminus L\) with the same depth, \(l(u)=l(v)\). We employ \(v_{i,j}\) to denote the positional information of a node in the BDD, where \(j\) represents the depth of the node and \(i\) represents its position at depth \(j\).
BDD, OBDD and ABDD are illustrated in Figure 1.
### Instance-based Robustness (IR), Model-based Robustness (MR) and Sample-based Robustness (SR)
Robustness is a fundamental property of a neural network, representing its tolerance to noise or white-box attacks. For binary input images, robustness \(k\) means that at least \(k\) pixels must be flipped from 0 to 1 or from 1 to 0 before the neural network's output changes.
We define the IR and MR of a network as follows.
**Definition 1**.: _(Instance-based Robustness) [Shi et al.(2020)Shi, Shih, Darwiche, and Choi] Consider a classification function \(f:\{-1,1\}^{n}\rightarrow\{-1,1\}\) and a given instance \(x\). The robustness of the classification of \(x\) by \(f\), denoted by \(r_{f}(x)\), is defined, provided \(f\) is not a trivial function (constantly \(True\) or \(False\)), as_
\[r_{f}(x)=\min_{x^{\prime};f(x)\neq f(x^{\prime})}dis(x,x^{\prime}) \tag{3}\]
_where \(dis(x,x^{\prime})\) denotes the Hamming distance between \(x\) and \(x^{\prime}\)._
**Definition 2**.: _(Model-based Robustness) [20, 21, 22]. The Model-based Robustness of \(f\) is defined as:_
\[MR(f)=\frac{1}{2^{n}}\sum_{x}r_{f}(x). \tag{4}\]
However, computing MR over the full input space is not practically meaningful. In practical applications, robustness validation based on sample data is common. Here, we regard the samples in the dataset as instances randomly drawn from the full input space under the uniform distribution. Then, we have the following definition.
**Definition 3**.: _(Sample-based Robustness, SR) Consider a classification function \(f:\left\{-1,1\right\}^{n}\rightarrow\left\{-1,1\right\}\). Given a sample \(S\) drawn under the uniform distribution from \(\{-1,1\}^{n}\), the Sample-based Robustness of \(f\) is defined as:_
\[SR(f)=\frac{1}{|S|}\sum_{x\in S}r_{f}(x). \tag{5}\]
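For intuition, both quantities can be checked by brute force on small inputs; the sketch below (our own naive implementation, exponential in \(n\)) computes \(r_{f}(x)\) and \(SR(f)\) directly from Definitions 1 and 3:

```python
import itertools

def robustness(f, x):
    """Instance-based robustness r_f(x) by brute force (Definition 1);
    only feasible for small n."""
    n = len(x)
    best = n + 1   # stays n+1 only for trivial f
    for xp in itertools.product([-1, 1], repeat=n):
        if f(list(xp)) != f(x):
            best = min(best, sum(a != b for a, b in zip(x, xp)))
    return best

def sample_robustness(f, sample):
    """Sample-based robustness SR(f) (Definition 3): mean of r_f over S."""
    return sum(robustness(f, x) for x in sample) / len(sample)

# Example: majority vote on 3 bits. Flipping one bit of (1,1,1) never
# changes the output, but flipping two does, so r_f = 2.
maj = lambda x: 1 if sum(x) > 0 else -1
print(robustness(maj, [1, 1, 1]))                        # 2
print(sample_robustness(maj, [[1, 1, 1], [1, 1, -1]]))   # (2 + 1) / 2 = 1.5
```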
### Overview of our method
In Section 3, we propose an algorithm that constructs an ABDD whose training error is small with respect to a given sample of some target Boolean function. Our algorithm is based on boosting, an effective approach in machine learning that constructs a more accurate classifier by combining "slightly accurate" classifiers. In Section 4, we apply our boosting algorithm to find an ABDD equivalent to a given linear threshold function. Under the natural assumption that the linear threshold function has a large "margin", we show that the size of the resulting ABDD is small. In Section 5, we show how to convert a given BNN to an equivalent Boolean expression suitable for verification tasks. More precisely, (i) each neural unit is converted to an equivalent ABDD by applying our boosting algorithm, (ii) each ABDD is further converted to a Boolean circuit, and (iii) all circuits are combined into the final circuit, which is equivalent to the given BNN. Furthermore, for a particular verification task, we convert the final circuit to an equivalent sentential decision diagram (SDD).
## 3 Boosting
### Problem Setting
Boosting is an approach to constructing a strongly accurate classifier by combining weakly accurate classifiers. We assume some unknown target function \(f:\left\{-1,1\right\}^{n}\rightarrow\left\{-1,1\right\}\). Given a sample \(S=((x_{1},f(x_{1})),\ldots,(x_{m},f(x_{m})))\in(\{-1,1\}^{n}\times\{-1,1\})^{m}\) of \(m\) instances labeled by \(f\) and a precision parameter \(\varepsilon\), we want to find a classifier \(g:\left\{-1,1\right\}^{n}\rightarrow\left\{-1,1\right\}\) such that its training error \(\Pr_{U}\{g(x)\neq f(x)\}\leq\varepsilon\), where \(U\) is the uniform distribution over \(S\). We are also given a set \(\mathcal{H}\) of base classifiers from \(\{-1,1\}^{n}\) to \(\{-1,1\}\). We adopt the following assumption, which is standard in the boosting literature [Mansour and McAllester(2002)].

Figure 1: Examples of BDD (A), OBDD (B) and ABDD (C). To express the same linear threshold function: in BDD form, nodes at the same depth can be labeled by different variables, so the number of variables does not limit the depth of the BDD; in OBDD form, nodes at the same depth are all labeled by the same variable, so the depth of the OBDD is at most the number of variables; in ABDD form, nodes at the same depth are labeled by the same variable, and the depth of the ABDD depends only on when our algorithm drives the entropy to 0.
**Definition 4**.: _(Weak Hypotheses Assumption (WHA)) A hypothesis set \(\mathcal{H}\) satisfies \(\gamma\)-Weak Hypothesis Assumption (WHA) for the target function \(f:\{-1,1\}^{n}\to\{-1,1\}\) if for any distribution \(d\) over \(\{-1,1\}^{n}\), there exists \(h\in\mathcal{H}\) such that \(edge_{d,f}(h)\triangleq\sum_{x\in\{-1,1\}^{n}}d_{x}f(x)h(x)\geq\gamma\)._
Intuitively, WHA ensures that the set \(\mathcal{H}\) of hypotheses and \(f\) are "weakly" related to each other. The edge function \(edge_{d,f}(h)\) takes values in \([-1,1]\) and equals \(1\) if \(f=h\). Under WHA, we combine hypotheses of \(\mathcal{H}\) into a final hypothesis \(h_{T}\) represented by an ABDD \(T\).
Our analysis is based on a generalized version of entropy [Kearns and Mansour(1999)]. A pseudo-entropy \(G:[0,1]\to[0,1]\) is defined as \(G(q)\triangleq 2\sqrt{q(1-q)}\). Like the Shannon entropy, \(G\) is concave, \(G(1/2)=1\), and \(G(0)=G(1)=0\). In particular, \(\min(q,1-q)\leq G(q)\). We now introduce the conditional entropy of \(f\) given a sample \(S\) and an ABDD \(T\). For each node \(u\) in \(T\), let \(p_{u}\triangleq\Pr_{U}\{x\text{ reaches }u\}\) and \(q_{u}\triangleq\Pr_{U}\{f(x)=1\mid x\text{ reaches }u\}\), respectively. Let \(N(T)\) be the set of nodes in \(V\setminus L\) whose depth is the maximum. We further assume that, for each node \(u\) in \(N(T)\), the instances \(x\) reaching \(u\) are assigned to the 1-leaf if \(q_{u}\geq 1/2\) and to the 0-leaf otherwise. Then, observe that \(\Pr_{U}\{f(x)\neq h_{T}(x)\}=\sum_{u\in N(T)}p_{u}\min(q_{u},1-q_{u})\). The conditional entropy of \(f\) given an ABDD \(T\) with respect to the distribution \(U\) is defined as
\[H_{U}(f|T)=\sum_{u\in N(T)}p_{u}G(q_{u}). \tag{6}\]
Then the conditional entropy gives an upper bound of the training error as follows.
**Proposition 1**.: \(\Pr_{U}\{h_{T}(x)\neq f(x)\}\leqslant H_{U}(f|T)\)_._
Therefore, it is sufficient to find an ABDD \(T\) whose conditional entropy \(H_{U}(f|T)\) is less than \(\varepsilon\).
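The quantities above are easy to compute once the frontier statistics \((p_{u},q_{u})\) are known; a small sketch with our own naming:

```python
import math

def G(q):
    """Pseudo-entropy G(q) = 2*sqrt(q*(1-q)); concave, G(1/2)=1, G(0)=G(1)=0,
    and min(q, 1-q) <= G(q)."""
    return 2 * math.sqrt(q * (1 - q))

def conditional_entropy(frontier):
    """H_U(f|T) of Eq. (6): frontier is a list of (p_u, q_u) pairs for the
    deepest internal nodes N(T)."""
    return sum(p * G(q) for p, q in frontier)

# A node reached by half the sample with q_u = 0.9, another with q_u = 0.5:
print(conditional_entropy([(0.5, 0.9), (0.5, 0.5)]))  # 0.5*0.6 + 0.5*1.0 = 0.8
```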
We will further use the following notations and definitions. Given an ABDD \(T\), \(S\) and \(u\in N(T)\), let \(S_{u}=\{(x,y)\in S\mid x\text{ reaches }u\}\). The entropy \(H_{d}(f)\) of \(f:\{-1,1\}^{n}\to\{-1,1\}\) with respect to a distribution \(d\) over \(\{-1,1\}^{n}\) is defined as \(H_{d}(f)\triangleq G(q)\), where \(q=\Pr_{d}\{f(x)=1\}\). The conditional entropy \(H_{d}(f|h)\) of \(f\) given \(h:\{-1,1\}^{n}\to\{-1,1\}\) with respect to \(d\) is defined as \(H_{d}(f|h)=\Pr_{d}\{h(x)=1\}G(q^{+})+\Pr_{d}\{h(x)=-1\}G(q^{-})\), where \(q^{\pm}=\Pr_{d}\{f(x)=1\mid h(x)=\pm 1\}\), respectively.
### Our Boosting Algorithm
Our algorithm is a modification of the boosting algorithm proposed by Mansour and McAllester [Mansour and McAllester(2002)]. Both algorithms learn Boolean functions in the form of BDDs in a top-down manner. The difference between our algorithm and Mansour and McAllester's algorithm lies in the construction of the final Boolean function, where ours utilizes ABDDs, while Mansour and McAllester's algorithm does not. Although this change may appear subtle, it necessitates a new criterion for selecting hypotheses in \(\mathcal{H}\) and demonstrates improved results in our experiments.
Our boosting algorithm iteratively grows an ABDD by adding a new layer at the bottom. More precisely, at each iteration \(k\), given the current ABDD \(T_{k}\), the algorithm performs the following two consecutive processes (as illustrated in Figure 2).
**Split:**: It chooses a hypothesis \(h_{k}\in\mathcal{H}\) using some criterion and adds two child nodes for each node in \(N(T_{k})\) in the next layer, where each child corresponds to \(\pm 1\) values of \(h_{k}\). Let \(T_{k}^{\prime}\) be the resulting DD.
**Merge:**: It merges nodes in \(N(T_{k}^{\prime})\) according to some rule and let \(T_{k+1}\) be the ABDD after the merge process.
The full description of the algorithm is given in Algorithms 1 and 2, respectively. For the split process, it chooses the hypothesis \(h_{k}\) maximizing the edge \(edge_{\hat{d},f}(h)\) with respect to the distribution \(\hat{d}\) specified in (9). For the merge process, we use the same approach as in the algorithm of Mansour and McAllester [Mansour and McAllester(2002)].
**Definition 5**.: _[_Mansour and McAllester(2002)_]_ _For \(\delta\) and \(\lambda\) (\(0<\delta,\lambda<1\)), a (\(\delta,\lambda\))-net \(\mathcal{I}\) is defined as a set of intervals \([v_{0},v_{1}],[v_{1},v_{2}],\ldots,[v_{w-1},v_{w}]\) such that (i) \(v_{0}=0\), \(v_{w}=1\), (ii) for any \(I_{k}=[v_{k-1},v_{k}]\) and \(q\in I_{k}\), \(\max_{q^{\prime}\in I_{k}}G(q^{\prime})\leq\max\{\delta,(1+\lambda)G(q)\}\)._
Mansour and McAllester showed a simple construction of a \((\delta,\lambda)\)-net of length \(w=O((1/\lambda)\ln(1/\delta))\) [Mansour and McAllester(2002)]; we omit the details. Our algorithm uses particular \((\delta,\lambda)\)-nets for merging nodes.
### Analyses
We first introduce the conditional entropy of \(f\) given an ABDD \(T\) and a hypothesis \(h\) with respect to a distribution \(d\):

\[H_{d}(f|T,h)=\sum_{u\in N(T)}(p_{u^{+}}G(q_{u^{+}})+p_{u^{-}}G(q_{u^{-}})) \tag{7}\]
where \(p_{u^{+}}=\Pr_{d}\{x\text{ reaches }u\mid h(x)=1\}\) and \(q_{u^{+}}=\Pr_{d}\{f(x)=1\mid x\text{ reaches }u,\,h(x)=1\}\). \(p_{u^{-}}\) and \(q_{u^{-}}\) are defined similarly.
The following lemmas show that an effective weak learner under the balanced distribution \(\bar{d}\) also makes progress under the original distribution. For \((x,y)\in S\), the balanced distribution is defined as:
\[\bar{d}_{(x,y)}=\frac{d_{(x,y)}}{2\sum_{(x^{\prime},y)\in S}d_{(x^{\prime},y)}} \tag{8}\]
**Lemma 1**.: _[7] Let \(d\) and \(\bar{d}\) be any distribution over \(\{-1,1\}^{n}\) and its balanced distribution with respect to some \(f:\{-1,1\}^{n}\rightarrow\{-1,1\}\), respectively. Then, for any hypothesis with \(edge_{\bar{d},f}(h)\geq\gamma\), \(H_{d}(f|h)\leqslant(1-\gamma^{2}/2)H_{d}(f)\)._
Based on Lemma 1, we establish the connection between \(\gamma\) and the entropy function, and obtain a \(\gamma^{*}\in(0,1)\) at each depth that reflects the entropy change under our algorithm.
**Lemma 2**.: _Let \(\hat{d}\) be the distribution over \(S\) specified in (9) when \(T_{k}\), \(\mathcal{H}\) and \(S\) is given by Algorithm 2 and let \(h_{k}\) be the output. If \(\mathcal{H}\) satisfies \(\gamma\)-WHA, then the conditional entropy of \(f\) with respect to the distribution \(U\) over \(S\) given \(T_{k}\) and \(h_{k}\) is bounded as \(H_{U}(f|h_{k},T_{k})\leqslant(1-\gamma^{2}/2)H_{U}(f|T_{k})\)._
**Lemma 3**.: _([11]) Assume that, before the merge process in Algorithm 1, \(H_{U}(f|h_{k},T_{k})\leq(1-\lambda)H_{U}(f|T_{k})\) for some \(\lambda\) (\(0<\lambda<1\)). Then, by merging based on the \((\delta,\eta)\)-net with \(\delta=(\lambda/6)H_{U}(f|T_{k})\) and \(\eta=\lambda/3\), the conditional entropy of \(f\) with respect to the distribution \(U\) over \(S\) given \(T_{k+1}\) is bounded as \(H_{U}(f|T_{k+1})\leqslant(1-\lambda/2)H_{U}(f|T_{k})\), where the width of \(T_{k+1}\) is \(O((1/\lambda)(\ln(1/\lambda)+\ln(1/\varepsilon)))\), provided that \(H_{U}(f|T_{k})>\varepsilon\)._
```
0: a sample \(S\in(\{-1,1\}^{n}\times\{-1,1\})^{m}\) of \(m\) instances labeled by \(f\), a set \(\mathcal{H}\) of hypotheses, and a precision parameter \(\varepsilon\) (\(0<\varepsilon<1\));
0: ABDD \(T\);
1: initialization: \(T_{1}\) is the ABDD with a root and \(0,1\)-leaves, \(k=1\).
2:repeat
3: (Split) Let \(h_{k}=Split(T_{k},\mathcal{H},S)\) and add child nodes to each node in \(N(T_{k})\). Let \(T_{k}^{\prime}\) be the resulting ABDD.
4:for\(u\in N(T_{k}^{\prime})\)do
5: merge \(u\) into the \(0\)-leaf (the \(1\)-leaf) if \(q_{u}=0\) (\(q_{u}=1\), resp.).
6:endfor
7: (Merge) Construct a \((\hat{\delta},\hat{\lambda}/3)\)-net \(\mathcal{I}_{k}\) with
8:\(\hat{\lambda}=1-\frac{H_{U}(f|T_{k},h_{k})}{H_{U}(f|T_{k})}\), and \(\hat{\delta}=\frac{\hat{\lambda}H_{U}(f|T_{k})}{6}\).
9:for\(I\in\mathcal{I}_{k}\)do
10: merge all nodes \(u\in N(T_{k}^{\prime})\) such that \(q_{u}\in I\).
11:endfor
12: Let \(T_{k+1}\) be the resulting ABDD and update \(k\gets k+1\).
13:until\(H_{U}(f|T_{k})<\varepsilon\)
14: Output \(T=T_{k}\).
```
**Algorithm 1** ABDD Boosting
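To make the split/merge dynamics tangible, below is a drastically simplified Python sketch of Algorithm 1: it uses uniform instance weights, picks the hypothesis whose split most reduces \(H_{U}(f|T)\) (a proxy for the max-edge criterion under the reweighted distribution \(\hat{d}\)), and merges with a fixed uniform grid instead of the \((\delta,\lambda)\)-net; all names are ours:

```python
import itertools
import math
from collections import defaultdict

def G(q):
    """Pseudo-entropy G(q) = 2*sqrt(q*(1-q))."""
    return 2 * math.sqrt(q * (1 - q))

def entropy(frontier, sample):
    """H_U(f|T): each frontier node is the list of sample indices reaching it."""
    m = len(sample)
    return sum(len(node) / m *
               G(sum(1 for i in node if sample[i][1] == 1) / len(node))
               for node in frontier)

def split(frontier, sample, h):
    """Create the +1/-1 children of every frontier node under hypothesis h."""
    return [child for node in frontier for sign in (1, -1)
            if (child := [i for i in node if h(sample[i][0]) == sign])]

def merge(frontier, sample, bins=10):
    """Merge nodes whose q_u fall in the same interval of a fixed uniform grid
    (a crude stand-in for the (delta, lambda)-net of Definition 5)."""
    merged = defaultdict(list)
    for node in frontier:
        q = sum(1 for i in node if sample[i][1] == 1) / len(node)
        merged[min(int(q * bins), bins - 1)].extend(node)
    return list(merged.values())

def abdd_boost(sample, hypotheses, eps=0.05, max_depth=12):
    """Greedy layer-by-layer growth: pick the hypothesis whose split lowers
    H_U(f|T) the most, then merge the new layer."""
    frontier, layer_hs = [list(range(len(sample)))], []
    while entropy(frontier, sample) >= eps and len(layer_hs) < max_depth:
        h = min(hypotheses,
                key=lambda h: entropy(split(frontier, sample, h), sample))
        frontier = merge(split(frontier, sample, h), sample)
        layer_hs.append(h)
    return layer_hs, frontier

# Majority vote over 3 bits, with projection hypotheses h_i(x) = x[i]:
sample = [(x, 1 if sum(x) > 0 else -1)
          for x in itertools.product([-1, 1], repeat=3)]
layer_hs, _ = abdd_boost(sample, [lambda x, i=i: x[i] for i in range(3)])
print(len(layer_hs))  # 3 layers suffice to drive the entropy to 0
```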
Now we are ready to show our main theorem.
**Theorem 4**.: _Given a sample \(S\) of \(m\) instances labeled by \(f\), and a set \(\mathcal{H}\) of hypotheses satisfying \(\gamma\)-WHA, Algorithm 1 outputs an ABDD \(T\) such that \(\Pr_{U}\{h_{T}(x)\neq f(x)\}\leq\varepsilon\). The size of \(T\) is \(O((\ln(1/\varepsilon)/\gamma^{4})(\ln(1/\varepsilon)+\ln(1/\gamma)))\) and the running time of the algorithm is \(poly(1/\gamma,n)m\)._
## 4 ABDD Construction
We now apply the ABDD Boosting algorithm developed in the previous section to a given linear threshold function \(f\) to obtain an ABDD representation \(T\) of \(f\). In particular, we show that the size of \(T\) is small when \(f\) has a large margin.
To be more specific, assume that we are given a linear threshold function \(f:\{-1,1\}^{n}\rightarrow\{-1,1\}\) of the form
\[f(x)=\sigma(w\cdot x+b)\]
for some weight vector \(w\in\mathbb{R}^{n}\) and bias \(b\in\mathbb{R}\), where \(\sigma\) is the step function, i.e., \(\sigma(z)\) is \(1\) if \(z\geq 0\) and \(-1\) otherwise. Note that there are infinitely many \((w,b)\) inducing the same function \(f\). We define the margin \(\rho\) of \(f\) as the maximum over all \((w,b)\) of \(\min_{x}f(x)(w\cdot x+b)/\|w\|_{1}\). We let our hypothesis set \(\mathcal{H}\) consist of projection functions, namely, \(\mathcal{H}=\{h_{1},h_{2},\ldots,h_{n},h_{n+1},h_{n+2},\ldots,h_{2n}\}\), where \(h_{i}:x\mapsto x_{i}\) if \(i\leq n\) and \(h_{i}:x\mapsto-x_{i-n}\) otherwise, so that we can represent \(f\) as \(f(x)=\sigma(\sum_{i=1}^{2n}w_{i}h_{i}(x)+b)\) for some non-negative \(2n\)-dimensional weight vector \(w\geq 0\) and bias \(b\). Then, we can represent the margin \(\rho\) of \(f\) as the optimal value of the following LP problem:
\[\max_{w,b,\rho}\rho\] (10) s.t. \[f(x)(\sum_{i=1}^{2n}w_{i}h_{i}(x)+b)\geq\rho\text{ for any }x\in\{-1,1 \}^{n},\] \[w\geq 0,\] \[\sum_{i}w_{i}=1.\]
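For small \(n\), the LP (10) is directly solvable with an off-the-shelf solver; below is a sketch using `scipy.optimize.linprog`, with our own encoding of the variables \((w,b,\rho)\):

```python
import itertools
import numpy as np
from scipy.optimize import linprog

def margin(f, n):
    """Optimal value of LP (10): variables (w in R^{2n}, b, rho); maximize rho
    subject to f(x)(sum_i w_i h_i(x) + b) >= rho for all x, w >= 0 and
    sum_i w_i = 1. Enumerates all 2^n instances, so only for small n."""
    A_ub, b_ub = [], []
    for x in itertools.product([-1, 1], repeat=n):
        hx = np.concatenate([x, -np.asarray(x)])   # h_1..h_2n evaluated on x
        fx = f(x)
        # -f(x) * (w . h(x) + b) + rho <= 0
        A_ub.append(np.concatenate([-fx * hx, [-fx, 1.0]]))
        b_ub.append(0.0)
    c = np.zeros(2 * n + 2)
    c[-1] = -1.0                                   # minimize -rho
    A_eq = [np.concatenate([np.ones(2 * n), [0.0, 0.0]])]
    bounds = [(0, None)] * (2 * n) + [(None, None), (None, None)]
    res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=bounds)
    return res.x[-1]

# Majority on 3 bits: weight 1/3 on each positive projection attains margin
# 1/3, and the LP confirms this is optimal.
print(margin(lambda x: 1 if sum(x) > 0 else -1, 3))  # ~0.3333
```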
Now we show that \(\mathcal{H}\) actually satisfies \(\rho\)-WHA for \(f\).
Figure 2: Illustration of our boosting algorithm. The blue dotted part represents the process of merging the temporary nodes created by the split into new nodes, by searching the equivalence space that does not yet appear in the ABDD. Following Algorithm 1, we find some hypothesis \(h\) to construct child nodes in the new layer and then merge nodes afterward.
**Lemma 5**.: _Let \(f\) be a linear threshold function with margin \(\rho\). Then \(\mathcal{H}\) satisfies \(\rho\)-WHA._
By the lemma and Theorem 4, we immediately have the following corollary.
**Corollary 1**.: _Let \(f\) be a linear threshold function with margin \(\rho\). Applying the ABDD Boosting algorithm with the sample \(S=\{(x,f(x))\mid x\in\{-1,1\}^{n}\}\) of all \(2^{n}\) instances, our hypothesis set \(\mathcal{H}\), and the precision parameter \(\varepsilon=1/2^{n}\), we obtain an ABDD \(T\) equivalent to \(f\) of size \(O\left(\frac{n}{\rho^{4}}(n+\ln(1/\rho))\right)\)._
## 5 Circuit and SDD Construction
Since we have a method to convert linear threshold functions into a DD representation, the next step is to connect them according to the structure of the NN to form an equivalent \((\vee,\wedge,\neg)\)-circuit, which is used to verify SR. Subsequently, we can convert the circuit into an SDD, which is used to verify robustness.
The conversion process from a DD to a circuit is performed in a top-down manner. As shown in Figure 3, the region enclosed by the red dashed box represents a conversion unit. This unit converts a node \(x_{1}\) and its two edges in the DD into four gates and a variable in the circuit. As the next node \(x_{2}\) is connected to a different edge of node \(x_{1}\), we generate a new circuit beginning with an 'or'-gate and connect it to the 'and'-gate obtained from the conversion of an edge associated with \(x_{1}\). This process is repeated until all edges of the nodes reach either the 0-leaf or the 1-leaf. The resulting circuit is equivalent to the DD and its corresponding neural unit. To construct the NN's equivalent circuit, we utilize Algorithm 1 to generate DDs for each neuron in the NN. These DDs are then converted into circuit form. We combine these equivalent circuits based on the structure of the NN, specifically establishing a one-to-one correspondence between the inputs and outputs of each neuron in the NN and the inputs and outputs of the circuit. This completes the construction of the NN's equivalent circuit. Such a method is also described in [14, 15].
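A minimal sketch of this node-by-node conversion, with tuple-encoded gates standing in for the circuit (the encoding and names are ours, not the paper's):

```python
class Node:
    """A DD node; leaves carry `value` in {-1, +1}."""
    def __init__(self, var=None, pos=None, neg=None, value=None):
        self.var, self.pos, self.neg, self.value = var, pos, neg, value

def dd_to_circuit(node):
    """One conversion unit per DD node, in the top-down spirit of Figure 3:
    out(v) = (x_v AND out(v+)) OR (NOT x_v AND out(v-)).
    Circuits are tuples over ('var', i) / ('not', .) / ('and', ..) / ('or', ..)."""
    if node.value is not None:
        return ('const', 1 if node.value == 1 else 0)
    xi = ('var', node.var)
    return ('or', ('and', xi, dd_to_circuit(node.pos)),
                  ('and', ('not', xi), dd_to_circuit(node.neg)))

def eval_circuit(c, x):
    """Evaluate a circuit on x in {-1,1}^n (True iff the DD reaches the 1-leaf)."""
    op = c[0]
    if op == 'const':
        return bool(c[1])
    if op == 'var':
        return x[c[1]] == 1
    if op == 'not':
        return not eval_circuit(c[1], x)
    vals = [eval_circuit(s, x) for s in c[1:]]
    return all(vals) if op == 'and' else any(vals)

# x0 AND x1 as a two-node DD, then as a circuit:
leaf0, leaf1 = Node(value=-1), Node(value=1)
n1 = Node(var=1, pos=leaf1, neg=leaf0)
root = Node(var=0, pos=n1, neg=leaf0)
circuit = dd_to_circuit(root)
print(eval_circuit(circuit, [1, 1]), eval_circuit(circuit, [1, -1]))  # True False
```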
Once we have the equivalent circuit of a neural network (NN), the subsequent step is to convert it into an SDD. SDDs form a subclass of deterministic Decomposable Negation Normal Form (d-DNNF) circuits that assert stronger decomposability and a more robust form of determinism [1]. The class of SDDs generalizes that of OBDDs in that every OBDD can be turned into an SDD in linear time. In contrast, some Boolean functions have polynomial-size SDD representations but only exponential-size OBDD representations [1]. In an SDD, Boolean functions are represented through the introduction of "decision (\(\vee\)) nodes" and "conjunction (\(\wedge\)) nodes."
Indeed, the size complexity of each \(apply\) operation between two SDD nodes is proportional to the product of the numbers of their internal nodes. Consequently, as the complexity of the circuit increases, the construction of the corresponding SDD requires more space. Considering the conversion process via ABDD, a smaller resulting circuit yields a correspondingly smaller SDD: the size reduction in the circuit conversion directly influences the size of the resulting SDD. Therefore, by optimizing the BDD/circuit representation, we can achieve a more compact SDD.
Figure 3: Example of converting BDD/OBDD/ABDD (Left) to the circuit (Right).
## 6 Experiments
### Experimental Setup
We use the USPS dataset of hand-written digits, consisting of \(16\times 16\) binary pixel images, and use the data with labels 0 and 1 for verification of SR. The NN design we use is similar to [21]: two convolutional layers and a fully-connected layer. In training, we use two real-valued-weight convolutional layers (kernel size 3, stride 2; kernel size 2, stride 2) with a sigmoid function and a real-valued-weight fully-connected layer. In testing, the sigmoid function is replaced with the step activation function mentioned before. The experiments are conducted on an Intel(R) Xeon(R) Gold CPU at 2.60GHz. The batch size is 32 and the learning rate is 0.01, decaying to 10% of its value at the halfway and three-quarter points of the learning epochs. The Stochastic Gradient Descent (SGD) optimizer with a momentum of 0.9 and a weight decay of 0.0001 is used.
### Sample-based Robustness (SR) validation
The SR of a standard NN whose output is real-valued can be computed simply by querying the pixels that affect the recognition the most and flipping them until the recognition result changes. Nevertheless, with such standard methods, for a BNN with input size \(h\times w\), the time cost to compute robustness \(k\) is \(O((hw)^{k})\), which is impractical for BNNs.
Since BDD, OBDD, and ABDD are easily represented as circuits, as shown in Figure 3, the circuits of the individual neural units are linked into a circuit \(f\) that represents the entire NN according to its network structure. Note that we separately verify the SR of the OBDDs generated by the methods based on Theorem 1 and Theorem 2 in [Shih et al.(2019)Shih, Darwiche, and Choi] using a circuit representation. The integration of weights and bias in Theorem 2 is performed as follows.
Given a weight vector \(w\) and bias \(b\), we set \(\alpha=\max\{|w_{1}|,\ldots,|w_{n}|,|b|\}\), and then convert them to integer weights \(\hat{w}_{i}=\lfloor\frac{10^{p}}{\alpha}w_{i}\rfloor\) and bias \(\hat{b}=\lfloor\frac{10^{p}}{\alpha}b\rfloor\), where \(p\) is the number of digits of precision.
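This integration step is a one-liner; a small sketch with our own example values:

```python
import math

def quantize(w, b, p):
    """Integer weights per the scheme above: scale by 10^p / alpha and floor,
    where alpha is the largest magnitude among the weights and the bias."""
    alpha = max(max(abs(wi) for wi in w), abs(b))
    scale = 10 ** p / alpha
    return [math.floor(scale * wi) for wi in w], math.floor(scale * b)

print(quantize([0.31, -0.7], 0.12, p=2))  # ([44, -100], 17)
```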
As for \(dis(\cdot)\) in Definition 1, it is easy to express the condition \(dis(x,x^{\prime})\leq k\) in circuit form; we denote the resulting circuit by \(g_{k,x}\):
\[g_{k,x}(x^{\prime})=\begin{cases}1,&\text{if }|x\oplus x^{\prime}|\leq k\\ -1,&\text{otherwise}\end{cases} \tag{11}\]
We calculate the SR of \(f\) on positive and negative instances \(x\) (i.e., with \(f(x)=1\) and \(f(x)=-1\), respectively) by running Algorithm 3 on the BDD, OBDD and ABDD circuits of 10 CNNs; the results are shown in Table 2. Note that the SR of negative instances can be computed by invoking Algorithm 3 on the function \(\neg f\).
```
0: circuit \(f\), (positive) instance \(x\in\{-1,1\}^{n}\);
0:\(r_{f}(x)\);
1: initialization: \(r_{f}=0\);
2:for\(k=1\) to \(n\)do
3:if\(g_{k,x}\wedge\overline{f}\) is satisfiable then
4: break
5:endif
6:endfor
7:return\(k\)
```
**Algorithm 3**\(SR\)
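For small inputs, the satisfiability check in Algorithm 3 can be replaced by exhaustive search; the following sketch (ours; a real implementation would call a SAT solver on the circuit \(g_{k,x}\wedge\overline{f}\)) computes the same quantity:

```python
import itertools

def g(k, x, xp):
    """The predicate of Eq. (11): true iff Hamming distance(x, x') <= k."""
    return sum(a != b for a, b in zip(x, xp)) <= k

def algorithm3(f, x):
    """Algorithm 3 with the SAT call replaced by exhaustive search: the
    smallest k for which some x' within distance k of the positive
    instance x makes f output -1."""
    n = len(x)
    for k in range(1, n + 1):
        if any(g(k, x, xp) and f(list(xp)) == -1
               for xp in itertools.product([-1, 1], repeat=n)):
            break
    return k

maj = lambda x: 1 if sum(x) > 0 else -1
print(algorithm3(maj, [1, 1, 1]))  # 2: flipping one pixel is not enough
```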
In Table 2, (Shi.1) circuits are generated using Theorem 1 proposed in [21, 22], (MM.) circuits are generated using the method described in [21], and (Shi.2 \(p\)) circuits are generated using Theorem 2 proposed in [Shih et al.(2019)Shih, Darwiche, and Choi] with number of digits of precision \(p=2,3,4\). As observed, the numbers of gates of (Shi.2 \(2,3,4\)) are significantly larger than the others, and they also consume more time during SR validation (over 10 hours).
### Analysis
We train a standard CNN and obtain 99.73% accuracy on the test set. After replacing the sigmoid function with the step activation function, the accuracy of the CNN drops to 99.22%. To represent the BCNN, compared to BDD and OBDD, our algorithm generates a smaller ABDD while keeping the same recognition accuracy, which provides certain advantages for the subsequent tasks. Note that, in contrast to the OBDDs of [Shi et al.(2020)Shi, Shih, Darwiche, and Choi], in an ABDD the same variable labels the nodes at the same depth while the depth is not limited by the dimension of the variables. Experimental results show that ABDDs can be connected in various forms to express an NN and can be used to implement various verification tasks efficiently. Since our method can be used to generate a smaller equivalent circuit of the NN, it can be applied in tasks such as hardware-based transformations of NNs. In the future, we aim to extend our method to more complex NNs.
|
2306.11264 | GraphGLOW: Universal and Generalizable Structure Learning for Graph
Neural Networks | Graph structure learning is a well-established problem that aims at
optimizing graph structures adaptive to specific graph datasets to help message
passing neural networks (i.e., GNNs) to yield effective and robust node
embeddings. However, the common limitation of existing models lies in the
underlying \textit{closed-world assumption}: the testing graph is the same as
the training graph. This premise requires independently training the structure
learning model from scratch for each graph dataset, which leads to prohibitive
computation costs and potential risks for serious over-fitting. To mitigate
these issues, this paper explores a new direction that moves forward to learn a
universal structure learning model that can generalize across graph datasets in
an open world. We first introduce the mathematical definition of this novel
problem setting, and describe the model formulation from a probabilistic
data-generative aspect. Then we devise a general framework that coordinates a
single graph-shared structure learner and multiple graph-specific GNNs to
capture the generalizable patterns of optimal message-passing topology across
datasets. The well-trained structure learner can directly produce adaptive
structures for unseen target graphs without any fine-tuning. Across diverse
datasets and various challenging cross-graph generalization protocols, our
experiments show that even without training on target graphs, the proposed
model i) significantly outperforms expressive GNNs trained on input
(non-optimized) topology, and ii) surprisingly performs on par with
state-of-the-art models that independently optimize adaptive structures for
specific target graphs, with notably orders-of-magnitude acceleration for
training on the target graph. | Wentao Zhao, Qitian Wu, Chenxiao Yang, Junchi Yan | 2023-06-20T03:33:22Z | http://arxiv.org/abs/2306.11264v1 | # GraphGLOW: Universal and Generalizable Structure Learning for Graph Neural Networks
###### Abstract.
Graph structure learning is a well-established problem that aims at optimizing graph structures adaptive to specific graph datasets to help message passing neural networks (i.e., GNNs) to yield effective and robust node embeddings. However, the common limitation of existing models lies in the underlying _closed-world assumption_: the testing graph is the same as the training graph. This premise requires independently training the structure learning model from scratch for each graph dataset, which leads to prohibitive computation costs and potential risks for serious over-fitting. To mitigate these issues, this paper explores a new direction that moves forward to learn a universal structure learning model that can generalize across graph datasets in an open world. We first introduce the mathematical definition of this novel problem setting, and describe the model formulation from a probabilistic data-generative aspect. Then we devise a general framework that coordinates a single graph-shared structure learner and multiple graph-specific GNNs to capture the generalizable patterns of optimal message-passing topology across datasets. The well-trained structure learner can directly produce adaptive structures for unseen target graphs without any fine-tuning. Across diverse datasets and various challenging cross-graph generalization protocols, our experiments show that even without training on target graphs, the proposed model i) significantly outperforms expressive GNNs trained on input (non-optimized) topology, and ii) surprisingly performs on par with state-of-the-art models that independently optimize adaptive structures for specific target graphs, with notably orders-of-magnitude acceleration for training on the target graph.
as a bi-level optimization target that jointly learns a single dataset-shared structure learner and multiple dataset-specific GNNs tailored for particular graph datasets, as shown in Fig. 1. Under such a framework, the well-trained structure learner can leverage the common transferable knowledge across datasets for enhancing generalization and, more critically, be readily utilized to yield adaptive message-passing topology for arbitrarily given target graphs.
With the guidance of the aforementioned general goal, we propose GraphGLOW (short for A Graph Structure Learning Model for Open-World Generalization) that aims at learning the generalizable patterns of optimal message-passing topology across source graphs. Specifically, we first take a bottom-up perspective and formulate the generative process for observed data in a probabilistic manner. On top of this, we derive a tractable and feasible learning objective through the lens of variational inference. The structure learner is specified as a multi-head weighted similarity function so as to guarantee enough expressivity for accommodating diverse structural information, and we further harness an approximation scheme to reduce the quadratic complexity overhead of learning potential edges from arbitrary node pairs.
To reasonably and comprehensively evaluate the model, we devise experiments with a diverse set of protocols that can measure the generalization ability under different difficulty levels (according to the intensity of distribution shifts between source graphs and target graphs). Concretely, we consider: 1) in-domain generalization, in which we generalize from some citation (social) networks to other citation (social) networks; 2) cross-domain generalization between citation and social networks. The results, which are consistent across various combinations of source and target graph datasets, demonstrate that when evaluated on the target graphs, our approach i) consistently outperforms directly training the GNN counterpart on the original non-optimized graph structures of the target datasets and ii) performs on par with state-of-the-art structure learning methods trained on target graphs from scratch, with up to 25\(\times\) less training time consumed. Our code is available at [https://github.com/WtaoZhao/GraphGLOW](https://github.com/WtaoZhao/GraphGLOW).
## 2. Preliminary and Problem Definition
**Node-Level Predictive Tasks.** Denote a graph with \(N\) nodes as \(\mathcal{G}=(\mathbf{A},\mathbf{X},\mathbf{Y})\) where \(\mathbf{A}=\{a_{uv}\}_{N\times N}\) is an adjacency matrix (\(a_{uv}=1\) means the edge between node \(u\) and \(v\) exists and \(0\) otherwise), \(\mathbf{X}=\{\mathbf{x}_{u}\}_{N\times D}\) is a feature matrix with \(\mathbf{x}_{u}\) a \(D\)-dimensional node feature vector of node \(u\), and \(\mathbf{Y}=\{y_{u}\}_{N\times C}\) with \(y_{u}\) the label vector of node \(u\) and \(C\) the number of classes. The node labels are partially observed as training data, based on which the node-level prediction aims to predict the unobserved labels for testing nodes in the graph using node features and graph structures. The latter is often achieved via a GNN model, denoted as \(h_{w}\), that yields predicted node labels \(\hat{\mathbf{Y}}=h_{w}(\mathbf{A},\mathbf{X})\) and is optimized with the classification loss \(w^{*}=\arg\min_{w}\mathcal{L}(\hat{\mathbf{Y}},\mathbf{Y})\) using observed labels from training nodes.
**Closed-World Graph Structure Learning (GLCW).** The standard graph structure learning for node-level predictive tasks trains a graph structure learner \(g_{\theta}\) to refine the given structure, i.e., \(\hat{\mathbf{A}}=g_{\theta}(\mathbf{A},\mathbf{X})\), over which the GNN classifier \(h_{w}\) conducts message passing for producing node representations and predictions. The \(g_{\theta}\) is expected to produce optimal graph structures that can give rise to satisfactory downstream classification performance of the GNN classifier. Formally speaking, the goal for training \(g_{\theta}\) along with \(h_{w}\) can be expressed as a nested optimization problem:
\[\theta^{*}=\arg\min_{\theta}\min_{w}\mathcal{L}\left(h_{w}(g_{\theta}(\mathbf{A},\mathbf{X}),\mathbf{X}),\mathbf{Y}\right). \tag{1}\]
The above formulation of graph structure learning under closed-world assumptions constrains the training and testing nodes to be in the same graph, which requires \(g_{\theta}\) to be trained from scratch on each graph dataset. Since \(g_{\theta}\) is often much more complicated (e.g., with orders-of-magnitude more trainable parameters) and more difficult to optimize (due to the bi-level optimization (1)) than the GNN \(h_{w}\), GLCW leads to undesired inefficiency and vulnerability to serious over-fitting (due to limited labeled information).
**Open-World Graph Structure Learning (GLOW).** In this work, we turn to a new learning paradigm that generalizes graph structure learning to open-world assumptions, borrowing, more broadly, the concepts of domain generalization (Sutskever et al., 2017) and out-of-distribution generalization (Sutskever et al., 2017). Specifically, assume that we are given multiple source graphs, denoted as \(\{\mathcal{G}_{m}^{s}\}_{m=1}^{M}=\{(\mathbf{A}_{m}^{s},\mathbf{X}_{m}^{s},\mathbf{Y}_{m}^{s})\}_{m=1}^{M}\), and a target graph \(\mathcal{G}^{t}=(\mathbf{A}^{t},\mathbf{X}^{t},\mathbf{Y}^{t})\), whose distribution is often different from that of any source graph. The goal is to train a universal structure learner \(g_{\theta}\) on source graphs which can be directly used for inference on the target graph without any re-training or fine-tuning. The trained structure learner is expected to produce desired graph structures that can bring up better downstream classification for a GNN classifier optimized on the target graph.
More specifically, we consider a one-to-many framework that coordinates a shared graph structure learner \(g_{\theta}\) and multiple dataset-specific GNNs \(\{h_{w_{m}}\}_{m=1}^{M}\), where \(h_{w_{m}}\) with independent parameterization \(w_{m}\) is optimized for a given source graph \(\mathcal{G}_{m}^{s}\). With the aim of learning a universal \(g_{\theta}\) that can generalize to new unseen target graphs, our training goal can be formulated as the following bi-level optimization problem:
\[\theta^{*}=\arg\min_{\theta}\min_{w_{1},\cdots,w_{M}}\sum_{m=1}^{M}\mathcal{L} \left(h_{w_{m}}(g_{\theta}(\mathbf{A}_{m}^{s},\mathbf{X}_{m}^{s}),\mathbf{X}_{ m}^{s}),\mathbf{Y}_{m}^{s}\right), \tag{2}\]
where the inner optimization is a multi-task learning objective. Generally, (2) aims at finding an optimal \(g_{\theta}\) that can jointly minimize the classification loss induced by \(M\) GNN models, each trained for a particular source graph. After training, we can directly adapt \(g_{\theta^{*}}\) to the target graph for testing purposes, and only need to train a GNN \(h_{w}\) on the target graph:

\[w^{*}=\operatorname*{arg\,min}_{w}\mathcal{L}\left(h_{w}(g_{\theta^{*}}(\mathbf{A}^{t},\mathbf{X}^{t}),\mathbf{X}^{t}),\mathbf{Y}^{t}\right). \tag{3}\]

Figure 1. Illustration of Open-World Graph Structure Learning. In a diverse set of source graphs, we train multiple dataset-specific GNNs and a shared structure learner. On the target graph, we directly utilize the learned structure learner and only need to train a new GNN.
## 3. Proposed Model
To handle the above problem, we present an end-to-end learning framework GraphGLOW that guides the central graph structure learner to learn adaptive message-passing structures exploited by multiple GNNs. The overview of GraphGLOW is shown in Fig. 2.
The fundamental challenge of GLOW lies in how to model and capture the generalizable patterns among adaptive structures of different graphs. To this end, we first take a data-generative perspective that treats the inputs and intermediate results as random variables and investigate their dependency, based on which we present the high-level model formulation in a probabilistic form (Sec. 3.1). Then we proceed to instantiate the model components (Sec. 3.2). Finally, we discuss differentiable training approaches for optimization (Sec. 3.3).
### Model Formulation
To commence, we characterize the data generation process by a latent variable model, based on which we derive the formulation of our method. We treat the latent graph \(\hat{\mathbf{A}}\) (given by \(g_{\theta}\)) as a latent variable whose prior distribution is given by \(p(\hat{\mathbf{A}}|\mathbf{A},\mathbf{X})\). The prior distribution reflects one's presumption about the latent structures before the observed labels arrive. Then, the prediction is given by a predictive distribution \(p(\mathbf{Y}|\hat{\mathbf{A}},\mathbf{X})\). The learning objective aims at maximizing the log-likelihood of observed labels, which can be written as: \(\log p(\mathbf{Y}|\mathbf{A},\mathbf{X})=\log\int_{\hat{\mathbf{A}}}p(\mathbf{Y}|\mathbf{A},\mathbf{X},\hat{\mathbf{A}})p(\hat{\mathbf{A}}|\mathbf{A},\mathbf{X})d\hat{\mathbf{A}}\). To estimate latent graphs that could enhance message passing for downstream tasks, one plausible way is to sample from the posterior, i.e., \(p(\hat{\mathbf{A}}|\mathbf{Y},\mathbf{A},\mathbf{X})\), conditioned on the labels from downstream tasks. Using the Bayes' rule, we have
\[p(\hat{\mathbf{A}}|\mathbf{Y},\mathbf{A},\mathbf{X})=\frac{p(\mathbf{Y}|\mathbf{A},\mathbf{X},\hat{\mathbf{A}})p(\hat{\mathbf{A}}|\mathbf{A},\mathbf{X})}{\int_{\hat{\mathbf{A}}}p(\mathbf{Y}|\mathbf{A},\mathbf{X},\hat{\mathbf{A}})p(\hat{\mathbf{A}}|\mathbf{A},\mathbf{X})d\hat{\mathbf{A}}}. \tag{4}\]
However, the integration over \(\hat{\mathbf{A}}\) in the denominator is intractable for computation due to the exponentially large space of \(\hat{\mathbf{A}}\).
To circumvent the difficulty, we can introduce a variational distribution \(q(\hat{\mathbf{A}}|\mathbf{A},\mathbf{X})\) over \(\hat{\mathbf{A}}\) as an approximation to \(p(\hat{\mathbf{A}}|\mathbf{Y},\mathbf{A},\mathbf{X})\). We can sample latent graphs from \(q(\hat{\mathbf{A}}|\mathbf{A},\mathbf{X})\), i.e., instantiate it as the structure learner \(g_{\theta}\), and once \(q(\hat{\mathbf{A}}|\mathbf{A},\mathbf{X})=p(\hat{\mathbf{A}}|\mathbf{Y}, \mathbf{A},\mathbf{X})\), we could have samples from the posterior that ideally generates the optimal graph structures for downstream prediction. By this principle, we can start with minimizing the Kullback-Leibler divergence between \(q\) and \(p\) and derive the learning objective as follows:
\[\begin{split}&\mathcal{D}_{KL}(q(\hat{\mathbf{A}}|\mathbf{A},\mathbf{X})\|p(\hat{\mathbf{A}}|\mathbf{Y},\mathbf{A},\mathbf{X}))\\ &=-\underbrace{\mathbb{E}_{\hat{\mathbf{A}}\sim q(\hat{\mathbf{A}}|\mathbf{A},\mathbf{X})}\left[\log\frac{p(\mathbf{Y}|\mathbf{A},\mathbf{X},\hat{\mathbf{A}})p(\hat{\mathbf{A}}|\mathbf{A},\mathbf{X})}{q(\hat{\mathbf{A}}|\mathbf{A},\mathbf{X})}\right]}_{\text{Evidence Lower Bound}}+\log p(\mathbf{Y}|\mathbf{A},\mathbf{X}). \end{split} \tag{5}\]
Figure 2. Illustration of the proposed framework GraphGLOW targeting open-world graph structure learning. The middle part of the figure presents the training process for the structure learner together with multiple dataset-specific GNNs on source graphs. In (a)-(e) we illustrate the details of the graph structure learner, the backbone GNN, the iterative training process, the training procedure and the transferring procedure. When training is finished, the structure learner is fixed and we only need to train a dataset-specific GNN on the new target graph with latent structures inferred by the well-trained structure learner.

Based on this equation, we further have the following inequality, which bridges the Evidence Lower Bound (ELBO) and the observed data log-likelihood:
\[\log p(\mathrm{Y}|\mathbf{A},\mathbf{X})\geq\mathbb{E}_{\hat{\mathbf{A}}\sim q( \hat{\mathbf{A}}|\mathbf{A},\mathbf{X})}\left[\log\frac{p(\mathrm{Y}|\mathbf{A},\mathbf{X},\hat{\mathbf{A}})p(\hat{\mathbf{A}}|\mathbf{A},\mathbf{X})}{q( \hat{\mathbf{A}}|\mathbf{A},\mathbf{X})}\right]. \tag{6}\]
The equality holds if and only if \(\mathcal{D}_{KL}(q(\hat{\mathbf{A}}|\mathbf{A},\mathbf{X})\|p(\hat{\mathbf{A}} |\mathrm{Y},\mathbf{A},\mathbf{X}))=0\). The above fact suggests that we can optimize the ELBO as a surrogate for \(\log p(\mathrm{Y}|\mathbf{A},\mathbf{X})\) which involves the intractable integration. More importantly, when the ELBO is optimized w.r.t. \(q\) distribution, the variational bound is lifted to the original log-likelihood and one has \(q(\hat{\mathbf{A}}|\mathbf{A},\mathbf{X})=p(\hat{\mathbf{A}}|\mathrm{Y}, \mathbf{A},\mathbf{X})\), i.e., the variational distribution equals to the true posterior, which is what we expect.
Pushing further and incorporating source graphs \(\mathcal{G}_{m}\) (we omit the superscript for simplicity), we arrive at the following objective:
\[\begin{split}&\mathbb{E}_{\mathcal{G}_{m}\sim p(\mathcal{G})}\left[\mathbb{E}_{\hat{\mathbf{A}}\sim q_{\theta}(\hat{\mathbf{A}}|\mathbf{A}=\mathbf{A}_{m},\mathbf{X}=\mathbf{X}_{m})}\left[\log p_{w_{m}}(\mathbf{Y}|\mathbf{A}=\mathbf{A}_{m},\mathbf{X}=\mathbf{X}_{m},\hat{\mathbf{A}})\right.\right.\\ &\left.\left.+\log p_{0}(\hat{\mathbf{A}}|\mathbf{A}=\mathbf{A}_{m},\mathbf{X}=\mathbf{X}_{m})-\log q_{\theta}(\hat{\mathbf{A}}|\mathbf{A}=\mathbf{A}_{m},\mathbf{X}=\mathbf{X}_{m})\right]\right]. \tag{7}\]
Here we instantiate \(q(\hat{\mathbf{A}}|\mathbf{A},\mathbf{X})\) as the shared structure learner \(g_{\theta}\), \(p(\hat{\mathbf{A}}|\mathbf{A},\mathbf{X})\) as a (shared) non-parametric prior distribution \(p_{0}\) for latent structures, and \(p(\mathbf{Y}|\mathbf{A},\mathbf{X},\hat{\mathbf{A}})\) as the dataset-specific GNN model \(h_{w_{m}}\), to suit the framework to our formulated problem in Section 2. The formulation of (7) shares the spirit of Bayesian meta learning (Hendle, 2017). We can treat the GNN training as a dataset-specific learning task and the latent graph as a certain 'learning algorithm' or 'hyper-parameter', so (7) essentially aims at learning a structure learner that can yield a desirable 'learning algorithm' for each specific learning task on graphs. Furthermore, the three terms in (7) have distinct effects: i) the predictive term \(\log p_{w_{m}}\) acts as a supervised classification loss; ii) the prior term \(\log p_{0}\) serves as regularization on the generated structures; iii) the third term, which is essentially the entropy of \(q_{\theta}\), penalizes high confidence on certain structures.
To sum up, we can optimize (7) with joint learning of the structure learner \(g_{\theta}\) and GNN models \(\{h_{\mathbf{w}_{m}}\}_{m=1}^{M}\) on source graphs \(\{\mathcal{G}_{m}\}_{m=1}^{M}\) for training the structure learner. After that, we can generalize the well-trained \(g_{\theta^{*}}\) to estimate latent graph structures for a new target graph \(\mathcal{G}^{t}=(\mathbf{A}^{t},\mathbf{X}^{t})\) and only need to train the GNN model \(h_{\mathbf{w}}\) w.r.t. the predictive objective with fixed \(\theta^{*}\):
\[\mathbb{E}_{\hat{\mathbf{A}}\sim q_{\theta^{*}}(\hat{\mathbf{A}}|\mathbf{A}=\mathbf{A}^{t},\mathbf{X}=\mathbf{X}^{t})}\left[\log p_{w}(\mathbf{Y}|\mathbf{A}=\mathbf{A}^{t},\mathbf{X}=\mathbf{X}^{t},\hat{\mathbf{A}})\right]. \tag{8}\]
We next discuss how to specify \(g_{\theta}\), \(h_{\mathbf{w}_{m}}\) and \(p_{0}\) with special focus on their expressiveness and efficiency in Section 3.2. Later, we present the details for loss computation and model training based on the formulation stated above in Section 3.3.
### Model Instantiations
#### 3.2.1. Instantiation for \(q_{\theta}(\hat{\mathbf{A}}|\mathbf{A},\mathbf{X})\)
The variational distribution aims at learning the conditional distribution that generates suitable latent structures for message passing based on input observations. A natural means is to model each edge of the latent graph as a Bernoulli random variable, so that the distribution \(q\) is a product of \(N\times N\) independent Bernoulli random variables (Brandt, 2017; Goyal et al., 2017).
The graph structure learner \(g_{\theta}\) can be used for predicting the Bernoulli parameter matrix. To accommodate the information from node features and graph structure, we use the node representation, denoted as \(\mathbf{z}_{u}\in\mathbb{R}^{d}\), where \(d\) is the embedding dimension, to compute the edge probability \(a_{uv}\) for edge \((u,v)\) as
\[a_{uv}=\delta\left(\frac{1}{H}\sum_{h=1}^{H}s(\mathbf{w}_{h}^{1}\odot\mathbf{z}_{u},\mathbf{w}_{h}^{2}\odot\mathbf{z}_{v})\right), \tag{9}\]
where \(s(\cdot,\cdot)\) is a similarity function for two vectors, \(\odot\) denotes the Hadamard product, \(\delta\) is a function that converts the input into values within \([0,1]\), and \(\mathbf{w}_{h}^{1},\mathbf{w}_{h}^{2}\in\mathbb{R}^{d}\) are two weight vectors of the \(h\)-th head. Common choices for \(s(\cdot,\cdot)\) include the simple dot-product and cosine distance.
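A minimal PyTorch sketch of such a multi-head scorer, using cosine similarity for \(s(\cdot,\cdot)\) and a sigmoid for \(\delta\) (both admissible choices above; hyper-parameters and names are ours):

```python
import torch

class StructureLearner(torch.nn.Module):
    """Sketch of the multi-head scorer of Eq. (9): cosine similarity between
    per-head reweighted embeddings, averaged over heads, squashed by sigmoid."""
    def __init__(self, d, num_heads=4):
        super().__init__()
        self.w1 = torch.nn.Parameter(torch.ones(num_heads, d))
        self.w2 = torch.nn.Parameter(torch.ones(num_heads, d))

    def forward(self, Z, P):
        zu = Z.unsqueeze(0) * self.w1.unsqueeze(1)      # (H, N, d)
        zp = P.unsqueeze(0) * self.w2.unsqueeze(1)      # (H, P, d)
        zu = zu / (zu.norm(dim=-1, keepdim=True) + 1e-8)
        zp = zp / (zp.norm(dim=-1, keepdim=True) + 1e-8)
        sim = zu @ zp.transpose(1, 2)                   # (H, N, P) cosine scores
        return torch.sigmoid(sim.mean(dim=0))           # (N, P) probabilities

Z = torch.randn(100, 16)                 # node embeddings
pivots = Z[torch.randperm(100)[:20]]     # 20 sampled pivot nodes
alpha = StructureLearner(16)(Z, pivots)  # node-pivot Bernoulli parameters
```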
where \(\mathbf{W}^{(l)}\in\mathbb{R}^{d\times d}\) is a weight matrix, \(\sigma\) is non-linear activation, and \(\mathbf{D}\) denotes a diagonal degree matrix from input graph \(\mathbf{A}\) and \(\mathbf{Z}^{(l)}=\{\mathbf{z}_{u}^{(l)}\}_{N\times d}\) is a stack of node representations at the \(l\)-th layer.
With the estimated latent graph \(\hat{\mathbf{A}}=\hat{\mathbf{B}}_{1}\hat{\mathbf{B}}_{2}\), we perform message passing \(\mathrm{MP}_{2}(\cdot)\) in a two-step fashion to update node representations:
\[\text{i) node-to-pivot passing:}\;\mathbf{C}^{(l+\frac{1}{2})}=\mathrm{RowNorm}(\Gamma^{\top})\mathbf{Z}^{(l)}, \tag{11}\] \[\text{ii) pivot-to-node passing:}\;\mathbf{C}^{(l+1)}=\mathrm{RowNorm}(\Gamma)\mathbf{C}^{(l+\frac{1}{2})}, \tag{12}\]
where \(\mathbf{C}^{(l+\frac{1}{2})}\) is an intermediate node representation and \(\Gamma=\{\alpha_{up}\}_{N\times P}\) is the node-pivot similarity matrix calculated by (9). Such a two-step procedure can be efficiently conducted within \(O(NP)\) time and space complexity.
While feature propagation on the estimated latent structure could presumably yield better node representations, the original input graph structure also contains useful information, such as effective inductive bias (Bang et al., 2017). Therefore, we integrate two message-passing functions to compute the layer-wise update of node representations:
\[\mathbf{Z}^{(l+1)}=\sigma\left(\lambda\mathrm{MP}_{1}(\mathbf{Z}^{(l)}, \mathbf{A})\mathbf{W}^{(l)}+(1-\lambda)\mathrm{MP}_{2}(\mathbf{Z}^{(l)},\hat{ \mathbf{A}})\mathbf{W}^{(l)}\right), \tag{13}\]
where \(\lambda\) is a trade-off hyper-parameter that controls the weight placed on the input structure. Such a design also improves training stability by reducing the impact of large variations in the latent structures during training.
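A PyTorch sketch of the two-step pivot passing of (11)-(12) and the combined update of (13); here `A_norm` is assumed to be a pre-normalized input adjacency, and all names are ours:

```python
import torch

def row_norm(M, eps=1e-8):
    """Row-stochastic normalization, standing in for RowNorm()."""
    return M / (M.sum(dim=1, keepdim=True) + eps)

def pivot_message_passing(Z, Gamma):
    """Two-step passing of (11)-(12): pivots aggregate from all nodes, then
    nodes aggregate back from pivots; O(N*P) instead of O(N^2)."""
    C_half = row_norm(Gamma.t()) @ Z       # (P, d): node-to-pivot
    return row_norm(Gamma) @ C_half        # (N, d): pivot-to-node

def mixed_layer(Z, A_norm, Gamma, W, lam=0.5):
    """The combined update of (13), mixing propagation over the input graph
    (A_norm) and over the latent pivot graph."""
    return torch.relu(lam * A_norm @ Z @ W
                      + (1 - lam) * pivot_message_passing(Z, Gamma) @ W)

N, P, d = 100, 20, 16
Z = torch.randn(N, d)                      # current node representations
Gamma = torch.rand(N, P)                   # node-pivot similarities from (9)
out = mixed_layer(Z, torch.eye(N), Gamma, torch.randn(d, d))
```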
With \(L\) GNN layers, one can obtain the prediction \(\hat{\mathbf{Y}}\) by setting \(\hat{\mathbf{Y}}=\mathbf{Z}^{(L)}\) and \(\mathbf{W}^{(L-1)}\in\mathbb{R}^{d\times C}\) where \(C\) is the number of classes. Alg. 1 shows the feed-forward computation of message passing.
#### 3.2.3. Instantiation for \(p_{0}(\hat{\mathbf{A}}|\mathbf{A},\mathbf{X})\)
The prior distribution reflects our presumption about the latent graph structures without the information of observed labels. In other words, it characterizes how likely a given graph structure is to provide enough potential for feature propagation by GNNs. The prior can be leveraged for regularization of the estimated latent graph \(\hat{\mathbf{A}}\). With this consideration, we choose the prior as an energy function that quantifies the smoothness of the graph:
\[p_{0}(\hat{\mathbf{A}}|\mathbf{X},\mathbf{A})\propto\exp\left(-\alpha\sum_{u,v}\hat{\mathbf{A}}_{uv}\|\mathbf{x}_{u}-\mathbf{x}_{v}\|_{2}^{2}-\rho\|\hat{\mathbf{A}}\|_{F}^{2}\right), \tag{14}\]
where \(\|\cdot\|_{F}\) is the Frobenius norm. The first term in (14) measures the smoothness of the latent graph (Bang et al., 2017), under the hypothesis that graphs with smoother features have lower energy (i.e., higher probability). The second term helps avoid overly large node degrees (Gan et al., 2017). The hyperparameters \(\alpha\) and \(\rho\) control the strength of the regularization effects.
While we can retrieve the latent graph via \(\hat{\mathbf{A}}=\hat{\mathbf{B}}_{1}\hat{\mathbf{B}}_{2}\), the computation of (14) still requires \(O(N^{2})\) cost. To reduce the overhead, we apply the regularization to the \(P\times P\) pivot-pivot adjacency matrix \(\hat{\mathbf{E}}=\hat{\mathbf{B}}_{2}\hat{\mathbf{B}}_{1}\) as a proxy:
\[\begin{split}\mathcal{R}(\hat{\mathbf{E}})&=\log p_{0}(\hat{\mathbf{A}}|\mathbf{X},\mathbf{A})\\ &\approx-\alpha\sum_{p,q}\hat{\mathbf{E}}_{pq}\|\mathbf{x}_{p}^{\prime}-\mathbf{x}_{q}^{\prime}\|_{2}^{2}-\rho\|\hat{\mathbf{E}}\|_{F}^{2}, \end{split} \tag{15}\]
where \(\mathbf{x}_{p}^{\prime}\) denotes the input feature of the \(p\)-th pivot node.
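The proxy reward of (15) is a few tensor operations; a sketch with our own (placeholder) hyper-parameter values:

```python
import torch

def prior_reward(E_hat, X_pivot, alpha=1.0, rho=0.1):
    """The proxy reward R(E_hat) of (15) on the P x P pivot-pivot adjacency:
    edges between dissimilar pivots and overly dense graphs are penalized.
    The values of alpha and rho here are arbitrary placeholders."""
    d2 = torch.cdist(X_pivot, X_pivot) ** 2    # pairwise ||x'_p - x'_q||^2
    return -alpha * (E_hat * d2).sum() - rho * (E_hat ** 2).sum()

print(prior_reward(torch.rand(20, 20), torch.randn(20, 16)))
```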
### Model Training
For optimization with (7), we proceed to derive the loss functions and updating gradients for \(\theta\) and \(w_{m}\) based on the three terms \(\mathbb{E}_{q_{\theta}}\left[\log p_{w_{m}}\right]\), \(\mathbb{E}_{q_{\theta}}\left[\log p_{0}\right]\) and \(\mathbb{E}_{q_{\theta}}\left[\log q_{\theta}\right]\).
#### 3.3.1. Optimization for \(\mathbb{E}_{q_{\theta}}\left[\log p_{w_{m}}\right]\)
The optimization difficulty stems from the expectation over \(q_{\theta}\), where the sampling process is non-differentiable and hinders back-propagation. Common strategies for approximating the sampling of discrete random variables include the Gumbel-Softmax trick (Gumbel and Softmax, 1998) and the REINFORCE trick (Srivastava et al., 2017). However, both strategies yield a sparse graph structure at each sampling, which could lead to high variance in the prediction result \(\log p_{w_{m}}(\mathbf{Y}|\mathbf{A},\mathbf{X},\hat{\mathbf{A}})\) produced by message passing over a sampled graph. To mitigate the issue, we alternatively adopt the Normalized Weighted Geometric Mean (NWGM) (Srivastava et al., 2017) to move the outer expectation to the feature level. Specifically, we have (see Appendix A for detailed derivations)
\[\begin{split}&\nabla_{\theta}\mathbb{E}_{q_{\theta}(\hat{\mathbf{A}}|\mathbf{A},\mathbf{X})}\left[\log p_{w_{m}}(\mathbf{Y}|\mathbf{A},\mathbf{X},\hat{\mathbf{A}})\right]\\ &\approx\nabla_{\theta}\log p_{w_{m}}(\mathbf{Y}|\mathbf{A},\mathbf{X},\hat{\mathbf{A}}=\mathbb{E}_{q_{\theta}(\hat{\mathbf{A}}|\mathbf{A},\mathbf{X})}[\hat{\mathbf{A}}]).\end{split} \tag{16}\]
We denote the opposite of the above term as \(\nabla_{\theta}\mathcal{L}_{s}(\theta)\). The gradient w.r.t. \(w_{m}\) can be similarly derived. The above form is a biased estimate of the original objective, yet it reduces the variance from sampling and also improves training efficiency (without the need for message passing over multiple sampled graphs). Equation (16) induces the supervised cross-entropy loss.
#### 3.3.2. Optimization for \(\mathbb{E}_{q_{\theta}}\left[\log p_{0}\right]\)
As for the second term in (7), we adopt the REINFORCE trick, i.e., policy gradient, to tackle the non-differentiability of sampling from \(q_{\theta}\). Specifically, for each feed-forward computation, we sample from the Bernoulli distribution for each edge given by the estimated node-pivot similarity matrix, i.e., \(Bernoulli(\alpha_{up})\), obtain the sampled latent bipartite graph \(\hat{\mathbf{B}}_{1}\), and subsequently have \(\hat{\mathbf{E}}=\hat{\mathbf{B}}_{2}\hat{\mathbf{B}}_{1}=\hat{\mathbf{B}}_{1}^{\top}\hat{\mathbf{B}}_{1}\). The probability of the latent structure can be computed by

\[\pi_{\theta}(\hat{\mathbf{E}})=\prod_{u,p}\left(\hat{\mathbf{B}}_{1,up}\alpha_{up}+(1-\hat{\mathbf{B}}_{1,up})\cdot(1-\alpha_{up})\right).\]
Figure 3. Illustration of scalable structure-learning message passing, which reduces the algorithmic complexity from \(O(N^{2})\) to \(O(NP)\): we choose \(P\) nodes as pivots and convert the \(N\times N\) matrix into the product of two \(N\times P\) node-pivot matrices, with message passing executed in two steps, i.e., node-to-pivot and pivot-to-node.

Denoting \(\hat{\mathbf{E}}_{k}\) as the result of the \(k\)-th sampling, we independently sample \(K\) times and obtain \(\{\hat{\mathbf{E}}_{k}\}_{k=1}^{K}\) and \(\{\pi_{\theta}(\hat{\mathbf{E}}_{k})\}_{k=1}^{K}\). Recall that the regularization reward from \(\log p_{0}\) has been given by (14). The policy gradient (Srivastava et al., 2017) yields the gradient of the loss for \(\theta\) as
\[\begin{split}\nabla_{\theta}\mathcal{L}_{r}(\theta)&=-\nabla_{\theta}\mathbb{E}_{\hat{\mathbf{A}}\sim q(\hat{\mathbf{A}}|\mathbf{X},\mathbf{A})}\left[\log p_{0}(\hat{\mathbf{A}}|\mathbf{X},\mathbf{A})\right]\\ &\approx-\nabla_{\theta}\frac{1}{K}\sum_{k=1}^{K}\log\pi_{\theta}(\hat{\mathbf{E}}_{k})\left(\mathcal{R}(\hat{\mathbf{E}}_{k})-\overline{\mathcal{R}}\right),\end{split} \tag{17}\]
where \(\overline{\mathcal{R}}\) acts as a baseline function by averaging the regularization rewards \(\mathcal{R}(\hat{\mathbf{E}}_{k})\) in one feed-forward computation, which helps to reduce the variance during policy gradient training (Srivastava et al., 2017).
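A PyTorch sketch of this policy-gradient update: we sample \(K\) bipartite graphs, score the pivot-pivot proxy \(\hat{\mathbf{B}}_{1}^{\top}\hat{\mathbf{B}}_{1}\) with a reward function such as \(\mathcal{R}(\cdot)\) in (15), and weight each sample's log-probability \(\pi_{\theta}\) by its mean-subtracted reward (names and the toy reward below are ours):

```python
import torch

def reinforce_loss(alpha, reward_fn, K=4):
    """Policy-gradient surrogate for (17): alpha is the N x P matrix of
    Bernoulli parameters from Eq. (9); the baseline is the mean reward."""
    logps, rewards = [], []
    for _ in range(K):
        B1 = torch.bernoulli(alpha)                # non-differentiable sample
        logp = (B1 * torch.log(alpha + 1e-8)
                + (1 - B1) * torch.log(1 - alpha + 1e-8)).sum()
        logps.append(logp)
        rewards.append(reward_fn(B1.t() @ B1))     # score pivot-pivot proxy
    rewards = torch.stack(rewards)
    advantages = rewards - rewards.mean()          # subtract baseline R_bar
    return -(torch.stack(logps) * advantages.detach()).mean()

# Toy run: a reward that merely discourages dense latent graphs.
alpha = torch.rand(100, 20, requires_grad=True)
loss = reinforce_loss(alpha, lambda E: -(E ** 2).sum())
loss.backward()                                    # gradients reach alpha
```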
#### 3.3.3. Optimization with \(\mathbb{E}_{q_{\theta}}\left[\log q_{\theta}\right]\)
The last entropy term for \(q_{\theta}\) could be directly computed by
\[\begin{split}\mathcal{L}_{e}(\theta)&=\mathbb{E}_{\hat{\mathbf{A}}\sim q(\hat{\mathbf{A}}|\mathbf{X},\mathbf{A})}\left[\log q(\hat{\mathbf{A}}|\mathbf{X},\mathbf{A})\right]\\ &\approx\frac{1}{NP}\sum_{u=1}^{N}\sum_{p=1}^{P}\left[\alpha_{up}\log\alpha_{up}+(1-\alpha_{up})\log(1-\alpha_{up})\right],\end{split} \tag{18}\]
where we again adopt the node-pivot similarity matrix as a proxy for the estimated latent graph.
#### 3.3.4. Iterative Structure Learning for Acceleration
A straightforward way is to consider one round of structure inference and one round of GNN message passing for prediction in each feed-forward computation. To enable structure learning and GNN learning to mutually reinforce each other (Bengio et al., 2017), we consider multiple iterative updates of graph structures and node representations before each back-propagation. More specifically, in each epoch, we repeatedly update the node representations \(\mathbf{Z}^{t}\) (where the superscript \(t\) denotes the \(t\)-th iteration) and the latent graph \(\hat{\mathbf{A}}^{t}\) until a given maximum budget is reached. To accelerate training, we aggregate the losses \(\mathcal{L}^{t}\) of each iteration step for parameter updating. As different graphs have different feature spaces, we utilize the first layer of the GNN as an encoder at the very beginning and then feed the encoded representations to the structure learner. The training algorithm for the structure learner \(g_{\theta}\) on source graphs is described in Alg. 2 (in the appendix), where we train the structure learner for multiple episodes and, in each episode, train \(g_{\theta}\) on each source graph for several epochs. In testing, the well-trained \(g_{\theta}\) is fixed and we train a GNN \(h_{w}\) on the target graph with latent structures inferred by \(g_{\theta}\), as described in Alg. 3.
## 4. Related Works
**Graph Neural Networks.** Graph neural networks (GNNs) (Garon et al., 2017; Ganin et al., 2017; Ganin et al., 2017; Ganin et al., 2017; Ganin et al., 2017; Ganin et al., 2017) have achieved impressive performances in modeling graph-structured data. Nonetheless, there is increasing evidence suggesting GNNs' deficiency for graph structures that are inconsistent with the principle of message passing. One typical situation lies in non-homophilous graphs (Srivastava et al., 2017), where adjacent nodes tend to have dissimilar features/labels. Recent studies devise adaptive feature propagation/aggregation to tackle the heterophily (Ganin et al., 2017; Ganin et al., 2017; Ganin et al., 2017; Ganin et al., 2017). Another situation stems from graphs with noisy or spurious links, for which several works propose to purify the observed structures for more robust node representations (Ganin et al., 2017; Ganin et al., 2017). Our work is related to these works in that it searches for adaptive graph structures suitable for GNN message passing. Yet, the key difference is that our method targets learning a new graph beyond the scope of the input one, while the above works focus on message passing within the input graph.
**Graph Structure Learning.** To effectively address the limitations of GNNs' feature propagation within observed structures, many recent works attempt to jointly learn graph structures and the GNN model. For instance, (Ganin et al., 2017) models each edge as a Bernoulli random variable and optimizes graph structures along with the GCN. To exploit enough information from the observed structure for structure learning, (Ganin et al., 2017) proposes a metric learning approach based on the RBF kernel to compute edge probabilities with node representations, while (Ganin et al., 2017) adopts an attention mechanism to achieve the similar goal. Furthermore, (Bengio et al., 2017) considers an iterative method that enables mutual reinforcement between learning graph structures and node embeddings. Also, (Ganin et al., 2017) presents a probabilistic framework that views the input graph as a random sample from a collection modeled by a parametric random graph model. (Ganin et al., 2017; Ganin et al., 2017) harness variational inference to estimate a posterior of graph structures and GNN parameters. While learning graph structures often requires \(O(N^{2})\) complexity, a recent work (Ganin et al., 2017) proposes an efficient Transformer that achieves latent structure learning in each layer with \(O(N)\) complexity. However, though these methods have shown promising results, they assume training nodes and testing nodes are from the same graph and consider only one graph. By contrast, we consider graph structure learning under the cross-graph setting and propose a general framework to learn a shared structure learner which can generalize to target graphs without any re-training.
**Out-of-Distribution Generalization on Graphs.** Due to the demand for handling testing data in the wild, improving the capability of neural networks to perform satisfactorily on out-of-distribution data has received increasing attention (Ganin et al., 2017). Recent studies, e.g., (Ganin et al., 2017), explore effective treatments for tackling general distribution shifts on graphs, and there are also works focusing on particular categories of distribution shifts such as size generalization (Ganin et al., 2017), molecular scaffold generalization (Ganin et al., 2017), feature/attribute shifts (Ganin et al., 2017), topological shifts (Ganin et al., 2017), etc. To the best of our knowledge, there are no prior works considering OOD generalization in the context of graph structure learning. In our case, the target graph, where the structure learner is expected to yield adaptive structures, can have a distribution disparate from those of the source graphs. The distribution shifts could potentially stem from feature/label spaces, graph sizes or domains (e.g., from social networks to citation networks). As the first attempt along this path, our work fills this research gap and enables graph structure learning models to deal with new unseen graphs in an open world.
## 5. Experiments
We apply GraphGLOW to real-world datasets for node classification to test the efficacy of the proposed structure learner in boosting the performance of GNN learning on target graphs with distribution shifts from source graphs. We specify the backbone GNN network of GraphGLOW as a two-layer GCN (Ganin et al., 2017). We focus on the following research questions:
* **1)** How does GraphGLOW perform compared with directly training GNN models on the input structures of target graphs?
* **2)** How does GraphGLOW perform compared to state-of-the-art structure learning models that are directly trained on target datasets, in terms of both accuracy and training time?
* **3)** Are the proposed components of GraphGLOW effective and necessary for the achieved performance?
* **4)** What is the impact of hyper-parameters on performance, and what is the impact of attacks on observed edges?
* **5)** What are the properties of the inferred latent graphs, and what generalizable patterns does the structure learner capture?
### Experimental Protocols
**Datasets.** Our experiments are conducted on several public graph datasets. First we consider three commonly used citation networks Cora, CiteSeer and PubMed. We use the same splits as in (Zhou et al., 2017). These three datasets have high homophily ratios (i.e., adjacent nodes tend to have similar labels) (Zhou et al., 2017). Apart from this, we also consider four social networks from Facebook-100 (Zhou et al., 2017), which have low homophily ratios. Readers may refer to Appendix B for more dataset information like splitting ratios.
**Competitors.** We mainly compare with GCN (Kipf and Welling, 2015), the GNN counterpart trained on the input structure, to test the efficacy of the latent graphs produced by GraphGLOW. As further investigation, we also compare with other advanced GNN models: GraphSAGE (Kipf and Welling, 2015), GAT (Zhou et al., 2017), APPNP (Kipf and Welling, 2015), H\({}_{2}\)GCN (Zhou et al., 2017) and GPRGNN (Gupta et al., 2017). Here APPNP, H\({}_{2}\)GCN and GPRGNN are all strong GNN models equipped with adaptive feature propagation and high-order aggregation. For these pure GNN models, training and testing are conducted on (the same) target graphs. Furthermore, we compare GraphGLOW with state-of-the-art graph structure learning models, LDS (Kipf and Welling, 2015), IDGL (Chen et al., 2016) and VGCN (Chen et al., 2016). Since these models are all designed for training on one dataset from scratch, we directly train them on the target graph, so they in principle could yield better performance than GraphGLOW.
We also consider variants of GraphGLOW as baselines. We replace the similarity function \(s\) with an attention-based structure learner, denoted as GraphGLOW\({}_{\text{at}}\), which follows the same training scheme as GraphGLOW. Besides, we consider some non-parametric similarity functions such as dot-product, KNN and cosine distance (denoted as GraphGLOW\({}_{\text{dp}}\), GraphGLOW\({}_{\text{knn}}\) and GraphGLOW\({}_{\text{cos}}\), respectively). For these models, we only need to train the GNN network on target graphs, with the non-parametric structure learners yielding latent structures. In addition, we introduce a variant GraphGLOW\({}^{*}\) that shares the same architecture as GraphGLOW and is directly trained on target graphs. GraphGLOW\({}^{*}\) could also, in principle, produce superior results to GraphGLOW. We report the test accuracy given by the model that produces the highest validation accuracy within 500 training epochs.
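For concreteness, minimal sketches of the three non-parametric structure learners follow (the threshold and \(k\) values are illustrative assumptions, not tuned settings):

```
import torch
import torch.nn.functional as F

def dot_product_graph(Z):
    """GraphGLOW_dp: dense non-negative similarity graph."""
    return torch.relu(Z @ Z.t())

def cosine_graph(Z, thresh=0.5):
    """GraphGLOW_cos: keep edges whose cosine similarity exceeds a threshold."""
    Zn = F.normalize(Z, p=2, dim=1)
    S = Zn @ Zn.t()
    return S * (S > thresh).float()

def knn_graph(Z, k=10):
    """GraphGLOW_knn: connect each node to its k most similar neighbors."""
    Zn = F.normalize(Z, p=2, dim=1)
    S = Zn @ Zn.t()
    topk = S.topk(k, dim=1).indices
    A = torch.zeros_like(S).scatter_(1, topk, 1.0)
    return ((A + A.t()) > 0).float()    # symmetrize the kNN graph
```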
### In-domain Generalization
We first consider transferring within social networks or within citation networks. The results are reported in Table 1, where for each social network (resp. citation network) as the target, we use the other social networks (resp. citation networks) as the source datasets. GraphGLOW performs consistently better than GCN, i.e., the counterpart using the observed graph for message passing, which shows that GraphGLOW can capture generalizable patterns of desirable message-passing structures for unseen datasets that indeed boost the GCN backbone's performance on downstream tasks. In particular, the improvement over GCN is over 5% on Cornell5 and Reed98, two datasets with low homophily ratios (as shown in Table 3). The reason is that for non-homophilous graphs, where message passing may propagate inconsistent signals (as mentioned in Section 1), GNN learning can benefit more from structure learning than on homophilous graphs. Furthermore, compared to other strong GNN models, GraphGLOW still achieves a slight improvement over the best competitors even though the backbone GCN network is less expressive. One could expect further performance gains from GraphGLOW with other, more advanced architectures as the GNN backbone.
GraphGLOW outperforms the non-parametric structure learning models and GraphGLOW\({}_{\text{at}}\) by a large margin in all cases, which verifies the superiority of our multi-head weighted similarity function, which can accommodate multi-faceted, diverse structural information. Compared with GraphGLOW\({}^{*}\), GraphGLOW performs on par with and even exceeds it on
\begin{table}
\begin{tabular}{|c|l c c c c c c c|} \hline
**Type** & **Method** & **Cornell5** & **Johns.55** & **Amherst41** & **Reed98** & **Cora** & **CiteSeer** & **PubMed** \\ \hline \multirow{8}{*}{**Pure**} & GCN & 68.6 \(\pm\) 0.5 & 70.8 \(\pm\) 1.0 & 65.8 \(\pm\) 1.6 & 60.8 \(\pm\) 1.6 & 81.6 \(\pm\) 0.4 & 71.6 \(\pm\) 0.3 & 78.8 \(\pm\) 0.6 \\ & SAGE & 68.7 \(\pm\) 0.8 & 67.5 \(\pm\) 0.9 & 66.3 \(\pm\) 1.8 & 63.9 \(\pm\) 1.9 & 81.4 \(\pm\) 0.6 & 71.6 \(\pm\) 0.5 & 78.6 \(\pm\) 0.7 \\ & GAT & 69.6 \(\pm\) 1.2 & 69.4 \(\pm\) 0.7 & 68.7 \(\pm\) 2.1 & 64.5 \(\pm\) 2.5 & 83.0 \(\pm\) 0.7 & 72.1 \(\pm\) 1.1 & 79.0 \(\pm\) 0.4 \\ & GPR & 68.8 \(\pm\) 0.7 & 69.6 \(\pm\) 1.3 & 66.2 \(\pm\) 1.5 & 62.7 \(\pm\) 2.0 & 83.1 \(\pm\) 0.7 & 72.4 \(\pm\) 0.8 & 79.6 \(\pm\) 0.5 \\ & APPNP & 68.5 \(\pm\) 0.8 & 69.1 \(\pm\) 1.4 & 65.9 \(\pm\) 1.3 & 62.3 \(\pm\) 1.5 & 82.7 \(\pm\) 0.5 & 71.9 \(\pm\) 0.5 & 79.2 \(\pm\) 0.3 \\ & H\({}_{2}\)GCN & 71.4 \(\pm\) 0.5 & 68.3 \(\pm\) 1.0 & 66.5 \(\pm\) 2.2 & 65.4 \(\pm\) 1.3 & 82.5 \(\pm\) 0.8 & 71.4 \(\pm\) 0.7 & 79.4 \(\pm\) 0.4 \\ & CPGNN & 71.1 \(\pm\) 0.5 & 68.7 \(\pm\) 1.3 & 66.7 \(\pm\) 0.8 & 63.6 \(\pm\) 1.8 & 80.8 \(\pm\) 0.4 & 71.6 \(\pm\) 0.4 & 78.5 \(\pm\) 0.7 \\ \hline \multirow{8}{*}{**Graph**} & GraphGLOW\({}_{\text{dp}}\) & 71.5 \(\pm\) 0.7 & 71.3 \(\pm\) 1.2 & 68.5 \(\pm\) 1.6 & 63.2 \(\pm\) 1.2 & 83.1 \(\pm\) 0.8 & 71.7 \(\pm\) 1.0 & 77.3 \(\pm\) 0.8 \\ & GraphGLOW\({}_{\text{knn}}\) & 69.4 \(\pm\) 0.8 & 71.0 \(\pm\) 1.3 & 64.8 \(\pm\) 1.2 & 63.6 \(\pm\) 1.6 & 81.7 \(\pm\) 0.8 & 71.5 \(\pm\) 0.8 & 79.4 \(\pm\) 0.6 \\ \cline{1-1} & GraphGLOW\({}_{\text{cos}}\) & 69.9 \(\pm\) 0.7 & 70.8 \(\pm\) 1.4 & 65.2 \(\pm\) 1.8 & 62.7 \(\pm\) 1.3 & 82.0 \(\pm\) 0.7 & 71.9 \(\pm\) 0.9 & 78.7 \(\pm\) 0.8 \\ \cline{1-1} & GraphGLOW\({}_{\text{at}}\) & 69.3 \(\pm\) 0.8 & 70.9 \(\pm\) 1.3 & 65.0 \(\pm\) 1.3 & 65.0 \(\pm\) 1.7 & 81.9 \(\pm\) 0.9 & 71.3 \(\pm\) 0.7 & 78.8 \(\pm\) 0.6 \\ \cline{1-1} & GraphGLOW & **71.8 \(\pm\) 0.9** & 71.5 \(\pm\) 0.8 & **70.6 \(\pm\) 1.4** & **66.8 \(\pm\) 1.1** & **83.5 \(\pm\) 0.6** & 73.6 \(\pm\) 0.6 & 79.8 \(\pm\) 0.8 \\ \cline{1-1} & GraphGLOW\({}^{*}\) & 71.1 \(\pm\) 0.3 & **72.2 \(\pm\) 0.5** & 70.3 \(\pm\) 0.9 & **66.8 \(\pm\) 1.4** & **83.5 \(\pm\) 0.6** & **73.9 \(\pm\) 0.7** & **79.9 \(\pm\) 0.5** \\ \hline \end{tabular}
\end{table}
Table 1. Test accuracy (%) on target graphs for in-domain generalizations. For each social network (resp. citation network) as target dataset, we consider the other social networks (resp. citation networks) as source graphs. GraphGLOW\({}^{*}\) is an oracle model that shares the same architecture as our model GraphGLOW and is directly trained on target graphs.
Cornell5 and Amherst41. The possible reasons are two-fold. First, there exist sufficient shared patterns among citation networks (resp. social networks), which paves the way for successful generalization of GraphGLOW. Second, GraphGLOW\({}^{*}\) could sometimes overfit specific datasets, since the number of free parameters is regularly orders of magnitude larger than the number of labeled nodes in the dataset. The results also imply that our transfer learning approach can help to mitigate over-fitting on one dataset. Moreover, GraphGLOW can generalize the structure learner to an unseen graph that is nearly three times larger than the training graphs, i.e., Cornell5.
### Cross-domain Generalization
We next consider a more difficult task, transferring between social networks and citation networks. The difficulty stems from two aspects: 1) social networks and citation graphs are from distinct categories and thus have larger underlying data-generating distribution gaps; 2) they have varied homophily ratios, which indicates that the observed edges play different roles in the original graphs. In Table 2 we report the results. Despite the task difficulty, GraphGLOW manages to achieve superior results to GCN and also outperforms the other non-parametric graph structure learning methods in all cases. This suggests GraphGLOW's ability to handle target graphs with distinct properties.
In Fig. 4 we further compare GraphGLOW with three state-of-the-art graph structure learning models that are directly trained on target graphs. Here we follow the setting in Table 2. The results show that even when trained on source graphs different from the target one, GraphGLOW still performs on par with the competitors that are trained and tested on (the same) target graphs. Notably, GraphGLOW significantly reduces training time. For instance, on Johns Hopkins55, GraphGLOW is 6x, 9x and 40x faster than IDGL, LDS and VGCN, respectively. This shows a clear advantage of GraphGLOW in terms of training efficiency and also verifies that our model indeed helps to reduce the significant training-time cost of structure learning on target graphs.
### Ablation Studies
We conduct ablation studies to test the effectiveness of the iterative learning scheme and the regularization on graph structures.
**Effect of Iterative Learning.** We replace the iterative learning process with a one-step prediction (i.e., a single structure estimation and node representation update in one feed-forward computation) and compare its test accuracy with GraphGLOW. The results are shown in Fig. 5(a), where we follow the setting of Table 1. The non-iterative version exhibits a considerable drop in accuracy (as large as 5.4% and 8.8% when tested on target graphs Cornell5
\begin{table}
\begin{tabular}{|c|l c c c c c c c|} \hline
**Type** & **Method** & **Cornell5** & **Johns.55** & **Amherst41** & **Reed98** & **Cora** & **CiteSeer** & **PubMed** \\ \hline \multirow{8}{*}{**GNN**} & GCN & 68.6 \(\pm\) 0.5 & 70.8 \(\pm\) 1.0 & 65.8 \(\pm\) 1.6 & 60.8 \(\pm\) 1.6 & 81.6 \(\pm\) 0.4 & 71.6 \(\pm\) 0.3 & 78.8 \(\pm\) 0.6 \\ & SAGE & 68.7 \(\pm\) 0.8 & 67.5 \(\pm\) 0.9 & 66.3 \(\pm\) 1.8 & 63.9 \(\pm\) 1.9 & 81.4 \(\pm\) 0.6 & 71.6 \(\pm\) 0.5 & 78.6 \(\pm\) 0.7 \\ & GAT & 69.6 \(\pm\) 1.2 & 69.4 \(\pm\) 0.7 & 68.7 \(\pm\) 2.1 & 64.5 \(\pm\) 2.5 & 83.0 \(\pm\) 0.7 & 72.1 \(\pm\) 1.1 & 79.0 \(\pm\) 0.4 \\ & GPR & 68.8 \(\pm\) 0.7 & 69.6 \(\pm\) 1.3 & 66.2 \(\pm\) 1.5 & 62.7 \(\pm\) 2.0 & 83.1 \(\pm\) 0.7 & 72.4 \(\pm\) 0.8 & 79.6 \(\pm\) 0.5 \\ & APPNP & 68.5 \(\pm\) 0.8 & 69.1 \(\pm\) 1.4 & 65.9 \(\pm\) 1.3 & 62.3 \(\pm\) 1.5 & 82.7 \(\pm\) 0.5 & 71.9 \(\pm\) 0.5 & 79.2 \(\pm\) 0.3 \\ & H\({}_{2}\)GCN & 71.4 \(\pm\) 0.5 & 68.3 \(\pm\) 1.0 & 66.5 \(\pm\) 2.2 & 65.4 \(\pm\) 1.3 & 82.5 \(\pm\) 0.8 & 71.4 \(\pm\) 0.7 & 79.4 \(\pm\) 0.4 \\ & CPGNN & 71.1 \(\pm\) 0.5 & 68.7 \(\pm\) 1.3 & 66.7 \(\pm\) 0.8 & 63.6 \(\pm\) 1.8 & 80.8 \(\pm\) 0.4 & 71.6 \(\pm\) 0.4 & 78.5 \(\pm\) 0.7 \\ \hline \multirow{4}{*}{**Graph Structure Learning**} & GraphGLOW\({}_{\text{dp}}\) & 71.5 \(\pm\) 0.7 & 71.3 \(\pm\) 1.2 & 68.5 \(\pm\) 1.6 & 63.2 \(\pm\) 1.2 & 83.1 \(\pm\) 0.8 & 71.7 \(\pm\) 1.0 & 77.3 \(\pm\) 0.8 \\ & GraphGLOW\({}_{\text{knn}}\) & 69.4 \(\pm\) 0.8 & 71.0 \(\pm\) 1.3 & 64.8 \(\pm\) 1.2 & 63.6 \(\pm\) 1.6 & 81.7 \(\pm\) 0.8 & 71.5 \(\pm\) 0.8 & 79.4 \(\pm\) 0.6 \\ & GraphGLOW\({}_{\text{cos}}\) & 69.9 \(\pm\) 0.7 & 70.8 \(\pm\) 1.4 & 65.2 \(\pm\) 1.8 & 62.7 \(\pm\) 1.3 & 82.0 \(\pm\) 0.7 & 71.9 \(\pm\) 0.9 & 78.7 \(\pm\) 0.8 \\ & GraphGLOW\({}_{\text{at}}\) & 69.9 \(\pm\) 1.0 & 70.4 \(\pm\) 1.5 & 64.4 \(\pm\) 1.2 & 65.0 \(\pm\) 1.7 & 82.5 \(\pm\) 0.9 & 71.8 \(\pm\) 0.8 & 78.5 \(\pm\) 0.7 \\ & GraphGLOW & **72.0 \(\pm\) 1.0** & 71.8 \(\pm\) 0.7 & 69.8 \(\pm\) 1.3 & **67.3 \(\pm\) 1.2** & 83.2 \(\pm\) 0.4 & 73.8 \(\pm\) 0.9 & 79.6 \(\pm\) 0.7 \\ & GraphGLOW\({}^{*}\) & 71.1 \(\pm\) 0.3 & **72.2 \(\pm\) 0.5** & **70.3 \(\pm\) 0.9** & 66.8 \(\pm\) 1.4 & **83.5 \(\pm\) 0.6** & **73.9 \(\pm\) 0.7** & **79.9 \(\pm\) 0.5** \\ \hline \end{tabular}
\end{table}
Table 2. Test accuracy (%) on target graphs for cross-domain generalizations. For each social network (resp. citation network) as target dataset, we consider citation networks (resp. social networks) as source graphs.
Figure 4. Comparison of test accuracy and training time with SOTA structure learning models (LDS (10), IDGL (6) and VGCN (8)). The radius of each circle is proportional to the standard deviation. The experiments are run on one Tesla V4 GPU with 16GB memory. We adopt the same setting as Table 2 and report the results on target datasets. For Cornell5 and PubMed, the competitor models run out of memory.
and Amherst41, respectively). Therefore, the iterative updates indeed help to learn better graph structures and node embeddings, contributing to higher accuracy for downstream prediction.
**Effect of Regularization on Structures.** We remove the regularization on structures (i.e., setting \(\alpha=\rho=0\)) and compare with GraphGLOW. As shown in Fig. 5(a), this causes more or less performance degradation in all cases. In fact, the regularization loss derived from the prior distribution over latent structures helps to provide guidance for structure learning, especially when labeled information is limited.
### Hyper-parameter Sensitivity
In Fig. 7 (in the appendix), we study the variation of the model's performance w.r.t. \(\lambda\) (the weight on input graphs) and \(P\) (the number of pivots) on the target datasets Cora and CiteSeer. Overall, the model is not sensitive to \(\lambda\). For Cora, a larger \(\lambda\) contributes to higher accuracy, while for CiteSeer, a smaller \(\lambda\) yields better performance. The possible reason is that the initial graph of Cora is more suitable for message passing (due to its higher homophily ratio). For the impact of the pivot number, as shown in Fig. 7(b), a moderate value of \(P\) provides decent downstream performance.
### Robustness Analysis
In addition, we find that GraphGLOW is more immune to edge deletion attacks than GCN. We randomly remove 10-50% of the edges of target graphs and then apply GraphGLOW and GCN. We present the results on Johns Hopkins55 in Fig. 5(b) and leave more results to Appendix D. When the drop ratio increases, the performance gap between the two models becomes more significant. This is due to our structure learner's ability to learn new graph structures from node embeddings, making it less reliant on the initial graph structure and more robust to attacks on input edges.
### Case Study
We further probe into why our approach is effective for node classification by dissecting the learnt graph structures. Specifically, we measure the homophily ratios of the learnt structures and the variance of the neighborhood distributions of nodes with the same labels. As nodes receive messages from neighbors in message passing, the more similar the neighborhood patterns of nodes within one class are, the easier it is for GNNs to correctly classify them (Zhu et al., 2017). We use the homophily metric proposed in (Zhu et al., 2017) to measure homophily ratios. For the variance of neighborhood distributions, we first calculate the variance for each class and then take a weighted sum to obtain the final variance, where the weight is proportional to the number of nodes in the corresponding class.
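A sketch of these two diagnostics is given below; the edge-level homophily and the label-weighted variance follow common usage and may differ in minor details from the cited definitions:

```
import torch

def edge_homophily(edge_index, labels):
    """Fraction of edges connecting same-label endpoints."""
    src, dst = edge_index                      # shape [2, num_edges]
    return (labels[src] == labels[dst]).float().mean().item()

def neighborhood_variance(A, labels, num_classes):
    """Label-weighted variance of per-class neighborhood label distributions."""
    onehot = torch.eye(num_classes)[labels]
    dist = A @ onehot                          # neighborhood label counts
    dist = dist / dist.sum(dim=1, keepdim=True).clamp(min=1e-9)
    total, n = 0.0, labels.numel()
    for c in range(num_classes):
        idx = labels == c
        if idx.sum() > 1:                      # variance needs >= 2 samples
            total += dist[idx].var(dim=0).sum().item() * idx.sum().item() / n
    return total
```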
**Homophily Ratio.** We choose Amherst41, Johns Hopkins55 and Reed98 as target graphs, and record the homophily ratios of the inferred latent structures every five epochs during training. As shown in Fig. 6(a), the homophily ratios of the inferred latent graphs exhibit a clear increase as training proceeds, and the final ratio is considerably larger than that of the input graph. The results indicate that the trained structure learner inclines to output more homophilous latent structures, which are reckoned to be more suitable for message passing.
**Neighborhood Distribution Variance.** As shown in Fig. 6(b), the variance of the neighborhood distribution of nodes with the same label is significantly smaller in our learnt structure, making it easier to classify nodes through message passing. The results also imply that a high homophily ratio and similar intra-class neighborhood patterns could be two of the underlying transferable patterns of optimal message-passing structures identified by GraphGLOW.
## 6. Conclusion
This paper proposes _Graph Structure Learning Under Cross-Graph Distribution Shift_, a new problem that requires the structure learner to transfer to new target graphs without re-training while handling distribution shifts. We develop a transfer learning framework that guides the structure learner to discover shared knowledge across source datasets with respect to optimal message-passing structures for boosting downstream performance. We also carefully design the model components and training approach in terms of expressiveness, scalability and stability. We devise experiments of various difficulties and demonstrate the efficacy and robustness of our approach. Although our framework is quite general, we believe there are other potential methods that can lead to equally competitive results, which we leave as future work.
###### Acknowledgements.
The work was supported in part by National Key Research and Development Program of China (2020AAA0107600), NSFC (62222607), Science and Technology Commission of Shanghai Municipality (22511105100), and Shanghai Municipal Science and Technology Major Project (2021SHZDZX0102).
Figure 5. (a) Ablation study for GraphGLOW. (b) Performance comparison of GraphGLOW and GCN w.r.t. randomly removing certain ratios of edges in Johns Hopkins55.
Figure 6. (a) The curves of homophily ratios for latent structures during the learning process. (b) The variance of neighborhood distribution of nodes with the same label in original graphs and learnt structure. |
2310.07492 | Boosting Black-box Attack to Deep Neural Networks with Conditional
Diffusion Models | Existing black-box attacks have demonstrated promising potential in creating
adversarial examples (AE) to deceive deep learning models. Most of these
attacks need to handle a vast optimization space and require a large number of
queries, hence exhibiting limited practical impacts in real-world scenarios. In
this paper, we propose a novel black-box attack strategy, Conditional Diffusion
Model Attack (CDMA), to improve the query efficiency of generating AEs under
query-limited situations. The key insight of CDMA is to formulate the task of
AE synthesis as a distribution transformation problem, i.e., benign examples
and their corresponding AEs can be regarded as coming from two distinctive
distributions and can transform from each other with a particular converter.
Unlike the conventional \textit{query-and-optimization} approach, we generate
eligible AEs with direct conditional transform using the aforementioned data
converter, which can significantly reduce the number of queries needed. CDMA
adopts the conditional Denoising Diffusion Probabilistic Model as the
converter, which can learn the transformation from clean samples to AEs, and
ensure the smooth development of perturbed noise resistant to various defense
strategies. We demonstrate the effectiveness and efficiency of CDMA by
comparing it with nine state-of-the-art black-box attacks across three
benchmark datasets. On average, CDMA can reduce the query count to a handful of
times; in most cases, the query count is only ONE. We also show that CDMA can
obtain $>99\%$ attack success rate for untarget attacks over all datasets and
targeted attack over CIFAR-10 with the noise budget of $\epsilon=16$. | Renyang Liu, Wei Zhou, Tianwei Zhang, Kangjie Chen, Jun Zhao, Kwok-Yan Lam | 2023-10-11T13:39:11Z | http://arxiv.org/abs/2310.07492v1 | # Boosting Black-box Attack to Deep Neural Networks with Conditional Diffusion Models
###### Abstract
Existing black-box attacks have demonstrated promising potential in creating adversarial examples (AE) to deceive deep learning models. Most of these attacks need to handle a vast optimization space and require a large number of queries, hence exhibiting limited practical impacts in real-world scenarios. In this paper, we propose a novel black-box attack strategy, Conditional Diffusion Model Attack (CDMA), to improve the query efficiency of generating AEs under query-limited situations. The key insight of CDMA is to formulate the task of AE synthesis as a distribution transformation problem, i.e., benign examples and their corresponding AEs can be regarded as coming from two distinctive distributions and can transform from each other with a particular converter. Unlike the conventional _query-and-optimization_ approach, we generate eligible AEs with direct conditional transform using the aforementioned data converter, which can significantly reduce the number of queries needed. CDMA adopts the conditional Denoising Diffusion Probabilistic Model as the converter, which can learn the transformation from clean samples to AEs, and ensure the smooth development of perturbed noise resistant to various defense strategies. We demonstrate the effectiveness and efficiency of CDMA by comparing it with nine state-of-the-art black-box attacks across three benchmark datasets. On average, CDMA can reduce the query count to a handful of times; in most cases, the query count is only ONE. We also show that CDMA can obtain \(>99\%\) attack success rate for untarget attacks over all datasets and targeted attack over CIFAR-10 with the noise budget of \(\epsilon=16\).
Adversarial Example, Adversarial Attack, Black-box Attack, Generative-based Attack, Conditional Diffusion Model.
## I Introduction
In recent years, Deep Learning (DL) has experienced rapid development, and DL models are widely deployed in many real-world applications, such as facial recognition [1], autonomous driving [2], financial services [3], etc. However, existing DL models have been proven to be fragile, in that they can be easily fooled by adding elaborately calculated, imperceptible perturbations to benign inputs, known as adversarial examples (AEs). Therefore, the security of DL models has been attracting more and more attention from researchers.
Typically, adversarial attacks can be classified into two categories based on their settings. The first one is white-box attacks [4, 5], where the attacker has complete information about the victim model, including the model structure, weights, gradients, etc. Such information can assist the attacker in achieving a very high attack success rate. A variety of attack techniques have been proposed to effectively generate AEs under the white-box setting, e.g., FGSM [6], C&W [4], etc.
The second one is black-box attacks [7, 8, 9, 10, 11], which are more practical in the real world. The attacker is not aware of the victim model's internals and has to repeatedly query the victim model with carefully crafted inputs and adjust the perturbations based on the returned soft labels (prediction probabilities) or even hard labels [12, 13]. Many query-efficient and transfer-based attack methods have been proposed recently. However, they suffer from several limitations. First, these methods still need hundreds to thousands of queries to generate one AE [8], especially in the targeted attack setting. This makes the attack costly in terms of computation resources, time and monetary expense, restricting its practicality in real-world scenarios. Besides, more queries markedly increase the risk of being detected [14]. Second, AEs generated in a noise-adding manner are easily identified or denoised, which degrades the attack performance to a large extent [15, 16]. Once the victim model is equipped with some defense mechanism, the attacker needs to consume more model queries to optimize a new AE. Third, the quality of the generated AE highly depends on the similarity between the local surrogate model and the victim model, which normally cannot be guaranteed. This also limits the performance of existing attack methods.
Driven by the above drawbacks, the goal of this paper is to design new hard-label black-box attack approaches that can generate AEs with limited queries in both untarget and targeted settings. This is challenging due to the limited information about the victim model and the restricted query budget. Our observation is that _clean samples and their corresponding AEs follow two adjacent distributions, connected by certain relationships_. This presents an opportunity to _build a converter, which can easily transfer each clean sample to its corresponding AE without complex optimization operations_. Following this hypothesis, we propose CDMA, a novel Conditional Diffusion Model Attack, to efficiently attack black-box DL models. Different from prior attacks using the iterative query-and-optimization strategy, CDMA converts the AE generation task into an image translation task, and adopts a conditional diffusion model (i.e., the converter) to directly
synthesize high-quality AEs. In detail, we first execute the diffusion process to train a conditional diffusion model with pre-collected paired clean-adversarial samples, where the AEs are generated with white-box attack methods on local shadow models. During training, the clean images are used as the condition to guide the diffusion model to generate eligible AEs from a given input. Once the diffusion model is trained, we can execute the reverse process on the clean input to generate the corresponding AEs.
Compared to existing works, CDMA has the following advantages. (1) It significantly improves the attack effectiveness by conditional synthesis instead of query and optimization. (2) CDMA does not rely on the inherent attributes of the target model. It only requires hard labels to verify whether the victim model has been attacked successfully. As a result, the pre-trained diffusion model has a high generalization ability to attack any DL model. (3) Once the diffusion model is well-trained, the attacker can sample sufficient candidate AEs batch-wise, further improving the attack efficiency and scalability. (4) Benefiting from the smooth synthesis process, the generated AEs are difficult to purify and remain highly robust against different defense mechanisms.
We evaluate CDMA on mainstream datasets (CIFAR-10, CIFAR-100 and Tiny-ImageNet-200), and compare it with state-of-the-art black-box attack methods, including pure black-box attacks (soft- and hard-label), query-based and transfer-based attacks. Extensive experimental results demonstrate our superior query efficiency. In all attack settings, CDMA achieves an attack success rate comparable to all baselines but with significantly fewer queries. Besides, AEs generated by CDMA exhibit higher robustness to several mainstream defense strategies. Finally, the empirical results of data-independent and model-independent attacks validate our assumption, i.e., clean and adversarial examples come from two distinct distributions that can be transformed into each other, and the proposed CDMA can learn this transformation relationship well.
To summarize, our main contributions are as follows:
* We model adversarial example generation as a distribution transformation problem with a conditional data converter, enabling efficient black-box attacks.
* We build the data converter with a diffusion model and propose a novel diffusion model-based black-box attack named CDMA, which can directly generate the corresponding AE by conditional sampling on the original clean image, without the complex iterative query-and-optimization process.
* CDMA can generate AEs with high attack ability and robustness. These AEs transfer well to different victim models and datasets.
* We perform extensive experiments to demonstrate the superiority of CDMA over state-of-the-art black-box methods, in terms of query efficiency, attack robustness and effectiveness in both untargeted and targeted settings.
The remainder of this paper is organized as follows: we briefly review the existing literature on adversarial attacks in Sec. II. We define our distribution transformation-based attack and propose the diffusion model-based CDMA method in Sec. III. In Sec. IV, we perform extensive experiments to show that CDMA is more efficient and effective than other baseline attacks in both untarget and targeted situations, and that it maintains high attack performance against different defense strategies. Finally, we conclude this paper in Sec. V.
## II Related Work
Adversarial attacks against deep learning models refer to the process of intentionally manipulating benign inputs to fool well-trained models. Based on the setting, existing attacks can be classified into two categories: in the _white-box_ setting, the attacker knows every detail about the victim model, based on which he creates the corresponding AEs. In the _black-box_ setting, the attacker does not have knowledge of the victim model and is only allowed to query the model for AE generation. In this paper, we focus on the black-box setting, which is more practical but also more challenging.
There are three types of techniques to achieve black-box adversarial attacks. The first one is transfer-based attacks. Papernot et al. [17] proposed the pioneering work towards black-box attacks, which first utilizes Jacobian-based Dataset Augmentation to train a substitute model by iteratively querying the oracle model, and then attacks the oracle using the transferability of AEs generated from the substitute model. TREMBA [18] trains a perturbation generator and traverses over the low-dimensional latent space. ODS [19] optimizes in the logit space to diversify perturbations for the output space. GFCS [20] searches along the direction of surrogate gradients and falls back to ODS. CG-Attack [21] combines a well-trained c-Glow model and CMA-ES to extend attacks. However, these transfer-based attacks heavily rely on the similarity between the substitute model and the oracle model.
The second type is score-based attacks. Ilyas et al. [13] proposed a bandit optimization-based algorithm to integrate priors, such as gradient priors, to reduce the query count and improve the attack success rate. Chen et al. [22] proposed zeroth-order optimization-based attacks (ZOO) to directly estimate the gradients of the target DNN for generating AEs. Although this attack achieves a comparable attack success rate, its coordinate-wise gradient estimation requires excessive evaluations of the target model and is hence not query-efficient. AdvFlow [8] combines a normalizing flow model and NES to search for adversarial perturbations in the latent space, balancing query counts against distortion and accelerating the attack process.
The third type is decision-based attacks, which are specifically designed for the hard-label setting. Boundary attack [12] is the earliest one; it starts from a large adversarial perturbation and then seeks to reduce the magnitude of the perturbation while keeping it adversarial. Bayes_Attack [9] uses Bayesian optimization to find adversarial perturbations in a low-dimensional subspace and maps them back to the original input space to obtain the final perturbation. NPAttack [11] considers the structural information of pixels in an image rather than individual pixels during the attack, with the help of a pre-trained Neural Process model. Rays [7] introduces a Ray Searching Method to reformulate the continuous problem of
finding the nearest decision boundary as a discrete problem that does not require any zeroth-order gradient estimation, which significantly improves upon previous decision-based attacks. Triangle Attack (TA) [10] optimizes the perturbation in the low-frequency space by utilizing geometric information for effective dimensionality reduction.
The above query-and-optimization black-box attacks are inefficient and uneconomical because they require thousands of queries on the target model; the time and computational costs can be considerable. On the other hand, the performance of transfer-based black-box attacks is often limited by the similarity between the surrogate model and the oracle model. Besides, these attacks cannot extend to data-independent or model-independent scenarios or remain robust to different defense strategies, which considerably weakens their attack capability.
Therefore, it is necessary to have a method that can efficiently generate AEs within limited queries and that is effective against different models and datasets. We propose to use the diffusion model to achieve this goal. The diffusion model is an advanced technique for image translation tasks. We can train such a model to convert clean images to AEs against the black-box victim model. Our attack, CDMA, does not require a large number of queries or detailed information about the victim model in the attacking process, and the generated AEs are resistant to most defense strategies.
## III Methodology
### _Problem Definition_
Given a well-trained DNN model \(\mathbf{\mathcal{M}}\) and an input \(\mathbf{x}\) with its corresponding label \(y\), we have \(\mathbf{\mathcal{M}}(\mathbf{x})=y\). The AE \(\mathbf{x}^{adv}\) is a neighbor of \(\mathbf{x}\) that satisfies \(\mathbf{\mathcal{M}}(\mathbf{x}^{adv})\neq y\) and \(\left\|\mathbf{x}^{adv}-\mathbf{x}\right\|_{p}\leq\epsilon\), where the \(L_{p}\) norm is used as the metric function and \(\epsilon\) is a small noise budget. With this definition, generating an AE becomes a constrained optimization problem:
\[\mathbf{x}^{adv}=\underset{\left\|\mathbf{x}^{adv}-\mathbf{x}\right\|_{p}\leq\epsilon}{\arg\max}\ \mathcal{L}\left(\mathbf{\mathcal{M}}(\mathbf{x}^{adv}),y\right), \tag{1}\]
where \(\mathcal{L}\) stands for a loss function that measures the confidence of the model outputs.
Existing attack methods normally utilize information (e.g., the prediction results, model weights, etc.) obtained from the target model to optimize the above loss function. Different from them, in this paper, we convert the AE generation problem into an image-to-image task: an adversarial image \(\mathbf{x}^{adv}\) can be regarded as a particular transformation of its corresponding clean image \(\mathbf{x}\). These two images (\(\mathbf{x}\) and \(\mathbf{x}^{adv}\)) can be mutually transformed into each other by a converter. We choose the diffusion model, a rapidly rising generative model, as our image converter and propose a Conditional Diffusion Model-based Attack framework for synthesizing AEs.
### _Denoising Diffusion Probabilistic Models_
Unlike VAEs or flow models, diffusion models are inspired by non-equilibrium thermodynamics and learn through a fixed process, with a latent space of relatively high dimensionality. A diffusion model first defines a Markov chain of diffusion steps and corrupts the training data by continuously adding Gaussian noise until the data become pure Gaussian noise. It then reverses the process by removing noise and reconstructing the desired data. Once the model is well-trained, it can generate data through the learned denoising process by inputting randomly sampled noise. Here, we briefly review the representative Denoising Diffusion Probabilistic Models (DDPM) [23].
In the forward process (i.e., noise adding), given an image \(x_{0}\sim q(x)\), the diffusion process obtains \(x_{1},x_{2},...,x_{T}\) by adding Gaussian noise \(T\) times. This process can be expressed as a Markov chain:
\[q(x_{t}|x_{t-1})=\mathcal{N}(x_{t};\sqrt{1-\beta_{t}}x_{t-1},\beta_{t}\text{I}),\qquad q(x_{1:T}|x_{0})=\prod_{t=1}^{T}q(x_{t}|x_{t-1})=\prod_{t=1}^{T}\mathcal{N}(x_{t};\sqrt{1-\beta_{t}}x_{t-1},\beta_{t}\text{I}) \tag{2}\]
where \(t\in\{1,2,...,T\}\) and \(\left\{\beta_{t}\in(0,1)\right\}_{t=1}^{T}\) are the variance hyper-parameters of the Gaussian distributions. In this process, \(x_{t}\) tends to pure Gaussian noise as \(t\) increases, and finally becomes standard Gaussian noise \(\mathcal{N}(0,\text{I})\) when \(T\rightarrow\infty\).
Let \(\alpha_{t}:=1-\beta_{t}\) and \(\bar{\alpha}_{t}:=\prod_{i=1}^{t}\alpha_{i}\). Then \(x_{t}\) at an arbitrary step \(t\) can be written in the following closed form:
\[q(x_{t}|x_{0})=\mathcal{N}(x_{t};\sqrt{\bar{\alpha}_{t}}x_{0},(1-\bar{\alpha}_{t})\text{I}),\qquad x_{t}=\sqrt{\bar{\alpha}_{t}}x_{0}+\sqrt{1-\bar{\alpha}_{t}}\delta \tag{3}\]
Fig. 1: Overview of CDMA. \(\mathcal{D}\{X,X^{adv}\}\) is the collected paired clean-adversarial dataset, and \(X^{adv}_{t}\) is the adversarial example \(X^{adv}\) at forward or reverse step \(t\). \(\epsilon\sim\mathcal{N}(0,\text{I})\) is Gaussian noise and \(\mathcal{M}(\cdot)\) is the target victim model.
where \(\delta\sim\mathcal{N}(0,\text{I})\); that is, \(x_{t}\) satisfies \(q(x_{t}|x_{0})=\mathcal{N}(x_{t};\sqrt{\bar{\alpha}_{t}}x_{0},(1-\bar{\alpha}_{t})\text{I})\).
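As a concrete illustration of this closed form, a minimal PyTorch sketch of sampling \(x_{t}\) directly from \(x_{0}\) follows, using the linear schedule and \(T=2000\) reported later in Sec. IV (variable names are ours):

```
import torch

T = 2000
betas = torch.linspace(1e-6, 0.01, T)        # linear noise schedule
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)    # \bar{alpha}_t

def q_sample(x0, t, noise=None):
    """Draw x_t ~ q(x_t | x_0) in one step via Eq. (3)."""
    if noise is None:
        noise = torch.randn_like(x0)
    a_bar = alpha_bars[t].view(-1, 1, 1, 1)   # per-sample \bar{alpha}_t
    return a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise
```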
The reverse process is the denoising of diffusion. If we can gradually obtain the reversed distribution \(q(x_{t-1}|x_{t})\), we can restore the original image \(x_{0}\) from the standard Gaussian distribution \(\mathcal{N}(0,\text{I})\).
As \(q(x_{t}|x_{t-1})\) is a Gaussian distribution and \(\beta_{t}\) is small enough, \(q(x_{t-1}|x_{t})\) is also approximately Gaussian. However, we do not have a simple way to infer \(q(x_{t-1}|x_{t})\). DDPM adopts a deep neural network, typically a U-Net, to predict the mean and covariance of \(x_{t-1}\) given the input \(x_{t}\). In this situation, the reverse process can be written as parameterized Gaussian transitions:
\[p_{\theta}(x_{0:T})=p(x_{T})\prod_{t=1}^{T}p_{\theta}(x_{t-1}|x_{t}),\qquad p_{\theta}(x_{t-1}|x_{t})=\mathcal{N}(x_{t-1};\mu_{\theta}(x_{t},t),\Sigma_{\theta}(x_{t},t)) \tag{4}\]
With Bayes' theorem, DDPM predicts the noise \(\delta_{\theta}(x_{t},t)\) instead and computes \(\mu_{\theta}(x_{t},t)\) as follows:
\[\mu_{\theta}(x_{t},t)=\frac{1}{\sqrt{\alpha_{t}}}\left(x_{t}-\frac{\beta_{t}}{\sqrt{1-\bar{\alpha}_{t}}}\delta_{\theta}(x_{t},t)\right) \tag{5}\]
### _Conditional Diffusion Model Attack (CDMA)_
The whole framework of CDMA is illustrated in Figure 1 and can be split into three stages: training sample collection, model training (forward process), and AE generation (reverse process). Specifically, in **Stage I**, the attacker collects clean-adversarial example pairs, where the adversarial examples are built from local shadow models using standard white-box attack techniques. In **Stage II**, the attacker trains a conditional diffusion model with paired samples \((\mathbf{x},\mathbf{x}^{adv})\) drawn from the pre-collected dataset \(\mathcal{D}\{X,X^{adv}\}\). The conditional diffusion model is composed of a series of encoder-decoder-like neural networks (U-Net [24] is adopted in this work). Once the model is well fitted, the attacker can attack the victim model in **Stage III** in a sampling manner instead of a query-and-optimization way. Below we give details of each stage.
#### III-C1 **Training Sample Collection**
Recall that our training data are pairs of clean and adversarial samples, where the clean example serves as the conditioning image and is concatenated with its corresponding adversarial example to compose the diffusion model's input. More specifically, for a given dataset, we first use typical white-box attack methods (e.g., PGD [25]) to attack the local shadow model and obtain the corresponding adversarial examples, which are then paired with the original clean examples to formulate the training dataset \(\mathcal{D}=\{X,X^{adv}\}\) of our diffusion model.
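A hedged sketch of this collection step, using an untargeted \(L_{\infty}\) PGD against a local shadow model (the step size and iteration count are illustrative choices, not the paper's exact settings):

```
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=16/255, alpha=2/255, steps=10):
    """White-box untargeted L_inf PGD on the shadow model."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps).clamp(0, 1)
    return x_adv.detach()

def build_pairs(shadow_model, loader):
    """Stage I: collect the paired dataset D{X, X^adv}."""
    shadow_model.eval()
    return [(x, pgd_attack(shadow_model, x, y)) for x, y in loader]
```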
#### III-C2 **Conditional Diffusion Model Training**
The core of training a diffusion model is to make it predict reliable noise \(\delta\). Unlike [23], we need to consider the additional conditional variable \(x\). We use \(\delta\) to represent the real noise added to \(x^{adv}\) at each step \(t\), and \(\hat{\delta}_{\theta}\) to represent the noise predicted by the model \(f(\cdot)\) (a U-Net in this paper). Then the final objective function can be written as:
\[\mathcal{L}=E_{t,\{x,x_{0}^{adv}\},\delta}\left\|\delta-\hat{\delta}_{\theta}(x_{t}^{adv},t,x)\right\|_{p} \tag{6}\]
where \(t\sim\mathcal{U}(1,...,T)\), \(\{x,x_{0}^{adv}\}\sim\mathcal{D}\{x,x^{adv}\}\), \(x_{t}^{adv}\sim q(x_{t}^{adv}|x_{0}^{adv},x)\), \(\delta\sim\mathcal{N}(0,\text{I})\), and \(\|\cdot\|_{p}\) represents the \(L_{p}\)-norm with \(p\in\{0,1,2,\infty\}\). As demonstrated in [26], \(L_{1}\) yields significantly lower sample diversity compared to \(L_{2}\). Since we aim to generate diversified adversarial examples, we adopt \(L_{2}\), i.e., MSE, as our loss function to constrain the true noise \(\delta\) and the predicted noise \(\hat{\delta}_{\theta}\).
```
Input: \(\{x,x^{adv}\}\): the clean and adversarial image pair; \(t\sim\mathcal{U}(1,...,T)\): time steps drawn from a uniform distribution.
Output: the well-trained model \(M(\cdot)\).
1: repeat
2:   Take a gradient step on \(\mathcal{L}=E_{t,\{x,x_{0}^{adv}\},\delta}\left\|\delta-\hat{\delta}_{\theta}(x_{t}^{adv},t,x)\right\|_{p}\).
3: until converged
```
**Algorithm 1** Conditional Diffusion Model Training
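A minimal sketch of one optimization step of Alg. 1 is shown below, reusing `q_sample` and the schedule variables from the earlier sketch; the `unet(x_t, t, cond=x)` signature is a hypothetical placeholder for the conditional U-Net:

```
import torch
import torch.nn.functional as F

def train_step(unet, optimizer, x, x_adv0):
    """One step of Alg. 1: conditional noise-prediction loss, Eq. (6)."""
    b = x.size(0)
    t = torch.randint(0, T, (b,))             # t ~ U(1, ..., T)
    noise = torch.randn_like(x_adv0)          # delta ~ N(0, I)
    x_adv_t = q_sample(x_adv0, t, noise)      # forward process, Eq. (3)
    pred = unet(x_adv_t, t, cond=x)           # condition on the clean image
    loss = F.mse_loss(pred, noise)            # L2 instance of Eq. (6)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```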
#### III-C3 **Generating Adversarial Examples**
In CDMA, the attacker generates adversarial examples for benign images by sampling from the well-trained conditional diffusion model. The generation process becomes sampling from the conditional distribution \(P(x_{0}^{adv}|c)\), where the condition \(c\) is the clean image \(x\). Following the aforementioned sampling process of DDPM [23, 27] (Eq. 4), the conditional sampling can be written as follows:
\[p_{\theta}(x_{0}^{adv}|x)=\int p_{\theta}(x_{0:T}^{adv}|x)dx_{1:T}^{adv},\qquad p_{\theta}(x_{0:T}^{adv}|x)=p(x_{T}^{adv})\prod_{t=1}^{T}p_{\theta}(x_{t-1}^{adv}|x_{t}^{adv},x) \tag{7}\]
Here each transition \(p_{\theta}(x_{t-1}^{adv}|x_{t}^{adv},x)\) in the sampling process depends on the condition \(x\), i.e., the clean image. The sampling step (Eq. 4) in the conditional version is rewritten as:
\[p_{\theta}(x_{t-1}^{adv}|x_{t}^{adv},x)=\mathcal{N}(x_{t-1}^{adv};\mu_{\theta}(x_{t}^{adv},t,x),\Sigma_{\theta}(x_{t}^{adv},t,x)) \tag{8}\]
As shown in Eq. 8, CDMA generates the adversarial example \(\mathbf{x}^{adv}\) via the diffusion model's reverse Markov process, starting from \(\mathbf{x}_{T}^{adv}\sim\mathcal{N}(0,\text{I})\) conditioned on the clean image \(x\). To make the final adversarial examples meet the similarity requirements, we impose an extra \(clip(\cdot)\) constraint on the \(L_{\infty}\)-norm:
\[\mathbf{x}_{final}^{adv}=clip(clip(\mathbf{x}_{0}^{adv},x-\epsilon,x+\epsilon),0,1) \tag{9}\]
where \(\epsilon\) is the adversarial perturbation budget.
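A corresponding sketch of the conditional reverse process (Eq. 8) followed by the projection of Eq. (9) is given below; the fixed-variance DDPM update (\(\sigma_{t}^{2}=\beta_{t}\)) and the `unet` interface are our simplifying assumptions, and the schedule variables come from the earlier sketch:

```
import torch

@torch.no_grad()
def sample_adv(unet, x, eps=16/255):
    """Generate x^adv conditioned on the clean image x."""
    x_t = torch.randn_like(x)                 # start from pure Gaussian noise
    for t in reversed(range(T)):
        tt = torch.full((x.size(0),), t, dtype=torch.long)
        pred_noise = unet(x_t, tt, cond=x)
        coef = betas[t] / (1.0 - alpha_bars[t]).sqrt()
        mean = (x_t - coef * pred_noise) / alphas[t].sqrt()   # Eq. (5)
        z = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x_t = mean + betas[t].sqrt() * z
    # Eq. (9): enforce the L_inf budget and the valid pixel range
    x_t = torch.max(torch.min(x_t, x + eps), x - eps)
    return x_t.clamp(0, 1)
```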
The training and attacking algorithms of CDMA are listed in Alg. 1 and Alg. 2, respectively, which could help readers to re-implement our method step-by-step.
## IV Evaluation
We present the experimental results of CDMA. We first compare it with other black-box attack baselines in untarget and targeted scenarios. Then we measure the attack effectiveness against state-of-the-art defenses. Next, we show the results of data-independent and model-independent attacks. Finally, we show the ablation study results to explore the attack ability of CDMA under different settings.
### _Experimental Setup_
**Implementation.** We set the maximum number of queries to \(Q_{max}=1000\) to simulate a realistic attack scenario. We stop the attack once a specific input is successfully mispredicted by the victim model. We set the noise budget to \(\epsilon=8/255\) and \(\epsilon=16/255\), shortened as \(\epsilon=8\) and \(\epsilon=16\) for all attacks. To train the diffusion model in CDMA, the total number of diffusion steps is \(T=2000\). The number of training epochs is \(E=1e8\) with a batch size of \(B=256\). The noise schedule is "linear", starting from \(1e-6\) and ending at \(0.01\). All the experiments are conducted on a GPU server with 4\(\times\)NVIDIA Tesla A100 40GB GPUs, 2\(\times\)Xeon Gold 6112 CPUs and 512GB RAM.
**Datasets.** We verify the performance of CDMA on three benchmark computer vision datasets, namely CIFAR-10 [28], CIFAR-100 [28] and Tiny-ImageNet-200 [29]. In detail, CIFAR-10 contains 50,000 training images and 10,000 testing images of size \(3\times 32\times 32\) from 10 classes; CIFAR-100 has 100 classes, with the same numbers of training and testing images as CIFAR-10; Tiny-ImageNet-200 has 200 categories, containing 100,000 samples for training and 10,000 samples for validation. In our experiments, we first generate adversarial examples using white-box attacks on the training sets of the above three datasets to train the diffusion model, and then randomly sample 1,000 images from the test sets of these datasets for attacks.
Footnote 1: http://www.cs.toronto.edu/~kriz/cifar.html
Footnote 2: http://www.cs.toronto.edu/~kriz/cifar.html
Footnote 3: http://cs231n.stanford.edu/tiny-imagenet-200.zip
**Models.** We train several widely-used deep neural networks, including VGG [30], Inception [31, 32], ResNet [33], and DenseNet [34], on the aforementioned datasets until the models achieve their best classification results. Among them, we adopt VGG-13, ResNet-18 and DenseNet-121 as the shadow models for all query- and transfer-based baselines and CDMA, and VGG-19, Inception-V3, ResNet-50 and DenseNet-169 as the victim models to be attacked by all methods. The top-1 classification accuracies of these victim models are 90.48%, 84.51%, 94.07%, and 94.24% for CIFAR-10, 66.81%, 77.86%, 76.05% and 77.18% for CIFAR-100 and
57.62%, 65.89%, 65.41% and 56.04% for Tiny-ImageNet-200, respectively.
**Baselines.** We select nine state-of-the-art black-box attacks as the baselines, including score-based, decision-based, and query- and transfer-based methods. These include Rays [7], AdvFlow [8], Bayes_Attack [9], TA [10], NPAttack [11], ODS [19], GFCS [20], CG-Attack [21] and MCG-Attack [35]. We reproduce the attacks from the code released in the original papers with the default settings.
**Metrics.** We perform evaluations with the following metrics: Attack Success Rate (ASR) measures the attack effectiveness. Average and Median numbers of queries (Avg.Q and Med.Q) measure the attack efficiency.
### _Comparisons with Baseline Attacks_
Tables I, II and III present the untarget and targeted attack performance comparison with all baselines under the noise budget \(\epsilon=16\) on VGG-19, Inception-V3, ResNet-50, and
DenseNet-169, respectively. Specifically, in both untarget and targeted situations, we observe that our proposed CDMA enjoys much higher efficiency in terms of the average and median numbers of queries, as well as a much higher attack success rate than AdvFlow, Bayes_Attack and TA for all datasets. Compared to the remaining baselines, although the attack success rate of CDMA does not exceed theirs by much, and in some cases is even lower than Rays (for the untarget attack on VGG-19 with CIFAR-100) and GFCS (for the targeted attack on VGG-19 with Tiny-ImageNet-200), the Avg.Q and Med.Q of CDMA are always the lowest among all methods, especially in the targeted setting. CDMA only needs a handful of queries to obtain a near 100% attack success rate, and its Med.Q is 1. These experimental results demonstrate the superiority of our proposed method in terms of attack effectiveness and efficiency.
Table IV presents the performance comparison of all attack baselines on VGG-19, Inception-V3, ResNet-50 and DenseNet-169, respectively, where the noise budget is set to \(\epsilon=8\). Although the attack becomes more challenging with a smaller noise budget, the proposed CDMA still achieves the best attack performance in most situations compared with all the attack baselines. In particular, the average and median query counts of CDMA remain the lowest in all cases, demonstrating the high efficiency of the proposed method.
Figure 2 shows the attack success rate versus the number of queries for all baseline methods on CIFAR-10, CIFAR-100, and Tiny-ImageNet-200 in untarget and targeted attack settings. Again, we can see that CDMA achieves the highest attack success rate in most situations and the best query efficiency compared with other black-box attack baselines; that is, CDMA achieves the highest attack success rate for a given query budget. Note that CDMA obtains a rapidly increasing attack success rate within the first few queries, especially under targeted attack settings, while other attacks can only obtain a satisfactory attack success rate after hundreds of queries.
### _Adversarial Robustness to Defense Strategies_
To further evaluate the robustness of the generated adversarial examples, we adopt several defense methods to purify or pre-process the malicious examples, and then measure their attack effectiveness. The defense methods under consideration include JPEG compression (JPEG) [36], NRP [37], pixel deflection (PD) [38], GuidedDiffusionPur (GDP) [16], RP-Regularizer (RP) [39], BitDepthReduce (BDR) [40] and MedianSmoothing2D (MS) [40]. We first synthesize adversarial examples on
ResNet-50 for CIFAR-10, and then measure their attack success rate against these defense strategies. The results are shown in Table V. Among all the black-box attack methods, our proposed method has the highest attack success rate in most cases, which implies that the adversarial examples generated by CDMA are more robust to current defense methods than those of other attacks.
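As an illustration of this protocol, a hedged sketch of measuring the untarget attack success rate after one such defense (JPEG compression; the quality factor is an illustrative choice):

```
import io
import torch
from PIL import Image
from torchvision.transforms.functional import to_pil_image, to_tensor

def jpeg_defense(x, quality=75):
    """Re-encode each image as JPEG before feeding the victim model."""
    out = []
    for img in x:                              # x: [B, 3, H, W] in [0, 1]
        buf = io.BytesIO()
        to_pil_image(img.cpu()).save(buf, format="JPEG", quality=quality)
        out.append(to_tensor(Image.open(buf)))
    return torch.stack(out).to(x.device)

@torch.no_grad()
def asr_under_defense(victim, x_adv, y):
    pred = victim(jpeg_defense(x_adv)).argmax(dim=1)
    return (pred != y).float().mean().item()   # untarget success rate
```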
### _Data-independent and Model-independent Attacks_
#### IV-D1 **Data-independent Attack**
We carry out attacks across different datasets to verify the generalization of our CDMA. The datasets include STL-10 [41], Caltech-256 [42], Places-365 [43] and CelebA [44]. These datasets are not used for training the diffusion model; we only sample images from their test sets to test the effectiveness of CDMA.
In detail, we train a diffusion model on a specific dataset (CIFAR-10 or Tiny-ImageNet-200) and then apply the attack on another dataset (STL-10, Caltech-256, Places-365 or CelebA) to verify whether it can transform clean data into corresponding adversarial examples. The results are listed in Table VI, which illustrates that even on datasets not involved in the diffusion model training, CDMA can still achieve a 90+% (in some cases, even 100%) attack success rate. This phenomenon strongly supports our proposition that adversarial examples can be transformed from normal exam
Fig. 2: Queries vs. ASR on CIFAR-100 and Tiny-ImageNet for untarget and targeted attack settings. The maximal query counts are limited to 1000 and the noise budget’s \(L_{\infty}\) norm is set to \(\epsilon=16\).
ples. Furthermore, the evasion attack success rate under various query counts is illustrated in Figure 3. As we can see, CDMA achieves a good attack success rate even with relatively few queries. Taking Places365 as an example, it obtains a 98% \(\sim\) 99.6% attack success rate when the noise budget is set to \(\epsilon=8\) and 99.7% \(\sim\) 100% when \(\epsilon=16\).
#### IV-D2 **Model-independent Attack**
Existing black-box attack methods can only generate adversarial samples for a specific victim model. Our CDMA is not restricted by this requirement. It can synthesize adversarial examples in a conditional sampling manner, which we call the model-independent attack. Specifically, in this situation, we do not know what the victim model is; we simply perform the conditional sampling once for the given dataset and then verify whether the sampled examples are adversarial. Figure 4 shows that the average success rate of such an attack on CIFAR-10 is higher than 80% over different victim models. It also achieves a 60+% average attack success rate on CIFAR-100 and Tiny-ImageNet. This demonstrates that even in model-independent attack scenarios, CDMA can still generate adversarial examples with good attack effects on different models, and further illustrates the high adaptability of CDMA in model-independent black-box scenarios.
### _Transfer Attack Effectiveness_
Recall that the transferability of adversarial examples is crucial for carrying out transfer attacks, especially against black-box models deployed in the real world. Therefore, following previous works [8, 45], we examine the transferability of the AEs generated by each attack method in Table VII. We randomly sample 1,000 images from the CIFAR-10, CIFAR-100 and Tiny-ImageNet datasets and generate AEs against the ResNet-50 model. Then we transfer these AEs to attack VGG-19, Inception-V3 and DenseNet-169. As seen, the AEs generated by CDMA transfer to other models more easily than those of other attacks. This observation precisely matches our intuition about the mechanics of CDMA. More specifically, in CDMA the model learns a transformation between a benign image and its adversarial counterpart. Compared to other query-and-optimization attacks, which calculate specific perturbations for each sample, CDMA learns to use this transformation to build AEs. Thus, CDMA tends to generate AEs with high transferability.
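The transfer evaluation itself reduces to replaying the surrogate-crafted AEs on each unseen victim, as in the following sketch (the victim model handles are placeholders):

```
import torch

@torch.no_grad()
def transfer_rate(victim, x_adv, y):
    """Fraction of surrogate-crafted AEs that also fool `victim`."""
    return (victim(x_adv).argmax(dim=1) != y).float().mean().item()

# e.g., rates = {name: transfer_rate(m, x_adv, y)
#                for name, m in victims.items()}  # VGG-19, Inception-V3, ...
```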
Fig. 4: Model-independent attack.
Fig. 3: ASR vs. Queries under data-independent settings.
### _Ablation Study_
#### IV-F1 **Scheduling & Steps**
Although the typical number of training and sampling steps of DDPM is \(T=1000\), previous work [26] shows that the number of steps for the diffusion model can be flexibly adjusted. Here, we aim to explore the effect of the number of sampling steps \(T\) on the final attack performance without other acceleration schemes [46, 47]. The victim model is ResNet-50, the noise budget is set to \(\epsilon=8\) and \(\epsilon=16\), and the maximal number of queries is set to \(Q=10\).
As shown in Figure 5, the obtained attack success rate fluctuates regardless of whether cosine or linear sampling is used. Compared with linear sampling, cosine sampling achieves a higher attack success rate with fewer sampling steps. Especially when the number of sampling steps is small, the attack success rate of linear sampling is relatively low. For example, when the number of sampling steps is \(t=10\), the attack success rate of linear sampling is around 40%-60%, while that of cosine sampling is 80%-90%. Note that for each sampling schedule, the final attack success rate is roughly the same as the number of sampling steps increases. To obtain more effective and efficient attack results, we set the sampling strategy in the attack process as follows: the sampling schedule is cosine and the number of steps is \(t=50\). By doing this, the attack efficiency is significantly improved owing to the smaller number of sampling steps.
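For reference, minimal sketches of the two schedules are given below; the cosine form follows the common improved-DDPM recipe and is an assumption about the exact variant used here:

```
import math
import torch

def linear_betas(T, start=1e-6, end=0.01):
    """Linear schedule matching the training setting in Sec. IV."""
    return torch.linspace(start, end, T)

def cosine_betas(T, s=0.008):
    """Cosine schedule in the improved-DDPM style (assumed variant)."""
    steps = torch.arange(T + 1, dtype=torch.float64)
    f = torch.cos(((steps / T) + s) / (1 + s) * math.pi / 2) ** 2
    alpha_bar = f / f[0]
    betas = 1.0 - alpha_bar[1:] / alpha_bar[:-1]
    return betas.clamp(max=0.999).float()
```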
#### IV-F2 **Comparisons with Generative Attacks**
Existing generative attacks usually use a GAN to generate adversarial perturbations. We choose the most representative of these methods, AdvGAN, to compare with our CDMA. As illustrated by the experimental results in Figure 6, although the attack success rate of our method is lower than AdvGAN's in some cases, it keeps increasing with the number of queries, whereas AdvGAN's does not. This verifies our assertion that CDMA can generate diverse adversarial examples, even for the same clean example \(x\).
## V Conclusions
In this work, we find that adversarial examples are a particular form of benign examples; that is, the two types of samples come from two distinct but adjacent distributions that can be transformed into each other by a suitable converter. Based on this observation, we propose a novel hard-label black-box attack, CDMA, which builds a converter that transforms clean data into its adversarial counterpart. Specifically, we leverage a diffusion model to formulate the data converter and synthesize adversarial examples by conditioning on clean images, which significantly improves query efficiency. Extensive experiments demonstrate that CDMA achieves a much higher attack success rate within 1,000 queries and needs fewer queries to reach a given attack result, even in the targeted attack setting. Besides, most adversarial examples generated by CDMA escape mainstream defense strategies and maintain high robustness. Furthermore, CDMA generates adversarial examples that transfer well to different victim models and datasets.
**Acknowledgements** This work is supported in part by Yunnan Province Education Department Foundation under Grant No.2022j0008, in part by the National Natural Science Foundation of China under Grant 62162067 and 62101480, Research and Application of Object Detection based on Artificial Intelligence, in part by the Yunnan Province expert workstations under Grant 202205AF150145.
|
2307.10521 | Boundary integrated neural networks (BINNs) for acoustic radiation and
scattering | This paper presents a novel approach called the boundary integrated neural
networks (BINNs) for analyzing acoustic radiation and scattering. The method
introduces fundamental solutions of the time-harmonic wave equation to encode
the boundary integral equations (BIEs) within the neural networks, replacing
the conventional use of the governing equation in physics-informed neural
networks (PINNs). This approach offers several advantages. Firstly, the input
data for the neural networks in the BINNs only require the coordinates of
"boundary" collocation points, making it highly suitable for analyzing acoustic
fields in unbounded domains. Secondly, the loss function of the BINNs is not a
composite form, and has a fast convergence. Thirdly, the BINNs achieve
comparable precision to the PINNs using fewer collocation points and hidden
layers/neurons. Finally, the semi-analytic characteristic of the BIEs
contributes to the higher precision of the BINNs. Numerical examples are
presented to demonstrate the performance of the proposed method. | Wenzhen Qu, Yan Gu, Shengdong Zhao, Fajie wang | 2023-07-20T01:45:26Z | http://arxiv.org/abs/2307.10521v1 | # Boundary integrated neural networks (BINNs) for acoustic radiation and scattering
###### Abstract
This paper presents a novel approach called the boundary integrated neural networks (BINNs) for analyzing acoustic radiation and scattering. The method introduces fundamental solutions of the time-harmonic wave equation to encode the boundary integral equations (BIEs) within the neural networks, replacing the conventional use of the governing equation in physics-informed neural networks (PINNs). This approach offers several advantages. Firstly, the input data for the neural networks in the BINNs only require the coordinates of "boundary" collocation points, making it highly suitable for analyzing acoustic fields in unbounded domains. Secondly, the loss function of the BINNs is not a composite form, and has a fast convergence. Thirdly, the BINNs achieve comparable precision to the PINNs using fewer collocation points and hidden layers/neurons. Finally, the semi-analytic characteristic of the BIEs contributes to the higher precision of the BINNs. Numerical examples are presented to demonstrate the performance of the proposed method.
Footnote †: Corresponding author, Email: guyan1913@163.com (Y. Gu)
Acoustic; Semi-analytical; Physics-informed neural networks (PINNs); Boundary integral equations (BIEs); Boundary integral neural networks (BINNs); Unbounded domain.
## 1 Introduction
The boundary element method (BEM) has gained recognition as a formidable technique for numerically analyzing acoustic fields, owing to its semi-analytical nature and boundary-only discretization [1, 2]. By incorporating fundamental solutions into the BEM, the time-harmonic wave equation for acoustic problems, along with boundary conditions and the Sommerfeld radiation condition at infinity, can be transformed into boundary integral equations (BIEs) [3]. Consequently, the BEM offers several advantages, including the reduction of problem dimensionality by one and the direct solution of unbounded domain problems without the need for special treatments.
Over the past decade, significant attention has been directed towards machine learning, owing to the remarkable advancements in computing resources and the abundance of available data [4]. Among the prominent tools in machine learning, deep neural networks (DNNs) have emerged as outstanding approximations of functions, demonstrating immense potential for numerical simulations of partial differential equation (PDE) problems [5, 6]. Up to now, numerous DNN-based approaches have been devised to tackle PDEs, including physics-informed neural networks (PINNs) [7-9], the deep Galerkin method (DGM) [10, 11], and the deep Ritz method (DRM) [12, 13]. The aforementioned DNN-based methods directly approximate the solution of problems using a neural network. Subsequently, a loss function or composite form is constructed, incorporating information from the residuals of the partial differential equation (PDE) with boundary/initial conditions or the energy functional form.
There have been remarkable contributions in acoustic numerical analysis through the utilization of DNN-based methods [14, 15]. The DNNs are typically trained and applied in finite domains, which poses challenges when directly using them to solve unbounded domain problems. Very recently, Lin et al. [16] made the first attempt to integrate neural networks with indirect boundary integral equations (BIEs) for solving partial differential equation (PDE) problems with Dirichlet boundary conditions. Following this, Zhang et al. [17] utilized neural networks to approximate
solutions of direct BIEs using non-uniform rational B-splines (NURBS) parameterization of the boundary for potential problems. The aforementioned approaches are theoretically well-suited for addressing problems in unbounded domains; however, they have not been empirically validated on such problems in the cited references.
In this paper, we propose a novel approach called the boundary integrated neural networks (BINNs) to analyze acoustic problems in both bounded and unbounded domains. The method substitutes the solutions approximated by neural networks, trained solely on boundary collocation points, into the direct acoustic boundary integral equations (BIEs) discretized with quadratic elements. The loss function is then constructed from the BIE residuals and minimized at these collocation points. Three numerical examples with various types of boundary conditions are provided to validate the proposed method. The numerical results obtained using the developed approach are compared with those obtained using the PINNs as well as the exact solutions.
## 2 Mathematical formulation for acoustic problem
The time-harmonic wave equation [18], commonly referred to as the Helmholtz equation, can be expressed in 2D domain \(\Omega\) as follows:
\[\nabla^{2}p+k^{2}p=0,\ \ p\in\Omega \tag{1}\]
where \(p\) represents the complex acoustic pressure, while \(k\) denotes the wave number. The wave number, defined as \(\omega/c\), corresponds to the ratio of the angular frequency \(\omega\) to the speed of the acoustic wave \(c\) in the medium \(\Omega\). The equation (1) is subject to Dirichlet and Neumann boundary conditions (BCs) as
\[p(\mathbf{x})=\overline{p}(\mathbf{x}),\ \ \mathbf{x}\in\Gamma_{{}_{D}}, \tag{2}\]
\[q(\mathbf{x})=\frac{\partial\overline{p}(\mathbf{x})}{\partial\mathbf{n}(\mathbf{x})}=\mathrm{ i}\rho\omega\overline{\mathbf{v}}(\mathbf{x}),\ \ \mathbf{x}\in\Gamma_{{}_{N}}, \tag{3}\]
where \(\mathbf{n}(\mathbf{x})\) represents the outward unit normal vector to the boundary \(\Gamma\) at point \(\mathbf{x}\), \(\rho\)
denotes the density of the medium, \(\mathrm{i}\) denotes the imaginary unit, \(\overline{v}(\mathbf{x})\) is the normal velocity, and the upper bars on the pressure and normal velocity indicate known functions. Furthermore, as the distance \(r\) from the source tends to infinity, the pressure field must satisfy the Sommerfeld radiation condition as
\[\lim_{r\rightarrow\infty}\sqrt{r}\bigg{(}\frac{\partial p(r)}{\partial r}-\mathrm{ i}kp(r)\bigg{)}=0 \tag{4}\]
## 3 Boundary integrated neural networks (BINNs)
### Boundary integral equations (BIEs)
By incorporating the fundamental solutions, the time-harmonic wave equation for acoustic pressure can be transformed into an integral form [3], represented as
\[p(\mathbf{x})+\int_{\Gamma}F(\mathbf{x},\mathbf{y})p(\mathbf{y})d\Gamma(\mathbf{y})=\int_{\Gamma}G (\mathbf{x},\mathbf{y})q(\mathbf{y})d\Gamma(\mathbf{y}),\ \ \mathbf{x}\in\Omega \tag{5}\]
where \(\mathbf{x}\) and \(\mathbf{y}\) represent the source and field points, respectively, while \(G(\mathbf{x},\mathbf{y})\) and \(F(\mathbf{x},\mathbf{y})\) respectively denote the fundamental solutions of the time-harmonic wave equation and its corresponding normal derivatives. \(G(\mathbf{x},\mathbf{y})\) and \(F(\mathbf{x},\mathbf{y})\) for 2D problems are defined as
\[G(\mathbf{x},\mathbf{y})=\frac{\mathrm{i}}{4}\,H_{0}^{(1)}\,\big{(}k\mathbf{r}(\mathbf{x},\bm {y})\big{)},\,\text{and}\ F(\mathbf{x},\mathbf{y})=\frac{\mathbf{\partial}G(\mathbf{x},\mathbf{y} )}{\mathbf{\partial}n(\mathbf{y})} \tag{6}\]
where \(H_{0}^{(1)}\) represents the first kind Hankel function of order zero, \(r\) is the distance between points \(\mathbf{x}\) and \(\mathbf{y}\). Taking the limit as \(\mathbf{x}\) in Eq. (5) approaches the boundary \(\Gamma\), we obtain
\[C(\mathbf{x})p(\mathbf{x})+\int_{\Gamma}^{\text{CPV}}F(\mathbf{x},\mathbf{y})p(\mathbf{y})d\Gamma (\mathbf{y})=\int_{\Gamma}G(\mathbf{x},\mathbf{y})q(\mathbf{y})d\Gamma(\mathbf{y}),\ \ \mathbf{x}\in\Gamma \tag{7}\]
in which \(C(\mathbf{x})=0.5\) as the boundary near point \(\mathbf{x}\) is smooth, and \(\int_{\Gamma}^{\text{CPV}}\) denotes the integral evaluated in the sense of Cauchy principal value (CPV). In this study, regular integrals are computed using the standard Gaussian quadrature with twenty Gaussian points, while the singular integrals are evaluated using a direct method developed by Guiggiani and Casalini [19] for CPV
integrals. It is widely acknowledged that the handling techniques for singular integrals in boundary integral equations (BIEs) have reached a high level of maturity. However, the detailed methods for handling singular integrals are beyond the scope of this work. Interested readers are referred to relevant references for further information.
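As a concrete illustration of Eq. (6), the kernels \(G\) and \(F\) can be evaluated with SciPy's Hankel functions. The paper's implementation is in MATLAB; the following is a minimal Python sketch, using the identity \(dH_{0}^{(1)}(z)/dz=-H_{1}^{(1)}(z)\), with illustrative variable names.

```python
# Sketch of the 2D Helmholtz fundamental solution G(x, y) of Eq. (6)
# and its normal derivative F(x, y) = dG/dn(y).
import numpy as np
from scipy.special import hankel1

def G(x, y, k):
    r = np.linalg.norm(y - x)
    return 0.25j * hankel1(0, k * r)

def F(x, y, n_y, k):
    # n_y: outward unit normal at the field point y
    d = y - x
    r = np.linalg.norm(d)
    dr_dn = np.dot(d, n_y) / r          # dr/dn(y)
    return -0.25j * k * hankel1(1, k * r) * dr_dn

x = np.array([0.0, 0.0]); y = np.array([1.0, 0.5])
print(G(x, y, k=2.0), F(x, y, np.array([0.0, 1.0]), k=2.0))
```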
### Discretization of BIEs
We discretize the BIEs using discontinuous quadratic elements [20]. The shape functions, denoted as \(N_{i}(\xi)\;(i=1,2,3)\), of the elements take the following forms:
\[N_{1}(\xi)=\frac{\xi(\xi-1)}{2},\quad N_{2}(\xi)=(1-\xi)(1+\xi),\quad\text{ and }N_{3}(\xi)=\frac{\xi(\xi+1)}{2} \tag{8}\]
in which \(\xi\in[-1,1]\) indicates the dimensionless coordinate. Then, the geometry of each quadratic element can be described as
\[\mathbf{y}=N_{1}(\xi)\mathbf{y}_{1}+N_{2}(\xi)\mathbf{y}_{2}+N_{3}(\xi)\mathbf{y}_{3} \tag{9}\]
where \(\mathbf{y}_{1}(\xi=-1)\), \(\mathbf{y}_{2}(\xi=0)\), and \(\mathbf{y}_{3}(\xi=1)\) denote the right, middle, and left points of the mentioned boundary element as shown in Fig. 1, respectively. The pressure and its normal derivative on the boundary element are approximated by quantities \(p_{i},q_{i}(i=1,2,3)\) on points \(\mathbf{y}_{1}^{\prime}(\xi=-\alpha)\), \(\mathbf{y}_{2}^{\prime}(\xi=0)\), and \(\mathbf{y}_{3}^{\prime}(\xi=\alpha)\) in Fig. 1, expressed as follows:
\[p(\mathbf{y}) =N_{1}\bigg{(}\frac{\xi}{\alpha}\bigg{)}p_{1}+N_{2}\bigg{(}\frac {\xi}{\alpha}\bigg{)}p_{2}+N_{3}\bigg{(}\frac{\xi}{\alpha}\bigg{)}p_{3} \tag{10}\] \[q(\mathbf{y}) =N_{1}\bigg{(}\frac{\xi}{\alpha}\bigg{)}q_{1}+N_{2}\bigg{(}\frac {\xi}{\alpha}\bigg{)}q_{2}+N_{3}\bigg{(}\frac{\xi}{\alpha}\bigg{)}q_{3} \tag{11}\]
where \(\alpha\in(0,1)\). In the numerical calculations of this work, the value of \(\alpha\) is set to \(0.8\), and its influence on the numerical results is negligible.
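A small sketch of the shape functions of Eq. (8) and the element interpolation of Eq. (10), with the collocation offset \(\alpha=0.8\) used in the paper; the variable names are illustrative, not the authors' code.

```python
# Discontinuous quadratic shape functions and nodal interpolation.
import numpy as np

def N(xi):
    return np.array([xi * (xi - 1) / 2,
                     (1 - xi) * (1 + xi),
                     xi * (xi + 1) / 2])

def interp_p(xi, p_nodes, alpha=0.8):
    # p_nodes = (p1, p2, p3) at xi = -alpha, 0, +alpha
    return N(xi / alpha) @ np.asarray(p_nodes)

# Sanity check: the interpolation reproduces the nodal values.
print(interp_p(-0.8, (1.0, 2.0, 3.0)))   # -> 1.0
```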
Based on the aforementioned discontinuous quadratic element, the discretized form of the BIE (7) is given as
\[C(\mathbf{x}^{m})p(\mathbf{x}^{m})+\sum_{i=1}^{N}\sum_{j=1}^{3}p_{j}^{i}\int_{-1}^{1}F( \mathbf{x}^{m},\mathbf{y}_{i}(\xi))N_{j}(\xi/\alpha)J_{i}(\xi)d\xi=\sum_{i=1}^{N }\sum_{j=1}^{3}q_{j}^{i}\int_{-1}^{1}G(\mathbf{x}^{m},\mathbf{y}_{i}(\xi))N_{j}(\xi/ \alpha)J_{i}(\xi)d\xi \tag{12}\]
where \(N\) represents the number of boundary elements, \(\mathbf{x}^{m}(m=1,2,...,3N)\) are boundary collocation points and selected to be the same set as points \(\mathbf{y}_{j}^{\prime}\) (see Fig. 1) on these elements, \(p_{j}^{i}\) and \(q_{j}^{i}\) respectively denote the pressure and its normal derivative at the \(j\)-th collocation point of the \(i\)-th element, and \(J_{i}(\xi)\) represents the Jacobian of transformation from the global coordinate \(\mathbf{y}\) to the dimensionless coordinate \(\xi\) for integrals at the \(i\)-th element.
After discretizing the BIEs through the process mentioned earlier, we can define the following two functions with Eq. (12) to facilitate the construction of the loss function in subsequent steps
\[LE(\mathbf{x}^{m},\mathbf{p})=C(\mathbf{x}^{m})p(\mathbf{x}^{m})+\sum_{i=1}^{N}\sum_{j=1}^{3}p _{j}^{i}\int_{-1}^{1}F(\mathbf{x}^{m},\mathbf{y}_{i}(\xi))N_{j}(\xi/\alpha)J_{i}( \xi)d\xi \tag{13}\]

\[RE(\mathbf{x}^{m},\mathbf{q})=\sum_{i=1}^{N}\sum_{j=1}^{3}q_{j}^{i}\int_{-1}^{1}G(\mathbf{ x}^{m},\mathbf{y}_{i}(\xi))N_{j}(\xi/\alpha)J_{i}(\xi)d\xi \tag{14}\]
where \(\mathbf{p}=\left\{p_{j}^{i}\right\}_{j=1,2,3}^{i=1,...,N}\) and \(\mathbf{q}=\left\{q_{j}^{i}\right\}_{j=1,2,3}^{i=1,...,N}\).
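The assembly of \(LE\) in Eq. (13) can be sketched as below with standard 20-point Gauss-Legendre quadrature. The element geometry map, outward normal, and Jacobian are assumed to be supplied as callables, and the CPV treatment of [19] for the element containing \(\mathbf{x}^{m}\) is omitted; this covers the regular-integral part only and is not the authors' MATLAB code.

```python
# Sketch of assembling LE(x^m, p) of Eq. (13) for one collocation point.
import numpy as np
from scipy.special import hankel1

XI, W = np.polynomial.legendre.leggauss(20)   # 20 Gauss points, weights

def shape(xi):
    return np.array([xi*(xi-1)/2, (1-xi)*(1+xi), xi*(xi+1)/2])

def F_kernel(x, y, n_y, k):
    d = y - x; r = np.linalg.norm(d)
    return -0.25j * k * hankel1(1, k*r) * np.dot(d, n_y) / r

def LE(x_m, p_m, elements, p_nodes, k, alpha=0.8):
    """elements: list of (y_of_xi, n_of_xi, J_of_xi) callables;
    p_nodes[i]: length-3 array of nodal pressures on element i."""
    total = 0.5 * p_m                  # C(x) = 0.5 on a smooth boundary
    for (y_of, n_of, J_of), p_el in zip(elements, p_nodes):
        for xi, w in zip(XI, W):
            total += (w * F_kernel(x_m, y_of(xi), n_of(xi), k)
                      * (shape(xi/alpha) @ np.asarray(p_el)) * J_of(xi))
    return total
```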
### Neural networks and loss function of the BINNs
Figure 1: Discontinuous quadratic element
We present the construction of the BINNs by seamlessly integrating neural networks and the BIEs in this subsection. As illustrated in Fig. 2, we utilize a fully connected neural architecture comprising the input layer, \(L\) hidden layers, and the output layer. The number of neurons in the \(l\)-th hidden layer is \(n_{l}\). Based on the neural network approximation, the real and imaginary parts of the trial solution for the pressure at a collocation point \(\mathbf{x}\) can be expressed as
\[\mathrm{Re}\left\{p(\mathbf{x},\mathbf{w},\mathbf{b})\right\}=h_{1}\left(\mathbf{\hat{\lambda}} _{L}\left(\mathbf{\hat{\lambda}}_{L-1}\left(...\left(\mathbf{\hat{\lambda}}_{1}(\mathbf{x} )\right)\right)\right)\right) \tag{15}\]
\[\mathrm{Im}\left\{p(\mathbf{x},\mathbf{w},\mathbf{b})\right\}=h_{2}\left(\mathbf{\hat{\lambda}} _{L}\left(\mathbf{\hat{\lambda}}_{L-1}\left(...\left(\mathbf{\hat{\lambda}}_{1}(\mathbf{x} )\right)\right)\right)\right) \tag{16}\]
where \(h_{k}\left(k=1,2\right)\) and \(\mathbf{\hat{\lambda}}_{l}\left(l=1,2,...,L\right)\) are linear and nonlinear mappings, expressed as follows
\[h_{k}\left(g\right)=\mathbf{w}_{k}^{\prime}*g+\mathbf{b}_{k}^{\prime} \tag{17}\]
\[\mathbf{\hat{\lambda}}_{l}\left(g\right)=\sigma\left(\mathbf{w}_{l}*g+\mathbf{b}_{l}\right) \tag{18}\]
with weights \(\mathbf{w}_{k}\in\mathbb{R}^{n_{L}}\), \(\mathbf{w}_{l}\in\mathbb{R}^{n_{l}\times n_{l-1}}\left(n_{0}=2\right)\), biases \(b_{k}\in\mathbb{R},\ \mathbf{b}_{l}\in\mathbb{R}^{n_{l}}\), and the activation function \(\sigma\).
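The paper implements the networks in MATLAB with "dlgradient" and "fmincon"; the following PyTorch sketch of Eqs. (15)-(18) is an assumed equivalent, mapping the 2D boundary coordinate to \((\mathrm{Re}\,p,\mathrm{Im}\,p)\) with the Swish activation \(\sigma(z)=z/(1+e^{-z})\) used in the examples.

```python
# Equivalent PyTorch sketch of the BINN trial-solution network.
import torch
import torch.nn as nn

def make_binn(hidden_layers=1, width=20):
    layers, n_in = [], 2
    for _ in range(hidden_layers):
        layers += [nn.Linear(n_in, width), nn.SiLU()]  # SiLU == Swish
        n_in = width
    layers += [nn.Linear(n_in, 2)]     # two linear heads h1, h2
    return nn.Sequential(*layers)

net = make_binn()
x = torch.rand(270, 2)                 # boundary collocation points
re_p, im_p = net(x).unbind(dim=1)      # Re{p} and Im{p}, Eqs. (15)-(16)
```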
Figure 2: The framework of the boundary integrated neural networks (BINNs).
Here, Table 1 lists some commonly used activation functions. To obtain the normal derivatives of acoustic pressures approximated by the above neural networks, we employ the "dlgradient", which is an automatic differentiation function in the Deep Learning Toolbox of MATLAB.
We construct two different forms of loss functions and will explore their performance in next sections. Firstly, we incorporate the known pressures \(p\) and/or normal derivatives \(q\) directly into the BIEs, creating the following loss function referred to as \(Loss\)
\[Loss=\frac{1}{3N}\sum_{m=1}^{3N}\left(LE(\mathbf{x}^{m},\mathbf{p})-RE(\mathbf{x}^{m},\bm {q})\right)^{2} \tag{19}\]
where the unknown \(p\) and/or \(q\) on the boundary are approximated by the neural networks. For the second form of the loss function, we approximate both the pressures and normal derivatives in the BIEs and boundary constraints using the neural networks. The loss function named as Loss\({}_{BC}\) is then constructed as
\[Loss_{BC}=\frac{1}{3N}\sum_{m=1}^{3N}\left(LE(\mathbf{x}^{m},\mathbf{p})-RE(\mathbf{x}^{m}, \mathbf{q})\right)^{2}+\frac{1}{N_{D}}\sum_{n=1}^{N_{D}}\left(\mathbf{p}_{D}-\overline {\mathbf{p}}_{D}\right)^{2}+\frac{1}{N_{N}}\sum_{n=1}^{N_{N}}\left(\mathbf{q}_{N}- \overline{\mathbf{q}}_{N}\right)^{2} \tag{20}\]
where the subscripts \(D\) and \(N\) respectively denote the Dirichlet and the Neumann BC, \(N_{D}\) and \(N_{N}\) indicate the numbers of Dirichlet and Neumann boundary collocation points respectively, and the superscript bar represents the known quantities.
### Optimization of parameters and solution of pressure at interior point
In the previous subsections, we have established the architecture of the neural networks and
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Arctan & Sigmoid & Swish & Softplus & Tanh \\ \hline \(\arctan(z)\) & \(1/(1+e^{-z})\) & \(z/(1+e^{-z})\) & \(\ln(1+e^{z})\) & \(\tanh(z)\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Some commonly used activation functions.
defined the loss function for the BINNs. The next step is to optimize the weights and biases of each neuron by minimizing the corresponding loss function, either Eq. (19) or Eq. (20). To accomplish this optimization process, we utilize the powerful and widely used "fmincon" function in MATLAB. The "fmincon" is specifically designed to minimize constrained nonlinear multivariable functions.
By applying this optimization approach, we are able to obtain accurate numerical results for the unknown pressures and normal derivatives along the boundary. Once the pressures \(p\) and normal derivatives \(q\) at the boundary collocation points are determined, we can easily calculate the numerical solution for the pressure at any interior point using Eq. (5).
## 4 Numerical examples
To evaluate the performance of the BINNs, several benchmark examples involving bounded and unbounded domains under various BCs are provided. The accuracy of the present approach is thoroughly investigated by examining the influence of parameters such as the hidden layer number, neuron number in each layer, and the choice of activation function. The numerical results calculated by the BINNs are compared against those obtained using the traditional PINNs as well as the theoretical solutions.
All the MATLAB codes used in this study are executed on a computer equipped with an Intel Core i9-11900F 2.5 GHz CPU and 64 GB of memory. The precision of the numerical results is assessed using relative error, which is defined as
\[\text{Relative error (RE)}=\sqrt{\sum_{i=1}^{M}\left(\tilde{p}_{i}-p_{i} \right)^{2}}\ \Bigg{/}\sqrt{\sum_{i=1}^{M}{p_{i}}^{2}} \tag{21}\]
where \(M\) denotes the number of calculated points, and \(\tilde{p}_{i}\) and \(p_{i}\) are the numerical and analytical solutions at the \(i\)-th calculated point, respectively.
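Eq. (21) translates directly into code; a minimal sketch, applied separately to the real and imaginary parts as in the tables below:

```python
# Relative error of Eq. (21).
import numpy as np

def relative_error(p_num, p_exact):
    p_num, p_exact = np.asarray(p_num), np.asarray(p_exact)
    return np.linalg.norm(p_num - p_exact) / np.linalg.norm(p_exact)
```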
### Interior acoustic field
As the first example, we consider the distribution of acoustic pressure in a rectangular domain with a length of \(3\,\mathrm{m}\) and a height of \(1.5\,\mathrm{m}\), as illustrated in Fig. 3. The center of the domain is at \((1.5,0.75)\). The boundary is subject to two different cases of BCs.
Case 1: Dirichlet BC
The pressure on the boundary is specified as
\[p(x_{1}^{\prime},x_{2}^{\prime})=\cos(kx_{1}^{\prime})+\mathrm{i}\sin(kx_{2}^{ \prime}),\ \ (x_{1}^{\prime},x_{2}^{\prime})\in\Gamma \tag{22}\]
Obviously, the analytical solution for this case is \(p(x_{1},x_{2})=\cos(kx_{1})+\mathrm{i}\,\sin(kx_{2}),\ \ (x_{1},x_{2})\in\Omega\).
Initially, we assess the performance of the BINNs using two distinct forms of loss functions. Four distinct neural architectures are configured as follows: a) a single hidden layer consisting of 10 neurons; b) a single hidden layer consisting of 20 neurons; c) two hidden layers, each with 10 neurons; d) two hidden layers, each with 20 neurons. The training process for optimization stops when the iteration count reaches 10000. A total of 270 boundary collocation points, corresponding to 90 boundary elements, are utilized. The activation function selected for neural networks is \(\sigma(z)=z\,/\,(1+e^{-z})\). The wave number is set to \(k=2\,\ \mathrm{m}^{-1}\).
Figure 3: The dimension of the rectangle domain and the BCs of case 2.
Using the BINNs with \(\mathit{Loss}\) and \(\mathit{Loss}_{\mathit{BC}}\), Table 2 presents the relative errors of the real and imaginary components of the pressure along the evaluated line \(x_{2}=0.75\) m with 30 equally spaced points for calculation purposes. The numerical results obtained through the use of \(\mathit{Loss}\) showcase superior accuracy when compared to the results obtained using \(\mathit{Loss}_{\mathit{BC}}\). Remarkably, even using the networks with a single hidden layer consisting of 10 neurons, the present method with \(\mathit{Loss}\) achieves high accuracy in the numerical results. Additionally, there is a slight improvement when employing more hidden layers or increasing the number of neurons in each layer. In contrast, the BINNs with \(\mathit{Loss}_{\mathit{BC}}\) requires a greater number of hidden layers and neurons to attain sufficiently accurate results.
Fig. 4 illustrates the convergence process of two designated loss functions \(\mathit{Loss}\) and \(\mathit{Loss}_{\mathit{BC}}\) over iterations ranging from 1 to 10000, with values recorded at every 100 iterations. It is apparent that \(\mathit{Loss}\) exhibits a faster convergence rate compared to \(\mathit{Loss}_{\mathit{BC}}\). Therefore, the BINNs with \(\mathit{Loss}\) has a better performance in comparison to that with \(\mathit{Loss}_{\mathit{BC}}\), as indicated in Table 2. To expedite the convergence process of the loss function \(\mathit{Loss}_{\mathit{BC}}\), the incorporation of additional learning techniques is necessary to balance its different loss terms. Consequently, \(\mathit{Loss}\) stands as the superior choice for an efficient loss function in the context of BINNs when compared to \(\mathit{Loss}_{\mathit{BC}}\). Henceforth, the BINNs will employ \(\mathit{Loss}\) in all subsequent computational processes unless otherwise specified.
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \multirow{2}{*}{Error} & \multicolumn{4}{c}{\(\mathit{Loss}\)} & \multicolumn{4}{c}{\(\mathit{Loss}_{\mathit{BC}}\)} \\ \cline{2-9} & a & b & c & d & a & b & c & d \\ \hline Re\{\(p\)\} & 2.38E-06 & 4.66E-07 & 5.74E-07 & 1.38E-07 & 2.07E-02 & 2.77E-03 & 4.09E-05 & 3.85E-05 \\ \hline Im\{\(p\)\} & 6.43E-07 & 7.30E-08 & 2.88E-07 & 9.12E-08 & 3.61E-03 & 7.41E-04 & 8.96E-05 & 5.62E-05 \\ \hline \end{tabular}
\end{table}
Table 2: Errors of pressures by the BINNs with \(\mathit{Loss}\) and \(\mathit{Loss}_{\mathit{BC}}\) based on four neural networks.
Next, we present a comparison of the accuracy of the numerical results obtained using the BINNs and the traditional PINNs. The same calculated points are distributed on the line \(x_{2}=0.75\) m. The wave number, activation functions of the neural networks, and optimization stopping criteria for both methods remain consistent with the previous settings. The BINNs adopts a single hidden layer comprising 20 neurons, while the PINNs utilizes two different networks: a) a single hidden layer with 20 neurons, and b) three hidden layers, each containing 20 neurons. The collocation points for the PINNs are uniformly distributed within the rectangular domain and its boundary, while for the BINNs, they are only placed on the boundary. Fig. 5 and Fig. 6 plot the convergence curves of the pressures obtained by the BINNs and the PINNs as the number of collocation points increases. Clearly, the BINNs exhibits a faster and more stable convergence rate compared to the PINNs with networks "a" or "b". Furthermore, the precision of the pressures evaluated by the BINNs is higher even with a smaller number of collocation points, as compared to the PINNs. Therefore, in order to achieve comparable precision in pressure calculations, the BINNs necessitates significantly fewer collocation points and hidden layers/neurons compared to the PINNs. This observation also
Figure 4: Convergence process of loss functions constructed with different neural architectures
demonstrates that the BINNs exhibits higher computational efficiency in comparison to the PINNs.
Case 2: Mixed BCs
The mixed BCs are taken into account in this particular case. As depicted in Fig. 3, the left, upper and lower boundaries of the domain are assumed to be rigid, while the right boundary is subjected to a specific condition as
\[p(3,x_{2}^{\prime})=\sin x_{2}^{\prime}+\mathrm{i}\cos x_{2}^{\prime},(3,x_{2} ^{\prime})\in\Gamma \tag{23}\]
The analytical solution for the case is not available.
The wave number is assumed to be \(k=2\) m\({}^{-1}\). Both the BINNs and the PINNs are employed for the numerical simulation of this case for comparison. The activation functions of the neural networks remain the same as in case 1, and the training process for optimization stops after 10000 iterations. The BINNs uses 288 collocation points and a single hidden layer with 20 neurons, while the PINNs uses 1624 collocation points and three hidden layers, each with 25 neurons. Fig. 7 displays the numerical results of the pressures in the entire computational domain. As observed
from the figure, the numerical results obtained by the BINNs show good agreement with those calculated by the PINNs.
### Acoustic radiation of an infinite pulsating cylinder
The second example focuses on the analysis of acoustic radiation from an infinite pulsating cylinder. The cylinder has a radius of \(R=1\,\mathrm{m}\) and its center is located at \((0,0)\). The boundary of the structure has a normal velocity amplitude of \(\overline{v}=1\,\mathrm{m}\,/\,\mathrm{s}\). The analytical solution for the pressure can be determined as
\[p(r)=\mathrm{i}\rho c\overline{v}\,\frac{H_{0}^{1}(kr)}{H_{1}^{1}(kR)},\quad r \geq R. \tag{24}\]
where \(H_{i}^{1}(i=0,1)\) denote the \(i\)-th order Hankel function of the first kind. The medium for the propagation of acoustic waves is assumed to be air, with a density of \(\rho=1.2\,\mathrm{kg}\,/\,\mathrm{m}^{3}\) and a wave speed of \(c=341\,\mathrm{m}\,/\,\mathrm{s}\).
Figure 7: Numerical results of pressures in the rectangle domain: a) real component (the BINNs); b) imaginary component (the BINNs); c) real component (the PINNs); d) imaginary component (the PINNs)
In this simulation, the wave number \(k=1\,\mathrm{m}^{-1}\) is selected. The BINNs employs neural networks consisting of two hidden layers, each comprising 10 neurons. The training process for optimization terminates after 2000 iterations. The present approach utilizes 150 collocation points on the boundary. The chosen activation function is "Swish", as specified in Table 1. Calculated points are distributed within the domain \(\left\{(x_{1},x_{2})\,\middle|\,\sqrt{x_{1}^{2}+x_{2}^{2}}>1,\ -5<x_{1},x_{2}<5\right\}\). Fig. 8 presents the contour plots of relative errors for the real and imaginary components of pressures at the calculated points, as evaluated by the BINNs. It is evident that the present approach yields satisfactory numerical results.
Maintaining the previous settings unaltered, we proceed to validate the impact of the various activation functions listed in Table 1 on the developed method. Table 3 shows the numerical errors of pressures in the domain \(\left\{(x_{1},x_{2})\,\middle|\,\sqrt{x_{1}^{2}+x_{2}^{2}}>1,\ -5<x_{1},x_{2}<5\right\}\), along with the CPU time and the final values of \(Loss\), obtained using the BINNs with different activation functions. The table indicates that the choice of activation function has minimal effect on the precision, the convergence of the loss function, and the efficiency of the BINNs.
Figure 8: Numerical results of pressures calculated by the BINNs
### Acoustic scattering of an infinite rigid cylinder
As the last numerical example, we consider an acoustic scattering phenomenon. A plane incident wave with unit amplitude travels along the positive \(x\)-axis and impinges on an infinite rigid cylinder centered at the point (0, 0) with a radius of \(R=1\) m. The analytical solution of the scattered field is

\[p(r,\theta)=-\sum_{n=0}^{\infty}\varepsilon_{n}\mathrm{i}^{n}\frac{J_{n}^{ \prime}(kR)}{H_{n}^{1\prime}(kR)}H_{n}^{1}(kr)\cos(n\theta),\,\,\,r\geq R \tag{25}\]
where \(J_{n}\) denotes the \(n\)-th order Bessel function, \(H_{n}^{1}\) represents the \(n\)-th order Hankel function of the first kind, \(\theta=0\) along the positive \(x\)-axis, and \(\varepsilon_{n}\) is the Neumann symbol expressed as
\[\varepsilon_{n}=\begin{cases}1,&n=0,\\ 2,&n\geq 1.\end{cases} \tag{26}\]
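A sketch of evaluating the truncated series of Eqs. (25)-(26) with SciPy's Bessel and Hankel derivative routines; the truncation order \(n_{\max}=30\) is an illustrative choice, not a value taken from the paper.

```python
# Truncated series for the field scattered by a rigid cylinder.
import numpy as np
from scipy.special import jvp, hankel1, h1vp

def p_scattered(r, theta, k, R=1.0, n_max=30):
    total = 0.0 + 0.0j
    for n in range(n_max + 1):
        eps = 1.0 if n == 0 else 2.0          # Neumann symbol, Eq. (26)
        coeff = eps * (1j ** n) * jvp(n, k * R) / h1vp(n, k * R)
        total -= coeff * hankel1(n, k * r) * np.cos(n * theta)
    return total

print(p_scattered(r=1.5, theta=0.0, k=0.5))
```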
A neural network with a configuration of two hidden layers, each consisting of 20 neurons, is utilized for the numerical implementation of the BINNs. The wave number is set to \(k=0.5\) m\({}^{-1}\), and a total of 90 collocation points are distributed on the boundary. The activation function is set to \(\sigma(z)=z\left/\left(1+e^{-z}\right)\right.\). Two loss functions, specifically Eq. (19) and Eq. (20), are reconsidered and incorporated into the BINNs for analyzing acoustic fields in unbounded domains. Fig. 9 depicts the convergence behavior of two designated loss functions, namely \(Loss\) and \(Loss_{BC}\), as the iterations progress from 1 to 2000, with measurements taken every 50 iterations. Once again, it is demonstrated that \(Loss\) has a better convergence performance when compared to \(Loss_{BC}\).
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline Activation functions & Arctan & Sigmoid & Swish & Softplus & Tanh \\ \hline Error of \(\text{ Re}\left\{p\right\}\) & 1.10E-06 & 5.44E-06 & 9.97E-07 & 4.59E-07 & 1.02E-06 \\ \hline Error of \(\text{ Im}\left\{p\right\}\) & 1.20E-06 & 6.72E-06 & 1.02E-06 & 6.67E-07 & 2.19E-06 \\ \hline Final value of \(Loss\) & 7.15E-10 & 1.47E-09 & 4.87E-10 & 8.95E-11 & 2.95E-09 \\ \hline CPU time (s) & 21.6 & 23.5 & 21.9 & 22.0 & 21.8 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Impact of various activation functions on the BINNs
The number of collocation points on the boundary is adjusted to 300, and the training process for optimization is conducted over 5000 iterations. All other settings remain unchanged from the previous configuration. Employing the BINNs with _Loss_, Fig. 10 displays the relative errors of pressures in the domain \(\left\{(x_{1},x_{2})\middle|1<\sqrt{x_{1}^{2}+x_{2}^{2}}<2\right\}\) across wave numbers ranging from \(0.5\:\mathrm{m}^{-1}\) to \(10\:\mathrm{m}^{-1}\). As observed in Fig. 10, the developed method obtains accurate numerical results for different wave numbers. Fig. 11 presents the relative errors of pressures at all calculated points within the domain \(\left\{(x_{1},x_{2})\middle|1<\sqrt{x_{1}^{2}+x_{2}^{2}}<2\right\}\), considering a wave number of \(k=5\:\mathrm{m}^{-1}\). It can be observed that the maximum relative error of both the real and imaginary parts of the pressures at these calculated points is below 5E-003.
These numerical results obtained using the BINNs further illustrate the competitiveness of the proposed method in simulating acoustic fields in unbounded domains, surpassing the traditional PINNs.
## 5 Conclusions
The BINNs is proposed in this paper as a numerical approach for analyzing acoustic fields in both bounded and unbounded domains. Unlike the traditional PINNs that combines the governing equation with neural architectures, the proposed method integrates the BIEs and neural networks. Through numerical experiments on various benchmark examples, the BINNs exhibits high accuracy and rapid convergence. Several notable advantages of the BINNs over the traditional PINNs in the context of acoustic radiation and scattering can be summarized as follows:
1) The BINNs only require the coordinates of "boundary" collocation points as input data for the neural networks. The benefit of this is that the method is particularly well-suited for numerical simulations of problems in unbounded domains.
2) The loss function in the BINNs, as defined in Eq. (19), is not a composite form. Therefore, there is no need to consider special techniques to balance the influence between different terms, as described in Eq. (20) or the loss function used in the PINNs. The numerical results also demonstrate the fast convergence of the loss function.
Figure 11: Numerical errors of pressures calculated by the BINNs for \(k=5\) m\({}^{-1}\)
3) To achieve comparable precision in pressure calculations, the BINNs requires significantly fewer collocation points and hidden layers/neurons compared to the PINNs. As a result, the BINNs exhibits higher computational efficiency.
4) The BINNs has higher precision attributed to the semi-analytic characteristic of the BIEs, as evident from the numerical errors of acoustic pressures obtained using this method.
The present approach is introduced to address relatively simple acoustic problems, and several conclusions are summarized. In the future, we aim to extend the application of BINNs to structural-acoustic sensitivity analysis.
## Acknowledgements
The work described in this paper was supported by the Natural Science Foundation of Shandong Province of China (Grant No. ZR2022YQ06), the Development Plan of Youth Innovation Team in Colleges and Universities of Shandong Province (Grant No. 2022KJ140), the National Natural Science Foundation of China (Grant No. 11802165), and the China Postdoctoral Science Foundation (Grant No. 2019M650158).
|
2305.05966 | Graph Neural Networks and 3-Dimensional Topology | We test the efficiency of applying Geometric Deep Learning to the problems in
low-dimensional topology in a certain simple setting. Specifically, we consider
the class of 3-manifolds described by plumbing graphs and use Graph Neural
Networks (GNN) for the problem of deciding whether a pair of graphs give
homeomorphic 3-manifolds. We use supervised learning to train a GNN that
provides the answer to such a question with high accuracy. Moreover, we
consider reinforcement learning by a GNN to find a sequence of Neumann moves
that relates the pair of graphs if the answer is positive. The setting can be
understood as a toy model of the problem of deciding whether a pair of Kirby
diagrams give diffeomorphic 3- or 4-manifolds. | Pavel Putrov, Song Jin Ri | 2023-05-10T08:18:10Z | http://arxiv.org/abs/2305.05966v2 | # Graph Neural Networks and 3-Dimensional Topology
###### Abstract
We test the efficiency of applying Geometric Deep Learning to the problems in low-dimensional topology in a certain simple setting. Specifically, we consider the class of 3-manifolds described by plumbing graphs and use Graph Neural Networks (GNN) for the problem of deciding whether a pair of graphs give homeomorphic 3-manifolds. We use supervised learning to train a GNN that provides the answer to such a question with high accuracy. Moreover, we consider reinforcement learning by a GNN to find a sequence of Neumann moves that relates the pair of graphs if the answer is positive. The setting can be understood as a toy model of the problem of deciding whether a pair of Kirby diagrams give diffeomorphic 3- or 4-manifolds.
## 1 Introduction and Summary
Geometric Deep Learning (GDL) [1] is an area of Machine Learning (ML) that has been under very active development during the last few years. It combines various approaches to ML problems involving data that has some underlying geometric structure. The neural networks used in GDL are designed to naturally take into account the symmetries and the locality of the data. It has been successfully applied to problems involving computer vision, molecule properties, social or citation networks, particle physics, etc (see [2] for a survey). It is natural to apply GDL techniques also to mathematical problems in topology. In general, ML has been already used in various problems in low-dimensional topology, knot theory in particular, [3, 4, 5, 6, 7, 8, 9, 10, 11, 12], as well as various physics-related problems in geometry (for a recent survey see [13]). However, the used neural network models were mostly not specific to GDL.
The goal of this paper is to test the efficiency of GDL in a very simple setting in low-dimensional topology. Namely, we consider a special class of 3-manifolds known as plumbed, or graph, 3-manifolds. Those are 3-manifolds that are specified by a choice of a graph with particular features assigned to edges and vertices. Such 3-manifolds are therefore very well suited for analysis by Graph Neural Networks (GNN). GNN is one of the most important and used types of neural networks used in GDL. In general, GNN are designed to process data represented by graphs.
In this paper, we use GNNs for the following problems involving plumbed 3-manifolds. Different (meaning non-isomorphic) graphs can correspond to equivalent, i.e. homeomorphic, 3-manifolds. Note that in 3 dimensions (or less) any topological manifold has a unique smooth structure, so the notions of homeomorphism and diffeomorphism are equivalent. It is known that a pair of graphs that produce two equivalent 3-manifolds must be related by a sequence of certain _moves_, commonly known as Neumann moves [14]. These moves establish a certain equivalence relation on the graphs (in addition to the standard graph isomorphism). First, we consider a neural network that takes a pair of plumbing graphs as input and outputs a decision on whether the graphs correspond to homeomorphic 3-manifolds, i.e. whether the two graphs are equivalent or not in the sense described above. Supervised Learning (SL) is then used to train the network. The training dataset consists of randomly generated graph pairs for which it is known whether the corresponding 3-manifolds are homeomorphic or not. The trained neural network, up until the very last layer, can be understood to produce an approximate topological invariant of plumbed 3-manifolds.
Second, we consider a neural network for which the input is a plumbing graph and the output is a sequence of Neumann moves that "simplifies" the graph according to a certain criterion. The aim is to build a neural network such that, if it is applied to equivalent graphs, it simplifies them to the same graph. If the result is successful, this can be used to provide an explicit demonstration that a given pair of plumbing graphs represents homeomorphic 3-manifolds.
For both cases, SL and RL, we consider different architectures of the neural networks and compare their performance.
Note that in principle there is an algorithm for determining whether two plumbing graphs give homeomorphic 3-manifolds or not, which was already presented in [14]. It involves bringing both graphs to a certain normal form (which is, in a sense, similar to the "simplification" process in the RL setup mentioned above) and then checking that the normal forms are the same (i.e. isomorphic graphs). However, even checking isomorphism of graphs is not known to be achievable in polynomial time. The plumbing graphs can be considered as a particular class of more general Kirby diagrams that can be used to describe arbitrary closed oriented 3-manifolds, with Neumann moves being generalized to the so-called Kirby moves. Even in this case, there in principle exists an algorithm for checking whether two Kirby diagrams produce homeomorphic 3-manifolds or not [15]. There is also a version of Kirby diagrams and moves for smooth 4-manifolds. In this case, however, an algorithm for the recognition of diffeomorphic pairs does not exist. In 4 dimensions the notions of diffeomorphism and homeomorphism are not the same; in particular, there exist pairs of manifolds that are homeomorphic but not diffeomorphic. While the classification of 4-manifolds up to homeomorphisms (with certain assumptions on the fundamental group) is relatively well understood, classification up to diffeomorphisms is an important open question. The setup with plumbed 3-manifolds that we consider in this paper can be understood as a toy model for the problem of recognition of diffeomorphic pairs of general 3- and 4-manifolds, for which one can try to apply neural networks with similar architecture in the future.
The rest of the paper is organized as follows. In Section 2 we review basic preliminaries about plumbed 3-manifolds and Graph Neural Networks needed for the analysis that follows. In Section 3 we consider various GNN architectures for supervised learning of whether a pair of plumbing graphs provide homeomorphic 3-manifolds or not. In Section 4 we consider reinforcement learning of the process of simplification of a plumbing graph representing a fixed (up to a homeomorphism) 3-manifold. Finally, we conclude with Section 5 where we discuss the obtained results and mention possible further directions. The Appendix A contains some basic algorithms that are specific to the problems considered in this paper.
## 2 Preliminaries
### Plumbed 3-manifolds
In this section we review basic facts about plumbed 3-manifolds, also known as graph 3-manifolds. For a more detailed exposition we refer to the original paper [14]. First, let us describe how to build a 3-manifold from a _plumbing graph_, or simply a _plumbing_. For convenience, we restrict ourselves to the case when the graph is a tree, i.e., the graph is connected and acyclic. We will also consider the case of genus zero plumbings only. In this setting, apart from the graph itself, the only additional information that one needs to specify is the set of integer _weights_\(w(v)\in\mathbb{Z}\) labeling vertices \(v\in V\) (\(V\) denotes the set of all vertices of the graph), also referred to as _framings_ in the context of topology. A typical plumbing graph looks like the one shown in Figure 1. The weights \(w(v)\), together with the standard graph data, can be naturally encoded in an \(|V|\times|V|\) matrix \(a\) with integral elements \(a_{ij}\) as follows. Outside of the diagonal this matrix coincides with the standard adjacency matrix of the graph (i.e. \(a_{ij}=1\) if \(i\neq j\in V\) are connected by an edge, and \(a_{ij}=0\) otherwise). The diagonal elements are given by the weights: \(a_{ii}=w(i)\).
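As a minimal illustration, the matrix \(a\) can be assembled from a plumbing tree and its framings, e.g. with networkx; the example graph and weights below are hypothetical.

```python
# Adjacency matrix of a plumbing tree with framings on the diagonal.
import networkx as nx
import numpy as np

def plumbing_matrix(edges, weights):
    g = nx.Graph(edges)
    nodes = sorted(g.nodes())
    a = nx.to_numpy_array(g, nodelist=nodes, dtype=int)
    for i, v in enumerate(nodes):
        a[i, i] = weights[v]          # a_ii = w(i)
    return a

# A small tree: vertex 0 (weight -2) joined to vertices 1 and 2.
print(plumbing_matrix([(0, 1), (0, 2)], {0: -2, 1: -1, 2: 3}))
```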
One can build a 3-manifold corresponding to such a plumbing graph as follows. First, consider a graph containing a single vertex with weight \(p\in\mathbb{Z}\) and no edges. To such a one-vertex graph we associate lens space 3-manifold \(L(|p|,\pm 1)\), where the sign of \(\pm 1\) coincides with the sign of \(p\). It can be described as a
Figure 1: An example of a plumbing graph.
quotient of the standard unit 3-sphere \(S^{3}=\{\left|z_{1}\right|^{2}+\left|z_{2}\right|^{2}=1\left|\left(z_{1},z_{2} \right)\in\mathbb{C}^{2}\right\}\subset\mathbb{R}^{4}\cong\mathbb{C}^{2}\) with respect to the action of the cyclic group \(\mathbb{Z}_{\left|p\right|}\) of order \(\left|p\right|\), generated by the transformation \(\left(z_{1},z_{2}\right)\rightarrow\left(z_{1}e^{\frac{2\pi i}{\left|p\right|} },z_{2}e^{\frac{2\pi i}{\left|p\right|}}\right)\). This 3-manifold can be equivalently understood as a circle fibration over an \(S^{2}\) base with Euler number \(p\). More explicitly, it can be constructed as follows. Let us start with two copies of \(D^{2}\times S^{1}\) (where \(D^{2}\) denotes the 2-dimensional disk), which can be viewed as trivial circle fibrations over \(D^{2}\). We can then glue the two \(D^{2}\)'s along the common boundary \(\partial D^{2}\cong S^{1}\) into \(S^{2}\) (so that each \(D^{2}\) can be understood as a hemisphere), with the \(S^{1}\) fibers along the two boundaries being glued with relative rotation specified by a certain map \(f:\partial D^{2}\cong S^{1}\to SO(2)\cong S^{1}\). The homotopy class of such a map is completely determined by the "winding number". The homeomorphism class of the resulting closed 3-manifold only depends on this number. To obtain the lens space \(L(p,1)\) one takes the winding number to be \(p\).
Next, consider a vertex with weight \(p\) being a part of a general tree plumbing (as, for example, the one shown in Figure 1). For each edge coming out of the vertex, we remove a single \(S^{1}\) fiber (over some generic point in the \(S^{2}\) base) of the fibration together with its tubular neighborhood. The neighborhood can be chosen to be the restriction of the fibration to a small disk in the \(S^{2}\) base that contains the chosen point. Such an operation, out of the original lens space \(L(p,1)\), produces a 3-manifold that has a boundary component \(\partial D^{2}\times S^{1}\cong S^{1}\times S^{1}=T^{2}\) for each edge coming out of the vertex. Having an edge between a pair of vertices in the graph then corresponds to gluing two \(T^{2}\) boundary components in the way that the two circles, the fiber \(S^{1}\) and the boundary \(S^{1}\) of the small disk on the base, are swapped (with the orientation of one of the circles reversed, so that the resulting 3-manifold is orientable). Performing such operations to all the vertices and edges of the graph one obtains a 3-manifold that has no boundary components. This is the 3-manifold that one associates to the plumbing graph.
Equivalently, to a plumbing graph one can associate a Dehn surgery diagram, in such a way that each vertex \(v\in V\) corresponds to an unknot framed by \(w(v)\), and the presence of an edge between two vertices signifies that the corresponding unknots form a Hopf link.
Applying the prescription described above to different graphs may result in homeomorphic 3-manifolds. In [14] it was proved that this happens if and only if the graphs can be related by a sequence of local graph transformations, or _moves_, now commonly known as _Neumann moves_, shown in Figure 2.
### Graph neural networks
Here we provide a brief review of some GNN components for later use. There are three main computational modules used to build a typical GNN architecture: propagation modules, sampling modules and pooling modules. Since in this paper we only use convolution operators, which are among the most frequently used propagation modules, we focus on a few of them. For a broad review of the various modules, we refer the reader to [16].
Convolution operators are motivated by convolutional neural networks (CNN), which have achieved a notable progress in various areas. In general, the role of convolution operators can be described as
\[\mathbf{x}_{i}^{(k)}=\gamma^{(k)}\left(\mathbf{x}_{i}^{(k-1)},\bigoplus_{j\in \mathcal{N}(i)}\phi^{(k)}\left(\mathbf{x}_{i}^{(k-1)},\mathbf{x}_{j}^{(k-1)}, \mathbf{e}_{j,i}\right)\right),\]
where \(\mathbf{x}_{i}^{(k)}\in\mathbb{R}^{F}\) denotes node features of node \(i\) in the \(k\)-th layer and \(\mathbf{e}_{j,i}\in\mathbb{R}^{D}\) denotes edge features
Figure 2: There are 3 different types of Neumann moves, which preserve the resulting 3-manifold up to homeomorphism. Considering the sign, one can count 8 Neumann moves. Among them, there are 5 blow-up moves (by which a new vertex is created) and 3 blow-down moves (by which one or two vertices are annihilated).
of the edge connecting from node \(j\) to node \(i\). We also note that \(\bigoplus\) over a neighborhood \(\mathcal{N}(i)\) of node \(i\) is a differentiable, permutation invariant function such as sum, mean and max, and \(\gamma\) and \(\phi\) denote differentiable functions such as Multi Layer Perceptrons (MLPs).
Among various convolution operators existing in the literature, the following will appear in the next sections.
* Graph Embedding Network (GEN) [17] GEN is designed for deep graph similarity learning and embeds each graph into a vector, called a graph embedding. More explicitly, it first computes initial node embeddings \(\mathbf{x}_{i}^{(1)}\) from the node features \(\mathbf{x}_{i}^{(0)}\) through an MLP \[\mathbf{x}_{i}^{(1)}=\text{MLP}\left(\mathbf{x}_{i}^{(0)}\right),\] then it performs a single step of message propagation to compute node embeddings \(\mathbf{x}_{i}^{(2)}\) using the information in the local neighbourhood \(\mathcal{N}(i)\)1 Footnote 1: It is also possible to apply a finite number of propagation steps iteratively, but we will only consider single propagation here. \[\mathbf{x}_{i}^{(2)}=\text{MLP}\left(\mathbf{x}_{i}^{(1)},\sum_{j\in\mathcal{N }(i)}\text{MLP}\left(\mathbf{x}_{i}^{(1)},\mathbf{x}_{j}^{(1)}\right)\right).\] Once the node embeddings \(\mathbf{x}_{i}^{(2)}\) are computed, an aggregator computes a graph embedding by aggregating the set of node embeddings. In Section 3.1, we describe the details of the aggregator, which we will apply not only to GEN but also to the other models GCN and GAT.
* Graph Convolutional Network (GCN) GCN is introduced in [18] as a variant of convolutional neural networks for graphs. It operates as the following formula: \[\mathbf{z}_{i}=\boldsymbol{\Theta}^{\intercal}\sum_{j\in\mathcal{N}(i)\cup \{i\}}\frac{1}{\sqrt{\hat{d}_{j}\hat{d}_{i}}}\mathbf{x}_{j},\] where \(\mathbf{z}_{i}\) is the output for the \(i\)-th node, \(\boldsymbol{\Theta}\) is a matrix of filter parameters, and \(\hat{d}_{i}\) is the degree of \(i\)-th node.
* Graph Attention Network (GAT) GAT is proposed in [19], which incorporates the attention mechanism into the message propagation. The mechanism of GAT can be formulated as \[\mathbf{x}_{i}^{\prime}=\alpha_{i,i}\boldsymbol{\Theta}\mathbf{x}_{i}+\sum_{ j\in\mathcal{N}(i)}\alpha_{i,j}\boldsymbol{\Theta}\mathbf{x}_{j}.\] Here the attention coefficients \(\alpha\) are given by \[\alpha_{i,j}=\frac{\exp\left(\text{LeakyReLU}(\mathbf{a}^{\intercal}[ \boldsymbol{\Theta}\mathbf{x}_{i}\|\boldsymbol{\Theta}\mathbf{x}_{j}])\right)} {\sum_{k\in\mathcal{N}(i)\cup\{i\}}\exp\left(\text{LeakyReLU}(\mathbf{a}^{ \intercal}[\boldsymbol{\Theta}\mathbf{x}_{i}\|\boldsymbol{\Theta}\mathbf{x}_{k })]\right)},\] where the attention mechanism \(\mathbf{a}\) is implemented by a single-layer feedforward neural network, and \(\|\) is the concatenation operator. (A short usage sketch of these operators is given after the following paragraph.)
All the neural networks including GNNs are implemented based on PyTorch [20] and PyTorch Geometric [21]. 2
Footnote 2: Python code is available on Github.
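As a minimal usage sketch of the reviewed operators, one can stack the PyTorch Geometric implementations of GCN and GAT on a small plumbing graph whose single node feature is the weight \(w(v)\); the example graph is hypothetical.

```python
# Applying GCNConv and GATConv to a toy plumbing graph.
import torch
from torch_geometric.nn import GCNConv, GATConv

# Tree 0-1, 0-2 with framings (-2, -1, 3); PyG stores each undirected
# edge as two directed edges.
edge_index = torch.tensor([[0, 1, 0, 2],
                           [1, 0, 2, 0]])
x = torch.tensor([[-2.0], [-1.0], [3.0]])   # node feature = weight w(v)

h = GCNConv(1, 128)(x, edge_index)           # first convolution
h = GATConv(128, 128)(h, edge_index)         # second convolution
print(h.shape)                               # torch.Size([3, 128])
```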
## 3 Supervised Learning
In this section we use supervised learning to decide whether or not two plumbing graphs represent the same plumbed 3-manifold. We build 3 models, GEN+GAT, GCN+GCN and GCN+GAT, and examine their performance on the task.3
### Models
All the models are designed to have two convolution operators, one aggregation layer and one classification layer. The models are named by concatenating the names of two convolution operators. For a fair comparison, we use the common aggregation layer and the classification layer, and all the layers have the same dimensions for both input and output.
Since we have already reviewed the convolution operators in Section 2.2, let us now elaborate on the common aggregation layer and classification layer. The aggregator computes a graph embedding by aggregating all of its node embeddings, passed from convolution operators. We use the aggregation layer proposed in [22], which is formulated by
\[\mathbf{h}_{G}=\text{MLP}_{G}\left(\sum_{i\in V}\text{Softmax}(\text{MLP}_{ \text{gate}}(\mathbf{x}_{i}))\odot\text{MLP}(\mathbf{x}_{i})\right),\]
where \(\mathbf{h}_{G}\) is a graph-level output and \(\odot\) denotes element-wise multiplication.
The classification layer serves to determine, for a given pair of plumbing graphs, whether or not they are equivalent. This layer takes the concatenation of two graph embeddings as its input and classifies it into two classes, class 0 and class 1. Here class 1 means the two plumbing graphs are equivalent, while class 0 means they are inequivalent. We implement the classification layer using an MLP with two hidden layers.
Detailed information on the architecture of the 3 models is presented in Table 1. For each layer in the table, the first element in the bracket following a model name denotes the dimension of the layer's input vectors, while the second denotes the dimension of its output embedding.
### Experimental Settings
For training and validation, we put together datasets including 80,000 random pairs of plumbings generated by algorithms presented in Appendix A. More explicitly, the datasets consists of
* 40,000 pairs of equivalent plumbings generated by EquivPair, Algorithm 3, with \(N_{\max}=40\). To generate a pair of equivalent plumbings, the algorithm starts with a random plumbing created by RandomPlumbing, Algorithm 1, and iteratively applies Neumann moves using RandomNeumannMove, Algorithm 2, up to \(N_{\max}\) times, to each plumbing in the pair.
* 30,000 pairs of inequivalent plumbings generated by InequivPair, Algorithm 4, with \(N_{\max}=40\). It follows a process similar to EquivPair, but starts with a pair of inequivalent plumbings, each of which is generated separately by RandomPlumbing.4 Footnote 4: We note that two plumbings, generated by running RandomPlumbing twice, could accidentally be equivalent and this might affect the accuracy of the models in training. However, we will ignore this since it is statistically insignificant.
* 10,000 pairs of inequivalent plumbings generated by TweakPair, Algorithm 5, with \(N_{\max}=40\). This algorithm generates a pair of inequivalent plumbings, one of which is obtained by tweaking the other. Here, by tweaking a plumbing, we mean making a small change to the weight (or node feature) of a randomly chosen node in the plumbing. Since tweaking is different from Neumann moves, this process creates a plumbing inequivalent to the original one. After tweaking, it also applies RandomNeumannMove iteratively up to \(N_{\max}\) times. These pairs are added to the datasets to help the models make the decision boundary more accurate, since for a pair generated by InequivPair, the two plumbings might be quite different due to the random generators.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Layers & GEN+GAT & GCN+GAT & GCN+GCN \\ \hline First convolution & GEN(1, 128) & GCN(1, 128) & GCN(1, 128) \\ \hline Second convolution & GAT(128, 128) & GAT(128, 128) & GCN(128, 128) \\ \hline Aggregation & \multicolumn{3}{c|}{Aggregator(128, 32)} \\ \hline Classification & \multicolumn{3}{c|}{MLP(64, 2)} \\ \hline \end{tabular}
\end{table}
Table 1: The architecture of the 3 models with parameter values.
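A hedged sketch of the GCN+GAT variant of Table 1 is given below. The layer dimensions follow the table; the gated aggregation is a simplified reading of the aggregator formula of Section 3.1, and batching is omitted, so this is illustrative rather than the authors' implementation.

```python
# Pair classifier: two convolutions, gated aggregation, MLP head.
import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv, GATConv

class PairClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = GCNConv(1, 128)
        self.conv2 = GATConv(128, 128)
        self.gate = nn.Linear(128, 32)       # MLP_gate
        self.proj = nn.Linear(128, 32)       # MLP
        self.mlp_g = nn.Linear(32, 32)       # MLP_G
        self.classify = nn.Sequential(       # MLP with two hidden layers
            nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 64),
            nn.ReLU(), nn.Linear(64, 2))

    def embed(self, x, edge_index):
        h = torch.relu(self.conv1(x, edge_index))
        h = self.conv2(h, edge_index)
        # Softmax over nodes gates the per-node contributions.
        gated = torch.softmax(self.gate(h), dim=0) * self.proj(h)
        return self.mlp_g(gated.sum(dim=0))  # graph embedding h_G

    def forward(self, g1, g2):
        e1, e2 = self.embed(*g1), self.embed(*g2)
        return self.classify(torch.cat([e1, e2]))  # logits: class 0 / 1
```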
We divide the datasets into training and validation sets by the ratio 8:2. We train our models on training sets containing 64,000 pairs of plumbings up to 150 epochs. For each model, we use cross-entropy loss for a loss function and Adam for an optimizer with the learning rate 0.001.
### Results
The performance comparison between the 3 models is plotted in Figure 3. We find that the GEN+GAT model significantly outperforms the other models, GCN+GAT and GCN+GCN. The model GCN+GAT seems to outperform GCN+GCN by a few percent, but the performance difference is negligible. 5
Footnote 5: For GCN+GAT and GCN+GCN models, we have checked that increasing weight dimensions and longer training phases did not lead to better performance.
We have also tried other models such as GEN+GEN, GEN+GCN and GAT+GAT to figure out which convolution operators has an important role. The model GEN+GCN shows similar performance with GEN+GAT, but slightly underperforms, and the performance of GAT+GAT is somewhere between that of GEN+GAT and GCN+GAT. This means that GEN plays a significant role to evaluate equivalence or inequivalence for a pair of plumbing graphs. However, we found that GEN+GEN does not perform as good as GEN+GAT or GEN+GCN.
We used the following datasets to test our models:
* Test set 1 It contains 5,000 pairs of equivalent plumbing graphs generated by EquivPair with \(N_{\max}=40\) and 5,000 pairs of inequivalent plumbings generated by InequivPair with \(N_{\max}=40\).
* Test set 2 This dataset is similar to Test set 1, but with \(N_{\max}=60\).
* Test set 3 This set is also similar to Test set 1, but with \(N_{\max}=80\).
* Test set 4 It contains 64 pairs of plumbings generated manually in such a way that, for each pair, the determinants of the adjacency matrices (with weights on the diagonal) of the two plumbings are the same. We use this test set to check that the graph embeddings from the models are not just functions of the determinant of the adjacency matrix of a plumbing. All types of Neumann moves preserve the determinant of the adjacency matrix of the plumbing, which is the order of the first homology group of the corresponding 3-manifold. We want the graph embeddings to depend not only on the determinant, but to be more sophisticated (approximate) invariants of plumbed 3-manifolds.
The results are depicted in Figure 4, and they highlight the following two points. The first point is that the accuracy on Test sets 2 and 3 is at almost the same level as on Test set 1, even though Test sets 2 and 3 contain plumbing pairs with larger \(N_{\max}\) than Test set 1. It is perhaps surprising that
Figure 3: Overview of the performance and loss comparison between GEN+GAT, GCN+GAT and GCN+GCN models.
such a somewhat counter-intuitive property holds even for the GCN+GAT and GCN+GCN models, which show lower training accuracy than GEN+GAT. The second point is that the GEN+GAT model still outperforms the others on Test set 4 and can correctly distinguish even inequivalent pairs with the same determinant. Since GEN is designed for graph similarity learning and good generalization, we can see that the GEN+GAT model significantly outperforms, on the various test sets, the models GCN+GCN and GCN+GAT, which are designed for general classification problems (with a relatively small number of classes).
## 4 Reinforcement Learning
In this section, we consider reinforcement learning with a neural network that, for a given pair of plumbings, allows us not only to recognize whether they are equivalent, but also to find their simplest representations.
### The environment
#### 4.1.1 State space
In our RL environment, a plumbing graph defines the state, and the state space is infinite. To handle the start and terminal states in a simple way, the start state of an episode is set to be a plumbing generated by RandomPlumbing, Algorithm 1, with the number of nodes equal to 10, to which Neumann moves are then applied \(N=15\) times.
Between two equivalent plumbings, we define a relation as follows: for two equivalent states \(s_{1}\) and \(s_{2}\), one state is said to be _simpler_ than the other if
\[f(s_{1})<f(s_{2}),\]
where \(f(s)\) for a state \(s\) is defined by
\[f(s):=5|V(s)|+\sum_{v\in V(s)}|w(v)|. \tag{4.1}\]
It is easy to check that this relation is well-defined on the set of all equivalent plumbings.
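In code, the simplicity score (4.1) is a one-liner; the sketch below assumes a plumbing is stored as a NumPy array of node weights together with an adjacency matrix, as in Algorithm 1.

```python
import numpy as np

def simplicity(weights: np.ndarray) -> int:
    """f(s) = 5 * |V(s)| + sum of |w(v)|, as in (4.1)."""
    return 5 * len(weights) + int(np.abs(weights).sum())

def is_simpler(w1: np.ndarray, w2: np.ndarray) -> bool:
    """True if the first (equivalent) state is simpler in the sense of (4.1)."""
    return simplicity(w1) < simplicity(w2)
```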
One might think that the number of nodes in a state is enough to decide which state is simpler. The reason we also add the sum of the absolute values of the node weights is to make the simplest state generically
Figure 4: Performance comparison between 3 models, GEN+GAT, GCN+GAT and GCN+GCN on various Test sets. The error bars are not displayed in the figure since the standard errors on Test set 1, 2, and 3 are too small (smaller than 0.7) to notice. The standard errors on Test set 4 are about 2.64, 6.25, and 6.20 for GEN+GAT, GCN+GAT and GCN+GCN models, respectively.
unique6. For example, the two plumbings depicted in Figure 5 have the same number of nodes, and it is easy to check that they are equivalent by applying 2 Neumann moves. In this example, by (4.1), the plumbing on the right-hand side is simpler than the other.
Footnote 6: There still could be specific examples with different plumbings in the same equivalence class that minimize \(f(s)\). However, as the results below suggest, such cases are statistically insignificant.
By using this comparison relation, we set the terminal state to be any state equal to or simpler than the initial state of the episode. We also terminate each episode after 15 time steps.
#### 4.1.2 Action space
An action of the agent in a state is a Neumann move applied to one of the nodes. There are 8 possible Neumann moves: 5 blow-up moves and 3 blow-down moves. However, blow-down moves are not always available for all nodes, which could raise the problem that there are too many such _illegal_ actions. Therefore, we merge the 3 blow-down moves into a single action that performs an available blow-down whenever the corresponding node satisfies one of the following three conditions:
* the degree of the node is 2 and its weight is equal to \(\pm 1\),
* the degree of the node is 1 and its weight is equal to \(\pm 1\),
* the degree of the node is 1 and its weight is equal to 0.
Then, for a given state, the total number of possible actions is equal to 6 (5 blow-up moves and 1 merged blow-down move) times the number of nodes in the state. If the agent takes an illegal action, the next state remains the same as the current state and the agent is punished with a negative reward, on which we elaborate below.
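The legality test for the merged blow-down action then reduces to checking the three degree/weight conditions listed above, e.g. as in this minimal sketch:

```python
def blow_down_legal(degree: int, weight: int) -> bool:
    """A node admits the merged blow-down action iff it satisfies one of the
    three conditions above."""
    return ((degree == 2 and abs(weight) == 1)
            or (degree == 1 and abs(weight) == 1)
            or (degree == 1 and weight == 0))
```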
#### 4.1.3 Rewards
Since the goal of the RL agent is to find the simplest representation of an initial state, it is natural to use \(-f(s^{\prime})\) as the reward (or \(+f(s^{\prime})\) as the punishment) for taking an action in the current state \(s\), where \(s^{\prime}\) denotes the next state obtained by applying that action to \(s\). Since all rewards are negative and simpler states are punished less, this encourages the agent not only to make the current representation as simple as possible, but also to do so as quickly as possible. It is also important to note that some states must first acquire a new blow-up node in order to be simplified, which means the agent has to sacrifice immediate reward at some time steps to maximize the total return. As we have seen above, the action space for each state contains some illegal actions. The reward for such an illegal action is set to \(-2f(s^{\prime})\) for the next state \(s^{\prime}\), which remains the same as the current state \(s\) as discussed above.
We set the discount factor as \(\gamma=0.99\), very close to 1.
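Putting the reward scheme into code gives a short helper; this minimal sketch reuses the `simplicity` function from Section 4.1.1, and `done` is the legality flag returned by RandomNeumannMove.

```python
def reward(next_state_weights, done: bool) -> float:
    """-f(s') for a legal action; -2 f(s') for an illegal one
    (in which case s' is the unchanged current state)."""
    f = simplicity(next_state_weights)
    return -float(f) if done else -2.0 * float(f)
```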
### The deep RL algorithm
We recall that the RL task is to obtain the simplest representation of a given initial state by using Neumann moves. To accomplish this task, we used Asynchronous Advantage Actor-Critic (A3C) [23], the asynchronous version of Actor-Critic (AC) [24], with feedforward GNNs. A3C executes multiple local AC agents asynchronously in parallel to decorrelate each local agent's data into
Figure 5: The two plumbings are equivalent and have the same number of nodes. The right-hand side plumbing is simpler than the left one in the sense of (4.1).
a more stationary process. It also offers the practical benefit of running on a multi-core CPU alone, without relying on specialized hardware such as GPUs.
The Actor network defines the policy function \(\pi(a|s)\), whose output gives the probability of taking action \(a\) in state \(s\), while the Critic network approximates the value function \(V^{\pi}(s)\), which represents the expected return from state \(s\). Since the inputs of the Actor and Critic are plumbing graphs, in the context of GNNs the Actor can be thought of as a GNN for a node-level action-selection problem, while the Critic solves a graph-level estimation problem. The architecture of the Actor consists of two graph convolutional layers GCN+GCN and a single-layer feedforward neural network. The Critic has a similar structure, but with an extra aggregation layer, for which we used a simple mean function. We also tried GEN+GAT and GEN+GCN for the convolutional layers in the Actor and Critic networks; they seemed to perform well, but training took somewhat longer than with GCN+GCN. Since the results with GCN+GCN were already quite good, we settled on GCN+GCN.
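A rough sketch of the Actor and Critic in PyTorch Geometric is given below. The GCN+GCN backbone, the node-level action head (6 actions per node), and the mean aggregation in the Critic follow the description above; the hidden width of 128, the ReLU activations, and the use of raw node weights as input features are our assumptions.

```python
import torch
from torch import nn
from torch_geometric.nn import GCNConv, global_mean_pool

class Actor(nn.Module):
    """Node-level policy: 6 action logits per node (5 blow-ups + 1 merged blow-down)."""
    def __init__(self, hidden=128, actions_per_node=6):
        super().__init__()
        self.conv1 = GCNConv(1, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.head = nn.Linear(hidden, actions_per_node)

    def forward(self, x, edge_index):
        h = self.conv2(self.conv1(x, edge_index).relu(), edge_index).relu()
        logits = self.head(h).flatten()        # one logit per (node, move) pair
        return torch.softmax(logits, dim=0)    # policy over actions for a single input graph

class Critic(nn.Module):
    """Graph-level value estimate via mean aggregation of node embeddings."""
    def __init__(self, hidden=128):
        super().__init__()
        self.conv1 = GCNConv(1, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x, edge_index, batch):
        h = self.conv2(self.conv1(x, edge_index).relu(), edge_index).relu()
        return self.head(global_mean_pool(h, batch))
```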
We trained the agents for \(8\times 10^{4}\) episodes using 8 CPU cores and no GPU, which takes around 8 hours. We used the Adam optimizer with learning rate \(5\times 10^{-4}\). For comparison, we also implemented Deep Q-Network (DQN) [25] with the feedforward GNNs GCN+GCN and the same settings as for A3C.
### Results
Our RL agents can be used to find the simplest representative in the equivalence class of a given plumbing graph. Furthermore, they can also be used to check whether a pair of plumbing graphs represents the same 3-manifold. For the latter purpose, we run the RL agents on a pair of plumbings to obtain the simplest representation of each, and then compare the results to decide whether the two plumbings are isomorphic. This process has the additional advantage that, given two equivalent plumbings, we obtain a sequence of Neumann moves that transforms one plumbing into the other, even though such a sequence is not necessarily the optimal one. From this perspective, we check the performance of the RL agents by running them on pairs of plumbings that represent the same 3-manifolds.
For the initial inputs of the agents, we generate 10,000 random pairs of plumbings by EquivPair, Algorithm 3, but with a fixed number of Neumann moves \(N\in\{20,40,60,80,100\}\). At each time step, the agents choose a Neumann move and apply it to each plumbing in a pair, yielding another pair of plumbings as the next input for the agents. After each action, we compare the two plumbings and check whether they are isomorphic. If they are, we count this as a success in finding a sequence of Neumann moves connecting the two plumbings of the initial pair. Otherwise, we move on to the next step and repeat the process until the number of time steps exceeds \(5N\). We define the accuracy as the number of successes divided by the total number of episodes. An example of a pair of equivalent graphs successfully handled by the trained A3C agent is shown in Figure 9.
The results of the RL agents are presented in Figure 6. The plot on the left of Figure 6 shows the accuracy comparison between A3C and DQN. The accuracy of A3C tends to decrease slightly as \(N\) gets larger, but it stays around 93% for all pairs of plumbings. In contrast, the accuracy of DQN drops significantly from around 86% to 42% as \(N\) increases from \(N=20\) to \(N=100\).
On the right of Figure 6, we show the average number of actions that the agent takes until it obtains two isomorphic plumbings from an initial pair of equivalent plumbings. For the A3C agent, the
Figure 6: Performance comparison between A3C and DQN algorithms.
average number of actions does not exceed roughly \(1.35\) times \(N\), which means the trained A3C agent simplifies plumbings efficiently. The DQN agent needs a similar number of actions to A3C for \(N=20\) and \(N=40\); however, it takes almost twice as many actions as A3C for larger \(N\).
We have also studied the distribution of Neumann moves (or actions) that the A3C agent performs before and after training to simplify plumbings generated with \(N=100\). In Figure 7, we plot the number of each Neumann move taken by the agent divided by \(N\). In the plot, moves 1-5 denote blow-up moves and moves 6-8 denote blow-down moves.
It is natural that all blue dots in Figure 7 lie on the line \(y=0.125\), because the untrained agent takes each of the 8 actions equally often from a uniform distribution. On the other hand, the red dots for the trained agent show that it takes blow-down moves (moves 6-8) with a probability of around 75% and blow-up moves (moves 1-5) with the remaining probability. This makes sense, since blow-down moves actually make the plumbing simpler and receive less punishment than blow-up moves. In particular, move 7, the blow-down move of type (b), is the most frequent action, and move 1, the blow-up move of type (a), is the least frequent. This is explained by the fact that move 1 does not help the agent obtain a simpler plumbing.
Before we come to the conclusion, it is interesting to check, using the simple example depicted in Figure 8, whether the trained A3C agent indeed maximizes the total return rather than immediate rewards. The left plumbing in Figure 8 is a standard representation realizing a 3-manifold known as the Brieskorn 3-sphere \(\overline{\Sigma(2,3,5)}\), while the plumbing on the right represents a homeomorphic 3-manifold, which can also be regarded as the boundary of the \(E_{8}\) manifold. As one can see immediately, the plumbing on the right in Figure 8 has no nodes available for blow-down moves. Therefore, in order to obtain the left plumbing from the right one, the RL agent must take appropriate blow-up moves first and only then the available blow-down moves, which is why we chose this example for the test. We note that 6 actions are needed to turn one plumbing into the other in an optimal way.
The trained A3C agent successfully simplifies the \(E_{8}\) plumbing to the \(\overline{\Sigma(2,3,5)}\) plumbing by taking 16 actions, while the trained DQN does not find a solution within 50 actions. This test confirms that the A3C agent indeed pursues its maximal long-term return rather than short-term rewards.
Figure 7: Comparison of the number of Neumann moves taken by a trained A3C agent and an untrained A3C agent to simplify plumbing. The values shown are the total number of Neumann moves of a given type divided by the total number of actions performed, aggregated over multiple examples.
## 5 Conclusion and Future Work
### Conclusion
In this paper we have examined the GNN approach to problems in 3-dimensional topology that ask whether two given plumbing graphs represent the same 3-manifold, and whether it is possible to find a sequence of Neumann moves connecting two plumbings when they are equivalent.
In Section 3, we used supervised learning to solve the binary classification of whether a pair of plumbings is equivalent. We built 3 models by combining the graph convolution operators GEN, GCN and GAT with a graph aggregation module and an MLP classifier. We found that the GEN+GAT model outperformed the GCN+GCN and GCN+GAT models on randomly generated training datasets with a maximal number \(N_{\max}=40\) of applied Neumann moves: GEN+GAT achieved about 95% accuracy while the accuracy of the others stayed below 80%. We also tested the 3 models on randomly generated test sets with larger \(N_{\max}=60\) and \(N_{\max}=80\). Even though the models were trained on training sets with \(N_{\max}=40\), it is an interesting point that they still performed on such test sets at a level similar to their training performance.
In Section 4, we utilized reinforcement learning to find a sequence of Neumann moves relating a given pair of equivalent plumbings. We trained the agent to find the simplest representation of a plumbing using Neumann moves as its actions, where simplicity is defined as a certain linear combination of the number of nodes and the sum of the absolute values of the node features. We ran the trained agent on each of the two equivalent plumbings until it arrived at two isomorphic plumbings; in this way, we can construct a sequence of Neumann moves connecting two equivalent plumbings. Using the A3C algorithm, the agent finds such a sequence for over 90% of randomly generated equivalent plumbing pairs even with \(N_{\max}=100\). This outperforms the DQN agent by a factor of around 1.5 when \(N=60\), and by more than a factor of 2 when \(N=100\).
### Future work
In this paper we have used Geometric Deep Learning, GNNs in particular, for the problem of classification of 3-manifolds up to homeomorphism. We restricted ourselves to a special simple class of 3-manifolds corresponding to tree plumbing graphs. We hope to apply similar neural network models to more general 3-manifolds and also 4-manifolds in the future. One direct generalization would be considering 3-manifolds corresponding to the general plumbing graphs described in [14], possibly disconnected, with loops, and with non-trivial genera assigned to the vertices7. This, in particular, would involve considering extra features associated to the vertices and also to the edges of graphs, as well as an additional set of moves relating equivalent graphs. A more interesting generalization would be considering general Kirby diagrams for 3-manifolds. A Kirby diagram of a 3-manifold is a planar diagram of a link with an integer framing number assigned to each link component. The 3-manifold corresponding to the diagram is then obtained by performing Dehn surgery on this framed link. Two diagrams produce homeomorphic 3-manifolds if and only if they can be related by a sequence of Reidemeister moves (which do not change the isotopy class of the link) together with the so-called Kirby, or equivalently, Fenn-Rourke moves, which do change the link but not the
Figure 8: Two equivalent representations of a plumbed 3-manifold \(\overline{\Sigma(2,3,5)}\).
resulting 3-manifold (up to homeomorphism). Such a diagram can be understood as a 4-regular plane graph with additional data specifying the types of crossings in the link diagram and the framings of the link components. Alternatively, one can consider the Tait graph associated to a checkerboard coloring of the link diagram; for practical purposes, this presentation will most likely be more efficient. The Reidemeister as well as Kirby/Fenn-Rourke moves can then be understood again as certain local operations on the graphs associated with Kirby diagrams. The main new challenge would be incorporating the structure of the planar embedding of the graph into the GNN. This can be done, for example, by specifying the cyclic order of edges at each vertex, or the cyclic order of edges for each face of the plane graph. This additional structure should be taken into account in the layers of the network, which is not the case in most standard GNN architectures. A further step would be the problem of recognizing whether a pair of Kirby diagrams for 4-manifolds produces a diffeomorphic pair. Such Kirby diagrams are again framed link diagrams that also contain special "dotted" link components, and there is a corresponding set of local Kirby moves relating diagrams that realize diffeomorphic 4-manifolds. For a comprehensive reference on Kirby diagrams of 3- and 4-manifolds we refer to [26].
## Acknowledgements
We would like to thank Sergei Gukov for useful comments and suggestions on the draft of the paper. We would also like to thank the anonymous referees who provided insightful and detailed comments and suggestions on an earlier version of the paper.
## Appendix A Algorithms
In this section, we provide details of the algorithms which have been used to generate datasets for training and testing both SL and RL models in Section 3 and Section 4.
* RandomPlumbing This algorithm generates a random plumbing tree by creating a random array of node features and building an adjacency matrix. It starts by choosing a random integer between 1 and 25 as the number of nodes. In general, there are \(N^{N-2}\) different plumbing trees with \(N\) nodes if we do not consider node features, so the upper limit 25 is large enough to generate around \(10^{6}\) random plumbing trees with statistically insignificant overlaps. The array of node features is created by randomly choosing an integer between \(-20\) and \(20\) for each node. Then we define the adjacency matrix of the plumbing tree, and the algorithm returns the pair of node feature array and adjacency matrix as the data of the output plumbing. Note that all random choices are drawn from uniform distributions.
* RandomNeumannMove The role of this algorithm is to apply a randomly chosen Neumann move to a random node of the input plumbing and return the resulting plumbing. A random Neumann move is characterized by 3 variables, i.e., \(type\), \(updown\), and \(sign\). Here \(type\in\{1,2,3\}\) denotes the 3 types of Neumann moves depicted in Figure 2, \(updown\in\{1,-1\}\) indicates blow-up (\(updown=1\)) or blow-down (\(updown=-1\)), and \(sign\in\{1,-1\}\) denotes the sign of the new vertex for blow-up Neumann moves of types (b) and (c); the other moves do not require \(sign\). The algorithm first takes a random node of the input and fixes a random tuple \((type,updown,sign)\) from a uniform distribution. It then builds the new node feature array and adjacency matrix of the plumbing obtained by applying the Neumann move to the chosen node. If the Neumann move determined by the tuple \((type,updown,sign)\) is illegal, the output plumbing is the same as the input. The algorithm also returns a variable \(done\in\{\textsc{True},\textsc{False}\}\), which indicates whether the applied Neumann move is legal (\(done=\textsc{True}\)) or illegal (\(done=\textsc{False}\)). This variable \(done\) will be used to determine the rewards of actions in Section 4.
* EquivPair and InequivPair These are used to generate an equivalent plumbing pair (EquivPair) or an inequivalent plumbing pair (InequivPair). In the first step, EquivPair generates an initial pair of isomorphic plumbings, while InequivPair generates two inequivalent plumbings, using RandomPlumbing. Then they
have the same process, in which they apply Neumann moves iteratively up to \(N_{\max}\) times to each plumbing in the initial pair. They then return the resulting pair as well as a variable named \(label\), which is used for the classification problem in Section 3. Notice that \(label=1\) for EquivPair and \(label=-1\) for InequivPair.
* TweakPair This algorithm generates an inequivalent pair of plumbings with the same graph structure. One plumbing is generated by RandomPlumbing, and the other is obtained by tweaking a copy of the first plumbing, i.e., by making a small change to the feature of a randomly chosen node. These two plumbings form the initial pair. Since the adjacency matrices of the two plumbings are the same, they have the same graph structure; however, due to the small change, the two plumbings are inequivalent. The algorithm then proceeds as in EquivPair and InequivPair, applying random Neumann moves iteratively to each plumbing in the initial pair.
```
\(n\leftarrow\) random integer between \(1\) and \(25\)  \(\triangleright\) number of nodes
\(\mathbf{x}\leftarrow\) array of \(n\) random integers between \(-20\) and \(20\)  \(\triangleright\) node features
\(\mathbf{a}\leftarrow n\times n\) matrix of zeros  \(\triangleright\) initialize the adjacency matrix
for \(i=2\) to \(n\) do  \(\triangleright\) construct the adjacency matrix
  \(j\leftarrow\) random integer between \(1\) and \(i-1\)
  \(\mathbf{a}_{i,j},\mathbf{a}_{j,i}\leftarrow 1\)
end for
\(G\leftarrow(\mathbf{x},\mathbf{a})\)  \(\triangleright\) \(G\) defines the plumbing
return \(G\)
```
**Algorithm 1** RandomPlumbing

```
Require: a plumbing \(G\)
\(v\leftarrow\) a random node of \(G\)
\(type\leftarrow\) a random choice in \(\{1,2,3\}\)
\(updown\leftarrow\) a random choice in \(\{1,-1\}\)
if \(updown=1\) then  \(\triangleright\) blow-up move
  if \(type=1\) then
    \(G^{\prime}\leftarrow\) the plumbing obtained by applying a blow-up move of type (a) to the node \(v\)
  else
    \(sign\leftarrow\) a random choice in \(\{1,-1\}\)
    \(G^{\prime}\leftarrow\) the plumbing obtained by applying the blow-up move determined by \((type,sign)\)
  end if
  \(done\leftarrow\) True
else  \(\triangleright\) blow-down move
  if \(v\) can be removed by a blow-down move then
    \(G^{\prime}\leftarrow\) the plumbing obtained by applying a blow-down move to the node \(v\)
    \(done\leftarrow\) True
  else
    \(G^{\prime}\leftarrow G\)  \(\triangleright\) return the input plumbing for a forbidden move
    \(done\leftarrow\) False
  end if
end if
return \((done,G^{\prime})\)
```
**Algorithm 2** RandomNeumannMove

```
Require: \(N_{\max}\in\mathbb{Z}^{+}\)
\(G\leftarrow\) a plumbing by RandomPlumbing
\(G_{1}\leftarrow G\)
\(n_{1}\leftarrow\) a random integer between \(1\) and \(N_{\max}\)
for \(i=1\) to \(n_{1}\) do
  \(G_{1}\leftarrow\) RandomNeumannMove\((G_{1})\)
end for
\(G_{2}\leftarrow G\)
\(n_{2}\leftarrow\) a random integer between \(1\) and \(N_{\max}\)
for \(j=1\) to \(n_{2}\) do
  \(G_{2}\leftarrow\) RandomNeumannMove\((G_{2})\)
end for
\(label\leftarrow 1\)
return \(G_{1},G_{2},label\)
```
**Algorithm 3** EquivPair

```
Require: \(N_{\max}\in\mathbb{Z}^{+}\)
\(G_{1}\leftarrow\) a plumbing by RandomPlumbing
\(G_{2}\leftarrow\) a plumbing by RandomPlumbing
\(n_{1}\leftarrow\) a random integer between \(1\) and \(N_{\max}\)
for \(i=1\) to \(n_{1}\) do
  \(G_{1}\leftarrow\) RandomNeumannMove\((G_{1})\)
end for
\(n_{2}\leftarrow\) a random integer between \(1\) and \(N_{\max}\)
for \(i=1\) to \(n_{2}\) do
  \(G_{2}\leftarrow\) RandomNeumannMove\((G_{2})\)
end for
\(label\leftarrow -1\)
return \(G_{1},G_{2},label\)
```
**Algorithm 4** InequivPair

```
Require: \(N_{\max}\in\mathbb{Z}^{+}\)
\(G_{1}\leftarrow\) a plumbing by RandomPlumbing
\(G_{2}\leftarrow G_{1}\)
\(v\leftarrow\) a random node in \(G_{2}\)
\(t\leftarrow\) a random integer between \(-3\) and \(3\), not \(0\)
\(\mathbf{x}\leftarrow\) node features of \(G_{2}\)
\(\mathbf{a}\leftarrow\) adjacency matrix of \(G_{2}\)
\(\mathbf{x}_{v}\leftarrow\mathbf{x}_{v}+t\)
\(G_{2}\leftarrow\) the plumbing with \((\mathbf{x},\mathbf{a})\)
\(n_{1}\leftarrow\) a random integer between \(1\) and \(N_{\max}\)
for \(i=1\) to \(n_{1}\) do
  \(G_{1}\leftarrow\) RandomNeumannMove\((G_{1})\)
end for
\(n_{2}\leftarrow\) a random integer between \(1\) and \(N_{\max}\)
for \(i=1\) to \(n_{2}\) do
  \(G_{2}\leftarrow\) RandomNeumannMove\((G_{2})\)
end for
\(label\leftarrow -1\)
return \(G_{1},G_{2},label\)
```
**Algorithm 5** TweakPair
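For reference, Algorithm 1 translates directly into Python; the following is a minimal NumPy sketch, with the integer bounds taken inclusively as in the pseudocode above.

```python
import numpy as np

def random_plumbing(rng: np.random.Generator):
    """Algorithm 1: a uniformly random plumbing tree with weighted nodes."""
    n = int(rng.integers(1, 26))            # number of nodes, 1..25
    x = rng.integers(-20, 21, size=n)       # node features (weights)
    a = np.zeros((n, n), dtype=int)         # adjacency matrix of the tree
    for i in range(1, n):                   # attach node i to a random earlier node
        j = int(rng.integers(0, i))
        a[i, j] = a[j, i] = 1
    return x, a

x, a = random_plumbing(np.random.default_rng(0))
```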
Figure 9: An example of a pair of equivalent plumbing graphs generated by EquivPair with the number of Neumann moves fixed to \(N=40\). The graphs are successfully recognized as equivalent both by the RL agent trained with the A3C algorithm considered in Section 4 and by the GEN+GAT neural network considered in Section 3.
# AONN: An adjoint-oriented neural network method for all-at-once solutions of parametric optimal control problems

Pengfei Yin, Guangqiang Xiao, Kejun Tang, Chao Yang (arXiv:2302.02076)
###### Abstract
Parametric optimal control problems governed by partial differential equations (PDEs) are widely found in scientific and engineering applications. Traditional grid-based numerical methods for such problems generally require repeated solutions of PDEs with different parameter settings, which is computationally prohibitive especially for problems with high-dimensional parameter spaces. Although recently proposed neural network methods make it possible to obtain the optimal solutions simultaneously for different parameters, challenges still remain when dealing with problems with complex constraints. In this paper, we propose AONN, an adjoint-oriented neural network method, to overcome the limitations of existing approaches in solving parametric optimal control problems. In AONN, the neural networks serve as parametric surrogate models for the control, adjoint and state functions to get the optimal solutions all at once. In order to reduce the training difficulty and handle complex constraints, we introduce an iterative training framework inspired by the classical direct-adjoint looping (DAL) method so that penalty terms arising from the Karush-Kuhn-Tucker (KKT) system can be avoided. Once the training is done, parameter-specific optimal solutions can be quickly computed through the forward propagation of the neural networks, which may be further used for analyzing the parametric properties of the optimal solutions. The validity and efficiency of AONN are demonstrated through a series of numerical experiments with problems involving various types of parameters.
AMS subject classifications: 49M41, 49M05, 65N21
## 1 Introduction
Optimal control modeling has been playing an important role in a wide range of applications, such as aeronautics [56], mechanical engineering [52], haemodynamics [43], microelectronics [39], reservoir simulations [57], and environmental sciences [48]. Particularly, to solve a PDE-constrained optimal control problem, one needs to find an optimal control function that can minimize a given cost functional for systems governed by partial differential equations (PDEs). Popular approaches for solving PDE-constrained optimal control problems include the direct-adjoint looping (DAL) method [32, 21] that iteratively solves the adjoint systems, the Newton conjugate gradient method [47] that exploits the Hessian information, the semismooth Newton method [58, 10] that includes control and state constraints, and the alternating direction method of multipliers [8] designed for convex optimization. In practice, the cost functionals and PDE systems often entail different configurations of physical or geometrical parameters, leading to parametric optimal control modeling. These parameters usually arise from certain desired profiles such as material properties, boundary conditions, control constraints, and computational domains [22, 23, 48, 35, 46, 39, 31].
Most of the aforementioned methods cannot be directly applied to parametric optimal control problems. The main reason is that, in addition to the already costly process of solving the PDEs involved in the optimal control modeling, the presence of parameters introduces prominent extra complexity, making parametric optimal control problems much more challenging than nonparametric ones [18]. An efficient method for solving parametric optimal control problems is the reduced order model (ROM) [40, 43, 48, 36], which relies on surrogate models for parametric model order reduction and can provide both efficient and stable approximations if the solutions lie on a low-dimensional subspace [18]. However, because of the coupling of the spatial domain and the parametric domain, the discretization in ROM still suffers from the curse of dimensionality and is thus unable to obtain all-at-once solutions to parametric optimal control problems [22, 43, 40].
Numerical methods based on deep learning have been receiving increasing attention in solving PDEs [41, 42, 12, 16, 59, 44, 45]. Recently, several successes have been achieved in solving PDE-constrained optimal control problems with deep-learning-based approaches. For example, a physics-informed neural network (PINN) method is designed to solve optimal control problems by adding the cost functional to the standard PINN loss [34, 29]. Meanwhile, deep-learning-based surrogate models [56, 30] and operator learning methods [55, 20] have been proposed to achieve fast inference of the optimal control solution without intensive computations. Although these methods are successful in solving optimal control problems, few of them can be directly applied to parametric optimal control modeling. In a recent work [11], an extended PINN is proposed that augments the neural network inputs with parameters, so that the Karush-Kuhn-Tucker (KKT) conditions and neural networks can be combined. In this
way, the optimal solution with a continuous range of parameters could be obtained for parametric optimal control problems with simple constraints. However, it is difficult for this method to generalize to solve more complex parametric optimal control problems, especially when the control function has additional inequality constraints [2, 3]. In such scenarios, too many penalty terms have to be introduced into the loss function to fit the complex KKT system, which is very hard to optimize [26]. A more detailed discussion of aforementioned methods can be found in Section 4.
To tackle the challenges in solving parametric optimal control problems and avoid the curse of dimensionality, we propose AONN, an adjoint-oriented neural network method that combines the advantages of the classic DAL method and deep learning techniques. In AONN, we construct three neural networks with augmented parameter inputs and integrate them into the framework of the DAL method to obtain all-at-once approximations of the control function, the adjoint function, and the state function, respectively. On the one hand, neural networks enable the classic DAL framework to solve parametric problems simultaneously with the aid of random sampling, rather than through a discretization of the coupled spatial and parametric domains. On the other hand, unlike PINN-based penalty methods [42, 29, 34, 11], the introduction of DAL avoids directly solving the complex KKT system with its various penalty terms. Numerical results will show that AONN can obtain high-precision solutions to a series of parametric optimal control problems.
The remainder of the paper is organized as follows. In Section 2, the problem setting is introduced. After that, we will present the AONN framework in Section 3. Some further comparisons between AONN and several recently proposed methods are discussed in Section 4. Then, numerical results are presented in Section 5 to demonstrate the efficiency of the proposed AONN method. The paper is concluded in Section 6.
## 2 Problem setup
Let \(\boldsymbol{\mu}\in\mathcal{P}\subset\mathbb{R}^{D}\) denote a vector that collects a finite number of parameters. Let \(\Omega(\boldsymbol{\mu})\subset\mathbb{R}^{d}\) be a bounded, connected spatial domain depending on \(\boldsymbol{\mu}\), with boundary \(\partial\Omega(\boldsymbol{\mu})\), and let \(\mathbf{x}\in\Omega(\boldsymbol{\mu})\) denote the spatial variable. Consider the following parametric optimal control problem
\[\mathrm{OCP}(\boldsymbol{\mu}):\quad\left\{\begin{aligned} &\min_{(y(\mathbf{x}, \boldsymbol{\mu}),u(\mathbf{x},\boldsymbol{\mu}))\in Y\times U}J(y(\mathbf{x },\boldsymbol{\mu}),u(\mathbf{x},\boldsymbol{\mu});\boldsymbol{\mu}),\\ &\mathrm{s.t.}\ \ \mathbf{F}(y(\mathbf{x},\boldsymbol{\mu}),u( \mathbf{x},\boldsymbol{\mu});\boldsymbol{\mu})=0\ \ \mathrm{in}\ \Omega(\boldsymbol{\mu}),\ \mathrm{and}\ \ u(\mathbf{x},\boldsymbol{\mu})\in U_{ad}( \boldsymbol{\mu}),\end{aligned}\right. \tag{1}\]
where \(J:Y\times U\times\mathcal{P}\mapsto\mathbb{R}\) is a parameter-dependent objective functional, \(Y\) and \(U\) are two proper function spaces defined on \(\Omega(\boldsymbol{\mu})\), with \(y\in Y\) being the state function and \(u\in U\) the control function, respectively. Both \(y\) and \(u\) are dependent on \(\mathbf{x}\) and \(\boldsymbol{\mu}\). To simplify the notation, we denote \(y(\boldsymbol{\mu})=y(\mathbf{x},\boldsymbol{\mu})\) and \(u(\boldsymbol{\mu})=u(\mathbf{x},\boldsymbol{\mu})\). In \(\mathrm{OCP}(\boldsymbol{\mu})\) (1), \(\mathbf{F}\) represents the governing equation, such as, in our case, parameter-dependent PDEs, including the partial differential operator \(\mathbf{F}_{I}\) and the boundary operator \(\mathbf{F}_{B}\) (see Section 5 for examples). The admissible set \(U_{ad}(\boldsymbol{\mu})\) is a parameter-dependent bounded closed convex subset of \(U\), which provides an additional inequality constraint for \(u\), e.g., the box constraint \(U_{ad}(\boldsymbol{\mu})=\{u(\boldsymbol{\mu})\in U:u_{a}(\boldsymbol{\mu}) \leq u(\boldsymbol{\mu})\leq u_{b}(\boldsymbol{\mu})\}\).
Since the \(\mathrm{OCP}(\boldsymbol{\mu})\) (1) is a constrained minimization problem, the necessary condition for the minimizer \((y^{*}(\boldsymbol{\mu}),u^{*}(\boldsymbol{\mu}))\) of (1) is the following KKT system [52, 9, 19]:
\[\begin{cases}J_{y}(y^{*}(\boldsymbol{\mu}),u^{*}(\boldsymbol{\mu}); \boldsymbol{\mu})-\mathbf{F}_{y}^{*}(y^{*}(\boldsymbol{\mu}),u^{*}(\boldsymbol {\mu});\boldsymbol{\mu})p^{*}(\boldsymbol{\mu})=0,\\ \mathbf{F}(y^{*}(\boldsymbol{\mu}),u^{*}(\boldsymbol{\mu});\boldsymbol{\mu})=0,\\ (\mathrm{d}_{u}J(y^{*}(\boldsymbol{\mu}),u^{*}(\boldsymbol{\mu});\boldsymbol{ \mu}),v(\boldsymbol{\mu})-u^{*}(\boldsymbol{\mu}))\geq 0,\ \forall v(\boldsymbol{\mu})\in U_{ad}( \boldsymbol{\mu}),\end{cases} \tag{2}\]
where \(p^{*}(\boldsymbol{\mu})\) is the adjoint function which is also known as the Lagrange multiplier, and \(\mathbf{F}_{y}^{*}(y(\boldsymbol{\mu}),u(\boldsymbol{\mu});\boldsymbol{\mu})\) denotes the adjoint operator of \(\mathbf{F}_{y}(y(\boldsymbol{\mu}),u(\boldsymbol{\mu});\boldsymbol{\mu})\). As \(y(\boldsymbol{\mu})\) can always be uniquely determined by \(u(\boldsymbol{\mu})\) through the state equation \(\mathbf{F}\), the total derivative of \(J\) with respect to \(u\) in (2) can be formulated as
\[\mathrm{d}_{u}J(y^{*}(\boldsymbol{\mu}),u^{*}(\boldsymbol{\mu}); \boldsymbol{\mu})=J_{u}(y^{*}(\boldsymbol{\mu}),u^{*}(\boldsymbol{\mu}); \boldsymbol{\mu})-\mathbf{F}_{u}^{*}(y^{*}(\boldsymbol{\mu}),u^{*}(\boldsymbol {\mu});\boldsymbol{\mu})p^{*}(\boldsymbol{\mu}). \tag{3}\]
The solution of \(\mathrm{OCP}(\boldsymbol{\mu})\) satisfies the system (2), so the key point is to solve this KKT system, from which a minimizer of \(\mathrm{OCP}(\boldsymbol{\mu})\) is expected to be found. In general, solving (2) directly is not a trivial task, and the parametric PDEs involved in the KKT system pose additional computational challenges (e.g., the discretization of parametric spaces). In this work, we focus on a deep learning method to solve (2). More specifically, we use three deep neural networks to approximate \(y^{*}(\boldsymbol{\mu}),u^{*}(\boldsymbol{\mu})\) and \(p^{*}(\boldsymbol{\mu})\) separately, with an efficient training algorithm.
## 3 Methodology
Let \(\hat{y}\left(\mathbf{x}(\boldsymbol{\mu});\boldsymbol{\theta}_{y}\right), \hat{u}\left(\mathbf{x}(\boldsymbol{\mu});\boldsymbol{\theta}_{u}\right)\), and \(\hat{p}\left(\mathbf{x}(\boldsymbol{\mu});\boldsymbol{\theta}_{p}\right)\) be three independent deep neural networks parameterized with \(\boldsymbol{\theta}_{y},\boldsymbol{\theta}_{u}\) and \(\boldsymbol{\theta}_{p}\) respectively. Here, \(\mathbf{x}(\boldsymbol{\mu})\) is the augmented input of neural networks,
which is given by
\[\mathbf{x}(\boldsymbol{\mu})=\left[\begin{array}{cccccccc}x_{1},&\ldots,&x_{d},& \mu_{1},&\ldots,&\mu_{D}\end{array}\right].\]
We then use \(\hat{y}\left(\mathbf{x}(\boldsymbol{\mu});\boldsymbol{\theta}_{y}\right),\hat{u} \left(\mathbf{x}(\boldsymbol{\mu});\boldsymbol{\theta}_{u}\right)\), and \(\hat{p}\left(\mathbf{x}(\boldsymbol{\mu});\boldsymbol{\theta}_{p}\right)\) to approximate \(y^{*}(\boldsymbol{\mu}),u^{*}(\boldsymbol{\mu})\) and \(p^{*}(\boldsymbol{\mu})\) through minimizing three loss functions defined as
\[\mathcal{L}_{s}(\boldsymbol{\theta}_{y},\boldsymbol{\theta}_{u}) =\left(\frac{1}{N}\sum_{i=1}^{N}\left|r_{s}\left(\hat{y}(\mathbf{ x}(\boldsymbol{\mu})_{i};\boldsymbol{\theta}_{y}),\hat{u}(\mathbf{x}( \boldsymbol{\mu})_{i};\boldsymbol{\theta}_{u});\boldsymbol{\mu}_{i}\right) \right|^{2}\right)^{\frac{1}{2}}, \tag{1a}\] \[\mathcal{L}_{a}(\boldsymbol{\theta}_{y},\boldsymbol{\theta}_{u}, \boldsymbol{\theta}_{p}) =\left(\frac{1}{N}\sum_{i=1}^{N}\left|r_{a}\left(\hat{y}( \mathbf{x}(\boldsymbol{\mu})_{i};\boldsymbol{\theta}_{y}),\hat{u}(\mathbf{x}( \boldsymbol{\mu})_{i};\boldsymbol{\theta}_{u}),\hat{p}(\mathbf{x}(\boldsymbol {\mu})_{i};\boldsymbol{\theta}_{p});\boldsymbol{\mu}_{i}\right)\right|^{2} \right)^{\frac{1}{2}},\] (1b) \[\mathcal{L}_{u}(\boldsymbol{\theta}_{u},u_{\mathsf{step}}) =\left(\frac{1}{N}\sum_{i=1}^{N}\left|\hat{u}(\mathbf{x}( \boldsymbol{\mu})_{i};\boldsymbol{\theta}_{u})-u_{\mathsf{step}}(\mathbf{x}( \boldsymbol{\mu})_{i})\right|^{2}\right)^{\frac{1}{2}}, \tag{1c}\]
where \(\{\mathbf{x}(\boldsymbol{\mu})_{i}\}_{i=1}^{N}\) denote the collocation points. The functionals \(r_{s}\) and \(r_{a}\) represent the residuals for the state equation and the adjoint equation induced by the KKT conditions, i.e.,
\[r_{s}(y(\boldsymbol{\mu}),u(\boldsymbol{\mu});\boldsymbol{\mu}) \triangleq\mathbf{F}(y(\boldsymbol{\mu}),u(\boldsymbol{\mu}); \boldsymbol{\mu}), \tag{2a}\] \[r_{a}(y(\boldsymbol{\mu}),u(\boldsymbol{\mu}),p(\boldsymbol{\mu} );\boldsymbol{\mu}) \triangleq J_{y}(y(\boldsymbol{\mu}),u(\boldsymbol{\mu}); \boldsymbol{\mu})-\mathbf{F}_{y}^{*}(y(\boldsymbol{\mu}),u(\boldsymbol{\mu}); \boldsymbol{\mu})p(\boldsymbol{\mu}), \tag{2b}\]
and \(u_{\mathsf{step}}(\mathbf{x}(\boldsymbol{\mu}))\) is an intermediate variable during the update procedure of the control function for the third variational inequality in the KKT conditions (2), which will be discussed in Section 3.2. These three loss functions try to fit the KKT conditions by adjusting the parameters of the three neural networks \(\hat{y}\left(\mathbf{x}(\boldsymbol{\mu});\boldsymbol{\theta}_{y}\right), \hat{u}\left(\mathbf{x}(\boldsymbol{\mu});\boldsymbol{\theta}_{u}\right)\), and \(\hat{p}\left(\mathbf{x}(\boldsymbol{\mu});\boldsymbol{\theta}_{p}\right)\), and the training procedure is performed in a sequential way. The derivatives involved in the loss functions can be computed efficiently by automatic differentiation in deep learning libraries such as TensorFlow [1] or PyTorch [38].
### Deep learning for parametric PDEs
The efficient solution of parametric PDEs is crucial for parametric optimal control modeling because of extra parameters involved in the physical system (1). To deal with parametric PDEs, we augment the input space of the neural networks by taking the parameter \(\boldsymbol{\mu}\) as additional inputs, along with the coordinates \(\mathbf{x}\) to handle the parameter-dependent PDEs. In addition, the penalty-free techniques [44, 27] are employed to enforce boundary conditions in solving parametric PDEs. Next, we illustrate how to apply penalty-free deep neural networks to solve the parametric state equation (2a), which can be directly generalized to the solution of the adjoint equation (2b).
The key point of the penalty-free method is to introduce two neural networks to approximate the solution: one network \(\hat{y}_{B}\) is used to approximate the essential boundary conditions, and the other, \(\hat{y}_{I}\), deals with the rest of the computational domain. In this way, the training difficulties arising from the boundary conditions are eliminated, which improves accuracy and robustness for complex geometries. For problems with simple geometries, we can also construct an analytical expression for \(\hat{y}_{B}\) to further reduce the training cost (see Section 5 for examples). The approximate solution of the state equation is constructed by
\[\hat{y}\left(\mathbf{x}(\boldsymbol{\mu});\boldsymbol{\theta}_{y}\right)=\hat{y }_{B}(\mathbf{x}(\boldsymbol{\mu});\boldsymbol{\theta}_{y_{B}})+\ell(\mathbf{x }(\boldsymbol{\mu}))\hat{y}_{I}\left(\mathbf{x}(\boldsymbol{\mu}); \boldsymbol{\theta}_{y_{I}}\right), \tag{3}\]
where \(\boldsymbol{\theta}_{y}=\{\boldsymbol{\theta}_{y_{B}},\boldsymbol{\theta}_{y_{I}}\}\) collects all parameters of two sub-neural networks \(\hat{y}_{B}\) and \(\hat{y}_{I}\), and \(\ell\) is a length factor function that builds the connection between \(\hat{y}_{B}\) and \(\hat{y}_{I}\), satisfying the following two conditions:
\[\left\{\begin{array}{l}\ell(\mathbf{x}(\boldsymbol{\mu}))>0,\quad\text{in} \;\Omega(\boldsymbol{\mu}),\\ \ell(\mathbf{x}(\boldsymbol{\mu}))=0,\quad\text{on}\;\partial\Omega( \boldsymbol{\mu}).\end{array}\right.\]
The details of constructing the length factor function \(\ell\) can be found in ref.[44]. With these settings, training \(\hat{y}_{B}\) and \(\hat{y}_{I}\) can be performed separately, i.e., one can first train \(\hat{y}_{B}\), and then fix \(\hat{y}_{B}\) to train \(\hat{y}_{I}\). For a fixed \(u(\boldsymbol{\mu})\), we have
\[\mathbf{F}(\hat{y}\left(\mathbf{x}(\boldsymbol{\mu});\boldsymbol{\theta}_{y} \right),u(\boldsymbol{\mu});\boldsymbol{\mu})=\left[\begin{array}{l} \mathbf{F}_{I}(\hat{y}\left(\mathbf{x}(\boldsymbol{\mu});\boldsymbol{\theta}_{y} \right),u(\boldsymbol{\mu});\boldsymbol{\mu})\\ \mathbf{F}_{B}(\hat{y}_{B}(\mathbf{x}(\boldsymbol{\mu});\boldsymbol{\theta}_{y_{B }}),u(\boldsymbol{\mu});\boldsymbol{\mu})\end{array}\right],\]
and the residual of the state equation can be rewritten as
\[r_{s}(\hat{y}\left(\mathbf{x}(\boldsymbol{\mu});\boldsymbol{\theta}_{y}\right),u (\boldsymbol{\mu});\boldsymbol{\mu})=\left[\begin{array}{l}r_{s_{I}}(\hat{y} \left(\mathbf{x}(\boldsymbol{\mu});\boldsymbol{\theta}_{y}\right),u( \boldsymbol{\mu});\boldsymbol{\mu})\\ r_{s_{B}}(\hat{y}_{B}(\mathbf{x}(\boldsymbol{\mu});\boldsymbol{\theta}_{y_{B}}),u( \boldsymbol{\mu});\boldsymbol{\mu})\end{array}\right].\]
We then sample a set \(\{\mathbf{x}(\boldsymbol{\mu})_{i}\}_{i=1}^{N}\) of collocation points to optimize \(\boldsymbol{\theta}_{y}\) through minimizing the state loss function (1a) if \(\boldsymbol{\theta}_{u}\) is fixed.
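A minimal PyTorch sketch of the ansatz (3) is given below for a one-dimensional domain \(\Omega=(0,1)\), where the length factor \(\ell(x)=x(1-x)\) is hard-coded for illustration; the network widths and activations are our assumptions, and the general construction of \(\ell\) follows ref. [44].

```python
import torch
from torch import nn

def mlp(in_dim, width=64, depth=3, out_dim=1):
    layers, d = [], in_dim
    for _ in range(depth):
        layers += [nn.Linear(d, width), nn.Tanh()]
        d = width
    return nn.Sequential(*layers, nn.Linear(d, out_dim))

class StateNet(nn.Module):
    """y_hat(x, mu) = y_B + l(x) * y_I on Omega = (0, 1), following (3)."""
    def __init__(self, d=1, D=1):
        super().__init__()
        self.y_B = mlp(d + D)   # fits the (essential) boundary condition
        self.y_I = mlp(d + D)   # handles the interior of the domain

    def forward(self, x_mu):                 # x_mu: [batch, d + D]
        x = x_mu[:, :1]
        ell = x * (1.0 - x)                  # length factor: > 0 inside, = 0 on the boundary
        return self.y_B(x_mu) + ell * self.y_I(x_mu)
```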
For parametric problems, we take the parameters as additional inputs of the neural networks. This approach has been used to solve parametric forward problems [24] and control problems [49]. A typical way of sampling training points is to separately sample data in \(\Omega\) and \(\mathcal{P}\) to get \(\{\mathbf{x}_{i}\}\) and \(\{\boldsymbol{\mu}_{j}\}\), and then compose product data \(\{(\mathbf{x}_{i},\boldsymbol{\mu}_{j})\}\) in \(\Omega\times\mathcal{P}\). In this work, rather than being taken from each slice of \(\Omega(\boldsymbol{\mu})\) for a fixed \(\boldsymbol{\mu}\), collocation points are sampled in the space \(\Omega_{\mathcal{P}}\), where
\[\Omega_{\mathcal{P}}=\{\mathbf{x}(\boldsymbol{\mu}):\mathbf{x}\in\Omega( \boldsymbol{\mu})\}\]
represents the joint spatio-parametric domain. The reason is that, for parametric geometry problems, as the spatial domain \(\Omega(\boldsymbol{\mu})\) is parameter-dependent, the sampling space cannot be expressed as the Cartesian product of \(\Omega\) and \(\mathcal{P}\).
### Projection gradient descent
Due to the additional inequality constraint \(u\in U_{ad}(\boldsymbol{\mu})\) on the control function, the zero-gradient condition \(\mathrm{d}_{u}J(y^{*}(\boldsymbol{\mu}),u^{*}(\boldsymbol{\mu});\boldsymbol{\mu})=0\) cannot be directly applied at the optimal solution to obtain the update scheme for \(u\). One way to resolve this issue is to introduce additional Lagrange multipliers with slack variables to handle the inequality constraints. However, this brings additional penalty terms that can hamper the optimization procedure [7]. Furthermore, the inequality constraints also lead to non-smoothness of the control function, making it more difficult for penalty methods to capture the singularity [29, 15]. To avoid these issues, we use a simple iterative method to handle the variational inequality without introducing a Lagrange multiplier: a projection gradient descent method, from which we obtain the update scheme for \(u\). The projection operator onto the admissible set \(U_{ad}(\boldsymbol{\mu})\) is defined as:
\[\mathbf{P}_{U_{ad}(\boldsymbol{\mu})}(u(\boldsymbol{\mu}))=\arg\min_{v( \boldsymbol{\mu})\in U_{ad}(\boldsymbol{\mu})}\|u(\boldsymbol{\mu})-v( \boldsymbol{\mu})\|_{2},\]
which performs the projection of \(u(\boldsymbol{\mu})\) onto the convex set \(U_{ad}(\boldsymbol{\mu})\). In practice, the above projection is implemented in a finite dimensional vector space, i.e., \(u(\boldsymbol{\mu})\) is discretized on a set of collocation points (e.g. grids on the domain \(\Omega\)). So it is straightforward to build this projection since \(U_{ad}(\boldsymbol{\mu})\) is a convex set. For example, if
\[U_{ad}(\boldsymbol{\mu})=\{u\in U:u_{a}(\mathbf{x}(\boldsymbol{\mu}))\leq u( \mathbf{x}(\boldsymbol{\mu}))\leq u_{b}(\mathbf{x}(\boldsymbol{\mu})), \forall\mathbf{x}\in\Omega(\boldsymbol{\mu})\} \tag{2}\]
provides a box constraint for \(u\), where \(u_{a}\) and \(u_{b}\) are the lower bound function and the upper bound function respectively, both of which are dependent on \(\boldsymbol{\mu}\), and \([u_{1},\ldots,u_{N}]^{\mathrm{T}}\) represents the control function values at \(N\) collocation points \(\{\mathbf{x}(\boldsymbol{\mu})_{i}\}_{i=1}^{N}\) in \(\Omega_{\mathcal{P}}\), then we can construct the projection \(\mathbf{P}_{U_{ad}(\boldsymbol{\mu})}\) in an entry-wise way [52, 19]:
\[\mathbf{P}_{U_{ad}(\boldsymbol{\mu})}(u_{i})=\begin{cases}u_{a}(\mathbf{x}(\boldsymbol{\mu})_{i}),&\text{if }u_{i}<u_{a}(\mathbf{x}(\boldsymbol{\mu})_{i}),\\ u_{i},&\text{if }u_{a}(\mathbf{x}(\boldsymbol{\mu})_{i})\leq u_{i}\leq u_{b}(\mathbf{x}(\boldsymbol{\mu})_{i}),\\ u_{b}(\mathbf{x}(\boldsymbol{\mu})_{i}),&\text{if }u_{i}>u_{b}(\mathbf{x}(\boldsymbol{\mu})_{i}),\end{cases}\qquad i=1,\ldots,N. \tag{3}\]
The projection gradient step can be carried out according to the above formula, so as to obtain the update of the control function denoted by \(u_{\mathsf{step}}\), which is
\[u_{\mathsf{step}}(\boldsymbol{\mu})=\mathbf{P}_{U_{ad}(\boldsymbol{\mu})}\left( u(\boldsymbol{\mu})-c\,\mathrm{d}_{u}J(y(\boldsymbol{\mu}),u(\boldsymbol{\mu}); \boldsymbol{\mu})\right), \tag{4}\]
and the loss for updating the control function is then naturally defined as in (1c), so that the approximation \(\hat{u}\) is obtained by minimizing (1c).
The optimal control function \(u^{*}(\boldsymbol{\mu})\) satisfies the following variational property:
\[u^{*}(\boldsymbol{\mu})-\mathbf{P}_{U_{ad}(\boldsymbol{\mu})}\left(u^{*}( \boldsymbol{\mu})-c\,\mathrm{d}_{u}J(y^{*}(\boldsymbol{\mu}),u^{*}(\boldsymbol{ \mu});\boldsymbol{\mu})\right)=0,\quad\forall c\geq 0.\]
Here \(\mathrm{d}_{u}J\) is associated with the adjoint function \(p^{*}(\boldsymbol{\mu})\) from total derivative expression (3), and thus we define the residual for the control function
\[r_{v}(y(\boldsymbol{\mu}),u(\boldsymbol{\mu}),p(\boldsymbol{\mu}),c;\boldsymbol {\mu})\triangleq u(\boldsymbol{\mu})-\mathbf{P}_{U_{ad}(\boldsymbol{\mu})} \left(u(\boldsymbol{\mu})-c\mathrm{d}_{u}J(y(\boldsymbol{\mu}),u(\boldsymbol{ \mu});\boldsymbol{\mu})\right), \tag{5}\]
and its corresponding variational loss is defined as
\[\mathcal{L}_{v}(\boldsymbol{\theta}_{y},\boldsymbol{\theta}_{u},\boldsymbol{ \theta}_{p},c)=\left(\frac{1}{N}\sum_{i=1}^{N}|r_{v}\left(\hat{y}(\mathbf{x}( \boldsymbol{\mu})_{i};\boldsymbol{\theta}_{y}),\hat{u}(\mathbf{x}(\boldsymbol{ \mu})_{i};\boldsymbol{\theta}_{u}),\hat{p}(\mathbf{x}(\boldsymbol{\mu})_{i}; \boldsymbol{\theta}_{p}),c;\boldsymbol{\mu}_{i}\right)|^{2}\right)^{\frac{1}{2}}. \tag{6}\]
The first two losses in (10) together with (11) reflect how well \(y(\mathbf{\mu}),u(\mathbf{\mu})\) and \(p(\mathbf{\mu})\) approximate the optimal solution governed by the KKT system (2). Note that \(r_{v}\) and \(\mathcal{L}_{v}\) depend on the constant \(c\), which is actually the step size for gradient descent. The variational loss is constructed to verify the convergence of the algorithm, and \(c\) is often chosen as the last step size.
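For a box constraint, the projection (3) and the residual (5) reduce to element-wise clamps. The following PyTorch sketch (the function names are ours) illustrates both; tensor-valued, \(\boldsymbol{\mu}\)-dependent bounds can be passed to `torch.clamp` in the same way:

```python
import torch

def project_box(u, u_a, u_b):
    # Entry-wise projection (3) onto the box [u_a, u_b].
    return torch.clamp(u, min=u_a, max=u_b)

def control_residual(u, du_J, c, u_a, u_b):
    # Residual (5): r_v = u - P_{U_ad}(u - c * d_u J); zero at the optimum.
    return u - project_box(u - c * du_J, u_a, u_b)

# toy check: an admissible u with vanishing gradient gives zero residual
u = torch.tensor([0.5, 2.0, 2.5])
print(control_residual(u, torch.zeros(3), c=100.0, u_a=0.0, u_b=3.0))
```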
### AONN algorithm
Now, putting everything together, we are ready to present our algorithm. Our goal is to efficiently approximate the minimizer of (1) via adjoint-oriented neural networks (AONN). The overall training procedure of AONN consists of three steps: training \(\hat{y}\left(\mathbf{x}(\mathbf{\mu});\mathbf{\theta}_{y}\right)\), updating \(\hat{p}\left(\mathbf{x}(\mathbf{\mu});\mathbf{\theta}_{p}\right)\) and refining \(\hat{u}\left(\mathbf{x}(\mathbf{\mu});\mathbf{\theta}_{u}\right)\). The schematic of AONN for solving the parametric optimal control problems is shown in Figure 1. The three neural networks \(\left(\hat{y}\left(\mathbf{x}(\mathbf{\mu});\mathbf{\theta}_{y}\right),\hat{p}\left( \mathbf{x}(\mathbf{\mu});\mathbf{\theta}_{p}\right),\hat{u}\left(\mathbf{x}(\mathbf{\mu} );\mathbf{\theta}_{u}\right)\right)\) with augmented parametric input, as illustrated in panels A and B, are optimized by iteratively minimizing the objective functional with respect to the corresponding variables, one at a time. More specifically, according to the loss functions derived from the three equations shown in panel C, the training procedure is performed as in panel D.
Starting with three initial neural networks \(\hat{y}\left(\mathbf{x}(\mathbf{\mu});\mathbf{\theta}_{y}^{0}\right),\hat{p}\left( \mathbf{x}(\mathbf{\mu});\mathbf{\theta}_{p}^{0}\right)\), and \(\hat{u}\left(\mathbf{x}(\mathbf{\mu});\mathbf{\theta}_{u}^{0}\right)\), we train and obtain the state function \(\hat{y}\left(\mathbf{x}(\mathbf{\mu});\mathbf{\theta}_{y}^{1}\right)\) through minimizing \(\mathcal{L}_{s}\left(\mathbf{\theta}_{y},\mathbf{\theta}_{u}^{0}\right)\) (see (11a)), which is equivalent to solving the parameter-dependent state equation. With \(\hat{y}\left(\mathbf{x}(\mathbf{\mu});\mathbf{\theta}_{y}^{1}\right)\), we minimize the loss \(\mathcal{L}_{a}\left(\mathbf{\theta}_{y}^{1},\mathbf{\theta}_{u}^{0},\mathbf{\theta}_{p}\right)\) (see (11b)) for the adjoint equation to get \(\hat{p}\left(\mathbf{x}(\mathbf{\mu});\mathbf{\theta}_{p}^{1}\right)\), corresponding to solving the parameter-dependent adjoint equation. To update the control function \(\hat{u}\left(\mathbf{x}(\mathbf{\mu});\mathbf{\theta}_{u}\right)\), \(u_{\mathsf{step}}^{0}(\mathbf{x}(\mathbf{\mu}))\) is computed first by gradient descent followed by a projection step (see (12)), and then \(\hat{u}\left(\mathbf{x}(\mathbf{\mu});\mathbf{\theta}_{u}^{1}\right)\) is obtained by minimizing \(\mathcal{L}_{u}(\mathbf{\theta}_{u},u_{\mathsf{step}}^{0}(\mathbf{x}(\mathbf{\mu})))\) (see (11c)). Then another iteration starts using \(\mathbf{\theta}_{y}^{1},\mathbf{\theta}_{p}^{1},\mathbf{\theta}_{u}^{1}\) as the initial parameters. In general, the iterative scheme is specified as follows:
\[\text{training }\hat{y}: \mathbf{\theta}_{y}^{k}=\arg\min_{\mathbf{\theta}_{y}}\mathcal{L}_{s} \left(\mathbf{\theta}_{y},\mathbf{\theta}_{u}^{k-1}\right),\] \[\text{updating }\hat{p}: \mathbf{\theta}_{p}^{k}=\arg\min_{\mathbf{\theta}_{p}}\mathcal{L}_{a} \left(\mathbf{\theta}_{y}^{k},\mathbf{\theta}_{u}^{k-1},\mathbf{\theta}_{p}\right),\] \[\text{refining }\hat{u}: \mathbf{\theta}_{u}^{k}=\arg\min_{\mathbf{\theta}_{u}}\mathcal{L}_{u} \left(\mathbf{\theta}_{u},u_{\mathsf{step}}^{k-1}\right),\]
where
\[u_{\mathsf{step}}^{k-1}(\mathbf{x}(\mathbf{\mu}))=\mathbf{P}_{U_{ad}(\mathbf{\mu})} \left(\hat{u}(\mathbf{x}(\mathbf{\mu});\mathbf{\theta}_{u}^{k-1})-c^{k}\mathrm{d}_{u}J (\hat{y}(\mathbf{x}(\mathbf{\mu});\mathbf{\theta}_{y}^{k}),\hat{u}(\mathbf{x}(\mathbf{\mu} );\mathbf{\theta}_{u}^{k-1});\mathbf{\mu})\right), \tag{12}\]
and
\[\begin{split}\mathrm{d}_{u}J\left(\hat{y}(\mathbf{x}(\mathbf{\mu}); \mathbf{\theta}_{y}^{k}),\hat{u}(\mathbf{x}(\mathbf{\mu});\mathbf{\theta}_{u}^{k-1});\mathbf{ \mu}\right)&=J_{u}\left(\hat{y}(\mathbf{x}(\mathbf{\mu});\mathbf{\theta}_ {y}^{k}),\hat{u}(\mathbf{x}(\mathbf{\mu});\mathbf{\theta}_{u}^{k-1});\mathbf{\mu}\right)\\ &-\mathbf{F}_{u}^{*}\left(\hat{y}(\mathbf{x}(\mathbf{\mu});\mathbf{\theta} _{y}^{k}),\hat{u}(\mathbf{x}(\mathbf{\mu});\mathbf{\theta}_{u}^{k-1});\mathbf{\mu}\right) \hat{p}(\mathbf{x}(\mathbf{\mu});\mathbf{\theta}_{p}^{k}).\end{split} \tag{13}\]
The iteration of AONN revolves around the refinement of \(\hat{u}\) with the aid of \(\hat{y}\) and \(\hat{p}\), forming the direct-adjoint looping (DAL), which is indicated by red lines in Figure 1. This procedure shares similarities with the classical DAL framework, but there is a crucial difference between AONN and DAL. That is, a reliable solution of OCP(\(\mathbf{\mu}\)) for any parameter can be efficiently computed from the trained neural networks in our AONN framework, while DAL cannot achieve this. More details can be found in the discussions of Section 4.
The training process is summarized in Algorithm 1, where the loss function \(\mathcal{L}_{v}(\mathbf{\theta}_{y},\mathbf{\theta}_{u},\mathbf{\theta}_{p},c)\) (see (11)) is used for verification. In our practical implementation, we employ the step size decay technique with a decay factor \(\gamma\) for robustness. The AONN method can be regarded as an inexact DAL to some extent, since the state equation and the adjoint equation are not solved accurately but approximated with neural networks at each iteration. So the number of epochs is increased by \(n_{\mathrm{aug}}\) compared with the previous step (on line 9 of Algorithm 1) to ensure accuracy and convergence. It is worth noting that the training of the network \(\hat{u}\left(\mathbf{x}(\mathbf{\mu});\mathbf{\theta}_{u}^{*}\right)\) can be moved after the while loop if the collocation points are always fixed, since training the state function only uses the value of \(u\) at the collocation points (the calculation on line 5 of Algorithm 1).
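A condensed sketch of this alternating loop is given below. The problem-specific residual losses, the total-gradient routine `du_J`, and the projection are abstracted as user-supplied callables; the function names, the use of Adam for the sub-trainings, and the hyperparameter defaults are illustrative assumptions rather than the reference implementation:

```python
import torch

def aonn_train(y_net, p_net, u_net, state_loss, adjoint_loss, du_J, project,
               pts, c0=1.0, gamma=0.985, n0=200, n_aug=100, n_iter=100):
    """Sketch of the alternating AONN loop. `state_loss`, `adjoint_loss`,
    `du_J` (total gradient at the collocation points, eq. (13)) and
    `project` (P_{U_ad}) are problem-specific callables."""
    c, n_epochs = c0, n0
    for k in range(n_iter):
        # 1) training y: minimize the state-equation residual loss
        opt = torch.optim.Adam(y_net.parameters(), lr=1e-3)
        for _ in range(n_epochs):
            opt.zero_grad()
            loss = state_loss(y_net, u_net, pts)
            loss.backward()
            opt.step()
        # 2) updating p: minimize the adjoint-equation residual loss
        opt = torch.optim.Adam(p_net.parameters(), lr=1e-3)
        for _ in range(n_epochs):
            opt.zero_grad()
            loss = adjoint_loss(y_net, u_net, p_net, pts)
            loss.backward()
            opt.step()
        # 3) refining u: fit u_net to the projected-gradient target u_step
        u_step = project(u_net(pts) - c * du_J(y_net, u_net, p_net, pts)).detach()
        opt = torch.optim.Adam(u_net.parameters(), lr=1e-3)
        for _ in range(n_epochs):
            opt.zero_grad()
            loss = ((u_net(pts) - u_step) ** 2).mean()
            loss.backward()
            opt.step()
        c *= gamma                   # step-size decay with factor gamma
        if (k + 1) % 100 == 0:       # grow the epoch budget, as in the tests
            n_epochs += n_aug
    return y_net, p_net, u_net
```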
**Remark 3.1**.: A post-processing step can be applied to continue training \(\mathbf{\theta}_{y}^{*}\) until a more accurate solution of the state function (or the adjoint function) is found. That is, we can fix \(\hat{u}\left(\mathbf{x};\mathbf{\theta}_{u}^{*}\right)\) and train \(\mathbf{\theta}_{y}^{*}\) by minimizing the state loss (3.1a). We can also fix \(\mathbf{\theta}_{y}^{*},\mathbf{\theta}_{u}^{*}\) and train \(\mathbf{\theta}_{p}^{*}\) using (3.1b). The initial step size \(c^{0}\) is crucial for the convergence of Algorithm 1. A large step size may lead to divergence of the algorithm, while a small one could result in slow convergence.
## 4 Comparison with other methods
Unlike solving deterministic optimal control problems, the existence of parameters in \(\mathrm{OCP}(\mathbf{\mu})\) causes difficulties for traditional grid-based numerical methods. A straightforward way is to convert the \(\mathrm{OCP}(\mathbf{\mu})\) into a deterministic optimal control problem. For each realization of
Figure 1: The schematic of AONN for solving the parametric optimal control problems. (A) _Spatial coordinates and parameters form the input of the neural networks._ (B) _AONN consists of three separate neural networks \(\hat{y},\hat{p},\hat{u}\), which return the approximations of the state, the adjoint, and the control, respectively._ (C) _The state equation, the adjoint equation and the projected gradient equation are derived to formulate the corresponding loss functions._ (D) _The gradients in the state PDE and the adjoint PDE are computed via automatic differentiation [38]. \(\hat{y},\hat{p},\hat{u}\) are then trained sequentially via the Adam [25] or the BFGS optimizer._
parameters, the OCP(\(\boldsymbol{\mu}\)) is reduced to the following
\[\text{OCP}:\quad\begin{cases}\min_{(y,u)\in Y\times U}J(y,u),\\ \text{s.t. }\mathbf{F}(y,u)=0\ \ \text{in}\ \Omega,\ \ \text{and}\ u\in U_{ad}. \end{cases} \tag{11}\]
The classical direct-adjoint looping (DAL) method [32, 21] is a popular approach for solving this problem, where an iterative scheme is adopted to converge toward the optimal solution by solving subproblems in the KKT system with numerical solvers (e.g. finite element methods). At each iteration in the direct-adjoint looping procedure, one first solves the governing PDE (10a) and then solves the adjoint PDE (10b) which formulates the total gradient (10c) for the update of the control function.
\[\mathbf{F}(y,u)=0, \tag{12a}\] \[J_{y}(y,u)-\mathbf{F}_{y}^{*}(y,u)p=0,\] (12b) \[\text{d}_{u}J(y,u)=J_{u}(y,u)-\mathbf{F}_{u}^{*}(y,u)p. \tag{12c}\]
Despite its effectiveness, DAL is not able to handle the OCP(\(\boldsymbol{\mu}\)) problem directly, due to the curse of dimensionality in the discretization of \(\Omega_{\mathcal{P}}\). An alternative strategy is the reduced order model (ROM) [36], which relies on surrogate models for parameter-dependent PDEs. The idea is that the solution of the PDE for any parameter can be computed based on a few basis functions that are constructed from the solutions corresponding to some pre-selected parameters. However, ROM is still computationally unaffordable when the parameter-induced solution manifold does not lie on a low-dimensional subspace.
Recently, some deep learning algorithms have been used to solve the optimal control problem for a fixed parameter [29, 34]. By introducing two deep neural networks, the state function \(y\) and the control function \(u\) can be approximated by minimizing the following objective functional:
\[\min_{(y,u)\in Y\times U}J(y,u)+\beta_{1}\mathbf{F}(y,u)^{2}+\beta_{2}\|u- \mathbf{P}_{U_{ad}}(u)\|_{U}, \tag{13}\]
where two penalty terms are added, and \(\beta=(\beta_{1},\beta_{2})\) are two parameters that need tuning. As the penalty parameters increase to \(+\infty\), the solution set of this unconstrained problem approaches the solution set of the constrained one. However, this penalty approach has a serious drawback. On the one hand, as the penalty parameters increase, the optimal solution becomes increasingly difficult to obtain. On the other hand, the constraint is not satisfied well if the penalty parameters are small. To alleviate this difficulty, one can use hPINN [29], which employs the augmented Lagrangian method to solve (13). However, it is still challenging to directly extend this approach to OCP(\(\boldsymbol{\mu}\)) due to the presence of parameters. This is because it is extremely hard to optimize a series of objective functionals with a continuous range of parameters simultaneously.
### PINN for OCP(\(\boldsymbol{\mu}\))
For handling the parametric optimal control problems, an extended PINN method [11] with augmented inputs is used to obtain a more accurate parametric prediction. That is, the inputs of the neural networks consist of two parts: the spatial coordinates and the parameters. The optimal solution for any parameter is approximated by a deep neural network that is obtained from solving the parameter-dependent KKT system (2). In particular, when there is no restriction on the control function \(u(\boldsymbol{\mu})\) (e.g., \(U_{ad}(\boldsymbol{\mu})\) is the full Banach space), the KKT system is
\[\mathcal{F}(y(\boldsymbol{\mu}),u(\boldsymbol{\mu}),p(\boldsymbol{\mu}); \boldsymbol{\mu})=\begin{bmatrix}J_{y}(y(\boldsymbol{\mu}),u(\boldsymbol{\mu}) ;\boldsymbol{\mu})-\mathbf{F}_{y}^{*}(y(\boldsymbol{\mu}),u(\boldsymbol{\mu}) ;\boldsymbol{\mu})p(\boldsymbol{\mu})\\ \text{F}(y(\boldsymbol{\mu}),u(\boldsymbol{\mu});\boldsymbol{\mu})\\ \text{d}_{u}J(y(\boldsymbol{\mu}),u(\boldsymbol{\mu});\boldsymbol{\mu})\end{bmatrix} =\mathbf{0}, \tag{14}\]
where the total gradient \(d_{u}J\) is given in (3). In such cases, one can use the PINN algorithm to obtain the optimal solution through minimizing the least-square loss derived from the KKT system. Nevertheless, to apply this method to the cases where there are some additional constraints on the control function \(u\), such as the box constraint (11), one may need to introduce the Lagrange multipliers \(\lambda(\boldsymbol{\mu})=(\lambda_{a}(\boldsymbol{\mu}),\lambda_{b}( \boldsymbol{\mu}))\) corresponding to the
constraints \(u(\mathbf{\mu})\geq u_{a}(\mathbf{\mu})\) and \(u(\mathbf{\mu})\leq u_{b}(\mathbf{\mu})\). For such cases, the KKT system is
\[\mathcal{F}(y(\mathbf{\mu}),u(\mathbf{\mu}),p(\mathbf{\mu}),\lambda(\mathbf{\mu}); \mathbf{\mu}) =\begin{bmatrix}J_{y}(y(\mathbf{\mu}),u(\mathbf{\mu});\mathbf{\mu})-\mathbf{F }_{y}^{*}(y(\mathbf{\mu}),u(\mathbf{\mu});\mathbf{\mu})p(\mathbf{\mu})\\ \mathbf{F}(y(\mathbf{\mu}),u(\mathbf{\mu});\mathbf{\mu})\\ \mathrm{d}_{u}J(y(\mathbf{\mu}),u(\mathbf{\mu});\mathbf{\mu})-\lambda_{a}(\mathbf{\mu})+ \lambda_{b}(\mathbf{\mu})\\ \lambda_{a}(\mathbf{\mu})(u_{a}(\mathbf{\mu})-u(\mathbf{\mu}))\\ \lambda_{b}(\mathbf{\mu})(u_{b}(\mathbf{\mu})-u(\mathbf{\mu}))\end{bmatrix}=\mathbf{0},\] (11a) and \[\quad\begin{cases}u_{a}(\mathbf{\mu})\leq u(\mathbf{\mu})\leq u_{b}(\mathbf{ \mu}),\\ \lambda_{a}(\mathbf{\mu})\geq 0,\lambda_{b}(\mathbf{\mu})\geq 0.\end{cases} \tag{11b}\]
Applying the PINN framework to solve the system (11) requires dealing with several penalty terms in the loss function, including the penalties for the equality terms (11a) and the inequality terms (11b), leading to inaccurate solutions even for problems with fixed parameters, as will be shown in the next section. In addition, the extra constraint \(U_{ad}(\mathbf{\mu})\) often introduces inequality terms and nonlinear terms and brings more singularities to the optimal control function [2, 3], which limits the application of PINN to solving OCP(\(\mathbf{\mu}\)) with control constraints.
### PINN+Projection for OCP(\(\mathbf{\mu}\))
To find a better baseline for comparison, we propose to improve the performance of PINN by introducing a projection operator. In this way, the KKT system (2) can be reformulated into a more compact condition [37]:
\[\mathcal{F}(y(\mathbf{\mu}),u(\mathbf{\mu}),p(\mathbf{\mu}),c;\mathbf{\mu})=\begin{bmatrix}J_ {y}(y(\mathbf{\mu}),u(\mathbf{\mu});\mathbf{\mu})-\mathbf{F}_{y}^{*}(y(\mathbf{\mu}),u(\mathbf{ \mu});\mathbf{\mu})p(\mathbf{\mu})\\ \mathbf{F}(y(\mathbf{\mu}),u(\mathbf{\mu});\mathbf{\mu})\\ u(\mathbf{\mu})-\mathbf{P}_{U_{ad}(\mathbf{\mu})}\left(u(\mathbf{\mu})-c\mathrm{d}_{u}J(y( \mathbf{\mu}),u(\mathbf{\mu});\mathbf{\mu})\right)\end{bmatrix}=\mathbf{0}, \tag{12}\]
where \(c\) could be any positive number. Note that choosing an appropriate \(c\) can accelerate the convergence of the algorithm. For example, the classic way is to choose \(c=1/\alpha\) for canceling out the control function \(u\) inside the projection operator, where \(\alpha\) is the coefficient of the Tikhonov regularization term (see the experiment in Section 5.1). The complementary conditions and inequalities caused by the control constraints are avoided in (12), thus significantly reducing the difficulty of optimization. In this paper, we call the method PINN+Projection, which combines the projection strategy with the KKT system to formulate the PINN residual loss. Although PINN+Projection alleviates the solving difficulty brought by control constraints to PINN, it still has limitations on nonsmooth optimal control problems. For nonsmooth optimization such as sparse \(L_{1}\)-minimization, the KKT system can no longer be described by (12) because of the nondifferentiable property of the \(L_{1}\)-norm [9]. Instead, the dual multiplier for the \(L_{1}\)-cost term is required. The difficulty arises from the third nonsmooth variational equation of (12), which makes the neural network difficult to train. AONN reduces this difficulty by leveraging the update scheme in the DAL method without the implicit variational equation in (12). Numerical results also show that the KKT system (12) cannot be directly used to formulate the loss functions of neural networks. Such results of \(L_{1}\)-minimization involved in OCP(\(\mathbf{\mu}\)) (see Section 5.5) strongly suggest that AONN is a more reliable and efficient framework.
The proposed AONN method has all the advantages of the aforementioned approaches while avoiding their drawbacks. By inheriting the structure of DAL, the AONN method can obtain an accurate solution through solving the KKT system in an alternative minimization iterative manner. So it does not require the Lagrange multipliers corresponding to the additional control constraints and thus can reduce the storage cost as well as improve the accuracy. Moreover, AONN can accurately approximate the optimal solutions of parametric optimal control problems for any parameter and can be generalized to cases with high-dimensional parameters.
## 5 Numerical study
In this section, we present results of five numerical experiments to illustrate the effectiveness of AONN, where different types of PDE constraints, objective functionals and control constraints under different parametric settings are studied. In the following, AONN is first validated by solving OCP, and further applied to solving OCP(\(\mathbf{\mu}\)) with continuous parameters changing over a specific interval. For comparison purposes, we also use the PINN method and the PINN+Projection method to solve OCP(\(\mathbf{\mu}\)). We employ the ResNet model [17] with sinusoid activation functions to build the neural networks for AONN and the other neural-network-based algorithms. Unless otherwise specified, the quasi-Monte Carlo method is used to generate collocation points from \(\Omega_{\mathcal{P}}\) by calling the SciPy module [53]. Analytical length factor functions (see (11)) are
constructed for all test problems to make the approximate solution naturally satisfy Dirichlet boundary conditions. The training of neural networks is performed on a Geforce RTX 2080 GPU with PyTorch 1.8.1. The Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm with a strong Wolfe line search strategy is used to update the neural network parameters to speed up the convergence, where the maximal number of iterations for BFGS is set to 100.
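As a reference point for this setup, PyTorch exposes L-BFGS (rather than full BFGS) with a strong Wolfe line search through `torch.optim.LBFGS`. Below is a minimal sketch with a sinusoid-activated ResNet block and a placeholder residual loss; the exact arrangement of activations and skip connections inside the block, and all names here, are our assumptions:

```python
import torch

class SinResBlock(torch.nn.Module):
    """A ResNet block with sinusoid activations: two dense layers plus a skip."""
    def __init__(self, width):
        super().__init__()
        self.fc1 = torch.nn.Linear(width, width)
        self.fc2 = torch.nn.Linear(width, width)

    def forward(self, h):
        return h + torch.sin(self.fc2(torch.sin(self.fc1(h))))

net = torch.nn.Sequential(torch.nn.Linear(3, 20), SinResBlock(20),
                          SinResBlock(20), torch.nn.Linear(20, 1))
opt = torch.optim.LBFGS(net.parameters(), max_iter=100,
                        line_search_fn="strong_wolfe")
pts = torch.rand(4096, 3)             # placeholder collocation points (x, mu)

def closure():
    opt.zero_grad()
    loss = net(pts).pow(2).mean()     # placeholder for a PDE residual loss
    loss.backward()
    return loss

opt.step(closure)
```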
### Test 1: Optimal control for the semilinear elliptic equations
We start with the following nonparametric optimal control problem:
\[\left\{\begin{aligned} &\min_{y,u}J(y,u):=\frac{1}{2}\left\|y-y_{d} \right\|_{L_{2}(\Omega)}^{2}+\frac{\alpha}{2}\|u\|_{L_{2}(\Omega)}^{2},\\ &\text{subject to }\left\{\begin{aligned} -\Delta y+y^{3}=u+f& \text{ in }\Omega\\ y=0&\text{ on }\partial\Omega,\\ \end{aligned}\right.\\ &\text{and }\quad u_{a}\leq u\leq u_{b}\quad\text{ a.e. in }\Omega.\end{aligned}\right. \tag{10}\]
The total derivative of \(J\) with respect to \(u\) is \(\mathrm{d}_{u}J(y,u)=\alpha u+p\), where \(p\) is the solution of the corresponding adjoint equation:
\[\left\{\begin{aligned} -\Delta p+3py^{2}&=y-y_{d}& \text{ in }\Omega,\\ p&=0&\text{ on }\partial\Omega.\end{aligned}\right. \tag{11}\]
We take the same configuration as in ref.[14], where \(\Omega=(0,1)^{2}\), \(\alpha=0.01\), \(u_{a}=0\), and \(u_{b}=3\). The analytical optimal solution is given by
\[y^{*} =\sin\left(\pi x_{1}\right)\sin\left(\pi x_{2}\right), \tag{12}\] \[u^{*} =\mathbf{P}_{[u_{a},u_{b}]}(2\pi^{2}y^{*}),\] \[p^{*} =-2\alpha\pi^{2}y^{*},\]
where \(\mathbf{P}_{[u_{a},u_{b}]}\) is the pointwise projection operator onto the interval \([u_{a},u_{b}]\). The desired state \(y_{d}=(1+4\pi^{4}\alpha)y^{*}-3y^{*2}p^{*}\) and the source term \(f=2\pi^{2}y^{*}+y^{*3}-u^{*}\) are given to satisfy the state equation and the adjoint equation.
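The manufactured data of this test can be reproduced directly from (12); a short NumPy sketch (the function name is ours):

```python
import numpy as np

alpha, u_a, u_b = 0.01, 0.0, 3.0

def test1_data(x1, x2):
    """Analytical optimum of Test 1 and the manufactured data (y_d, f)."""
    y = np.sin(np.pi * x1) * np.sin(np.pi * x2)      # y*
    p = -2.0 * alpha * np.pi ** 2 * y                # p*
    u = np.clip(2.0 * np.pi ** 2 * y, u_a, u_b)      # u* = P_[0,3](2 pi^2 y*)
    y_d = (1.0 + 4.0 * np.pi ** 4 * alpha) * y - 3.0 * y ** 2 * p
    f = 2.0 * np.pi ** 2 * y + y ** 3 - u
    return y, p, u, y_d, f
```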
To solve the optimal control problem with AONN, we construct three networks \(\hat{y}_{I}\left(\mathbf{x}(\boldsymbol{\mu});\boldsymbol{\theta}_{y_{I}}\right),\hat{p}_{I}\left(\mathbf{x}(\boldsymbol{\mu});\boldsymbol{\theta}_{p_{I}}\right)\) and \(\hat{u}\left(\mathbf{x}(\boldsymbol{\mu});\boldsymbol{\theta}_{u}\right)\), whose network structures are all comprised of two ResNet blocks, each of which contains two fully connected layers with 15 neurons and a residual connection. We randomly sample \(N=4096\) points inside \(\Omega\) to form the training set. A uniform meshgrid of size \(256\times 256\) in \(\Omega\) is generated for testing and visualization. We use a fixed step size and a fixed number of training epochs in the subproblems, i.e., \(c^{k}\equiv 1/\alpha=100\) and \(n^{k}\equiv 500\). The loss behavior and the relative error \(\|u-u^{*}\|/\|u^{*}\|\) in the \(\ell_{2}\)-norm and the \(\ell_{\infty}\)-norm are reported in Figure 1(a), while Figure 1(b) evaluates the difference between the AONN solution and the analytical solution. As reported in ref.[14], achieving the error \(1\times 10^{-4}\) in the \(\ell_{2}\) sense requires 7733 degrees of freedom with the finite element method, while the AONN method needs only 781 neural network parameters to approximate the control function.
### Test 2: Optimal control for the semilinear elliptic equations with control constraint parametrization
We then consider the same optimal control problem with control constraint parametrization. The control constraint upper bound \(u_{b}\) is set to be a continuous variable \(\boldsymbol{\mu}\) ranging from 3 to 20 instead of a fixed number. Thus (10) actually constructs a series of optimal control problems and the optimal solutions (12) are dependent on \(\boldsymbol{\mu}\). We now verify whether the all-at-once solutions can be obtained by AONN when \(\boldsymbol{\mu}\) changes continuously over the interval \(3\leq\boldsymbol{\mu}\leq 20\). We seek optimal \(y(\boldsymbol{\mu}),u(\boldsymbol{\mu})\) defined by the following problem:
\[\left\{\begin{aligned} &\min_{y(\boldsymbol{\mu}),u( \boldsymbol{\mu})}J(y(\boldsymbol{\mu}),u(\boldsymbol{\mu})):=\frac{1}{2}\left\| y(\boldsymbol{\mu})-y_{d}(\boldsymbol{\mu})\right\|_{L_{2}(\Omega)}^{2}+ \frac{\alpha}{2}\|u(\boldsymbol{\mu})\|_{L_{2}(\Omega)}^{2},\\ &\text{subject to }\left\{\begin{aligned} -\Delta y( \boldsymbol{\mu})+y(\boldsymbol{\mu})^{3}&=u(\boldsymbol{\mu})+f( \boldsymbol{\mu})&\text{ in }\Omega\\ y(\boldsymbol{\mu})&=0&\text{ on }\partial\Omega,\\ \end{aligned}\right.\\ &\text{and }\quad u_{a}\leq u(\boldsymbol{\mu})\leq \boldsymbol{\mu}&\text{ a.e. in }\Omega.\end{aligned}\right. \tag{13}\]
To naturally satisfy the homogeneous Dirichlet boundary conditions in the state equation and the adjoint equation, three neural networks for approximating the AONN solutions of OCP(\(\boldsymbol{\mu}\)) (13) are defined as follows:
\[\hat{y}\left(\mathbf{x}(\boldsymbol{\mu});\boldsymbol{\theta}_{y_{I}}\right) =\ell(\mathbf{x})\hat{y}_{I}\left(\mathbf{x}(\boldsymbol{\mu});\boldsymbol{\theta}_{y_{I}}\right), \tag{14}\] \[\hat{p}\left(\mathbf{x}(\boldsymbol{\mu});\boldsymbol{\theta}_{p_{I}}\right) =\ell(\mathbf{x})\hat{p}_{I}\left(\mathbf{x}(\boldsymbol{\mu});\boldsymbol{\theta}_{p_{I}}\right),\] \[\hat{u}\left(\mathbf{x}(\boldsymbol{\mu});\boldsymbol{\theta}_{u}\right) =\hat{u}_{I}\left(\mathbf{x}(\boldsymbol{\mu});\boldsymbol{\theta}_{u}\right),\]
where the length factor function is formed by
\[\ell(\mathbf{x})=x_{0}(1-x_{0})x_{1}(1-x_{1}). \tag{10}\]
The network structures of \(\hat{y}_{I}\left(\mathbf{x}(\boldsymbol{\mu});\boldsymbol{\theta}_{y_{I}}\right), \hat{p}_{I}\left(\mathbf{x}(\boldsymbol{\mu});\boldsymbol{\theta}_{p_{I}}\right)\) and \(\hat{u}\left(\mathbf{x}(\boldsymbol{\mu});\boldsymbol{\theta}_{u}\right)\) are the same as those of the previous test, except that the input dimension is \(3\) and the number of neurons in each hidden layer is \(20\), resulting in \(1361\) trainable parameters. To evaluate the loss, we sample \(N=20480\) points in the spatio-parametric space \(\Omega_{\mathcal{P}}\). We keep the same step size \(c^{k}\) and training epochs \(n^{k}\) as in the previous test, and perform \(N_{\text{iter}}=20\) iterations until Algorithm 1 converges. The test errors are computed on the uniform meshgrid of size \(256\times 256\) for each realization of \(\boldsymbol{\mu}\).
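The hard-constraint ansatz above can be implemented by multiplying a trunk network by the length factor; a minimal PyTorch sketch with an illustrative plain trunk (the actual networks use ResNet blocks with sinusoid activations, and the class name is ours):

```python
import torch

class HardBC(torch.nn.Module):
    """Multiplies a trunk network by the length factor l(x) so the output
    vanishes on the boundary of the unit square, enforcing y = 0 exactly."""
    def __init__(self, trunk):
        super().__init__()
        self.trunk = trunk

    def forward(self, inp):                    # columns of inp: (x0, x1, mu)
        x0, x1 = inp[:, 0:1], inp[:, 1:2]
        ell = x0 * (1 - x0) * x1 * (1 - x1)    # l(x) = x0(1-x0)x1(1-x1)
        return ell * self.trunk(inp)

trunk = torch.nn.Sequential(torch.nn.Linear(3, 20), torch.nn.Tanh(),
                            torch.nn.Linear(20, 1))
y_hat = HardBC(trunk)                          # the same wrapper applies to p_hat
```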
In Figure 2, we plot the analytical solutions, the AONN solutions and the PINN solutions for eight equidistant realizations of \(\boldsymbol{\mu}\), where it can be seen that the AONN solutions are better than the PINN solutions in the sense of absolute error. Looking more closely, the large errors are concentrated around the location of singularity of \(u\), i.e., the curve of active constraints \(\left\{\mathbf{x}:u(\mathbf{x}(\boldsymbol{\mu}))=\boldsymbol{\mu}\right\}\), except for the case \(\boldsymbol{\mu}=20\) where the inequality constraint is nonactive, keeping the smoothness of the optimal control function. Note that adaptive sampling strategies [51, 50, 13] may be used to improve the accuracy in the singularity region, which will be left for future study.
### Test 3: Optimal control for the Navier-Stokes equations with physical parametrization
The next test case is the parametric optimal control problem
\[\min_{y(\boldsymbol{\mu}),u(\boldsymbol{\mu})}J(y(\boldsymbol{\mu}),u( \boldsymbol{\mu}))=\frac{1}{2}\left\|y(\boldsymbol{\mu})-y_{d}(\boldsymbol{ \mu})\right\|_{L_{2}(\Omega)}^{2}+\frac{1}{2}\|u(\boldsymbol{\mu})\|_{L_{2}( \Omega)}^{2}, \tag{11}\]
subject to the following steady-state incompressible Navier-Stokes (NS) equations:
\[\left\{\begin{aligned} -\boldsymbol{\mu}\Delta y(\boldsymbol{\mu})+(y( \boldsymbol{\mu})\cdot\nabla)y(\boldsymbol{\mu})+\nabla p(\boldsymbol{\mu})& =u(\boldsymbol{\mu})+f(\boldsymbol{\mu})&\text{ in }\Omega,\\ \text{div}\,y(\boldsymbol{\mu})&=0&\text{ in }\Omega,\\ y(\boldsymbol{\mu})&=0&\text{ on }\partial\Omega,\end{aligned}\right. \tag{12}\]
in \(\Omega=(0,1)^{2}\) with parameter \(\boldsymbol{\mu}\) representing the reciprocal of the Reynolds number. Note that the nonparametric problems without control constraint for \(\boldsymbol{\mu}=0.1\) and \(\boldsymbol{\mu}=1.0\) were studied in refs.[28, 54]. We set the physical parameter \(\boldsymbol{\mu}\in[0.1,100]\) and in addition, we consider the following constraint for \(u(\boldsymbol{\mu})=(u_{1}(\boldsymbol{\mu}),u_{2}(\boldsymbol{\mu}))\):
\[u_{1}(\boldsymbol{\mu})^{2}+u_{2}(\boldsymbol{\mu})^{2}\leq r^{2}, \tag{13}\]
Figure 1: Test 1: training loss and test error of problem 1. The test error is evaluated at 256\(\times\)256 grid points. (a) Loss behaviour measured in terms of (10a)-(10b), and test errors in both the \(\ell_{2}\)-norm and the \(\ell_{\infty}\)-norm during the training process. (b) The AONN solution and its absolute errors compared with the analytical solution.
with \(r=0.2\), posing additional challenges to this problem. The desired state \(y_{d}(\mathbf{\mu})\) and the source term \(f(\mathbf{\mu})\) are given in advance to ensure that the analytical solution of the above \(\mathrm{OCP}(\mathbf{\mu})\) is given by
\[y^{*}(\mathbf{\mu}) =e^{-0.05\mathbf{\mu}}\left(\begin{array}{c}\sin^{2}\pi x_{1}\sin\pi x_{2}\cos\pi x_{2}\\ -\sin^{2}\pi x_{2}\sin\pi x_{1}\cos\pi x_{1}\end{array}\right),\] \[\lambda^{*}(\mathbf{\mu}) =\left(e^{-0.05\mathbf{\mu}}-e^{-\mathbf{\mu}}\right)\left(\begin{array}{c}\sin^{2}\pi x_{1}\sin\pi x_{2}\cos\pi x_{2}\\ -\sin^{2}\pi x_{2}\sin\pi x_{1}\cos\pi x_{1}\end{array}\right).\]
The adjoint equation is specified as
\[\left\{\begin{aligned} -\mathbf{\mu}\Delta\lambda(\mathbf{\mu})-(y(\mathbf{\mu}) \cdot\nabla)\lambda(\mathbf{\mu})+(\nabla y(\mathbf{\mu}))^{T}\lambda(\mathbf{\mu})+ \nabla\nu(\mathbf{\mu})&=y(\mathbf{\mu})-y_{d}(\mathbf{\mu})&\text{ in }\Omega,\\ \mathrm{div}\,\lambda(\mathbf{\mu})&=0&\text{ in }\Omega,\\ \lambda(\mathbf{\mu})&=0&\text{ on }\partial\Omega,\end{aligned}\right. \tag{5.10}\]
where \(\lambda(\mathbf{\mu})\) denotes the adjoint velocity and \(\nu(\mathbf{\mu})\) denotes the adjoint pressure. The optimal pressure and adjoint pressure \(p^{*}(\mathbf{\mu}),\nu^{*}(\mathbf{\mu})\) are both zero. In order to satisfy the state equation and the adjoint equation, \(y_{d}(\mathbf{\mu})\) and \(f(\mathbf{\mu})\) are chosen as
\[f(\mathbf{\mu}) =-\mathbf{\mu}\Delta y^{*}(\mathbf{\mu})+(y^{*}(\mathbf{\mu})\cdot\nabla)y^{* }(\mathbf{\mu})-u^{*}(\mathbf{\mu}), \tag{5.11}\] \[y_{d}(\mathbf{\mu}) =y^{*}(\mathbf{\mu})-\left(-\mathbf{\mu}\Delta\lambda^{*}(\mathbf{\mu})+(y^{* }(\mathbf{\mu})\cdot\nabla)\lambda^{*}(\mathbf{\mu})-(\nabla y^{*}(\mathbf{\mu}))^{T} \lambda^{*}(\mathbf{\mu})\right).\]
It is easy to check that the optimal control is \(u^{*}(\mathbf{\mu})=\mathbf{P}_{B(0,r)}(\lambda^{*}(\mathbf{\mu}))\), where \(B(0,r)\) is a ball centered at the origin of radius \(r\). The state equation (5.8) and the adjoint equation (5.10) together with the variational inequality where \(\mathrm{d}_{u}J(\mathbf{\mu})=u(\mathbf{\mu})-\lambda(\mathbf{\mu})\) formulate the optimality system.
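The projection onto the ball \(B(0,r)\) differs from the box clamps of the previous tests; a minimal PyTorch sketch (the function name is ours):

```python
import torch

def project_ball(u, r=0.2):
    """Project each row of u = (u1, u2) onto B(0, r): vectors with norm
    above r are rescaled onto the sphere, the rest are left unchanged."""
    norm = u.norm(dim=-1, keepdim=True)
    # if norm <= r the factor is 1; otherwise it is r / norm
    return u * (r / torch.clamp(norm, min=r))
```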
For AONN, we use a neural network to approximate \(y\), constructed from two ResNet blocks, each of which contains two fully connected layers with 20 units and a residual connection, resulting in 1382 parameters. The neural network for approximating \(p\) has two ResNet blocks built by two fully connected layers with 10 units, resulting in 381 parameters. The architectures of the neural networks for \(\lambda\) and \(\nu\) are the same as those of \(y\) and \(p\), respectively. We select \(N=20000\) randomly sampled points in the spatio-parametric space \(\Omega_{\mathcal{P}}\). The maximum iteration number in Algorithm 1 is set to \(N_{\mathrm{iter}}=300\) and the step size is \(c^{k}\equiv c^{0}=1.0\). We choose an initial training epoch \(n^{0}=200\) and increase it by \(n_{\mathrm{aug}}=100\) after every 100 iterations. For the PINN method, the architectures of the neural networks are the same as those of AONN except for adding another neural network
Figure 5.2: Test 2: the control solutions \(u(\mathbf{\mu})\) of AONN and PINN with eight realizations of \(\mathbf{\mu}\in[3,20]\), and their absolute errors.
for \(\zeta(\mathbf{\mu})\) to satisfy the following KKT system:
\[\left\{\begin{array}{l}\text{state equation }(\ref{eq:KKT}),\\ \text{adjoint equation }(\ref{eq:KKT}),\\ u_{1}(\mathbf{\mu})-\lambda_{1}(\mathbf{\mu})+2u_{1}(\mathbf{\mu})\zeta(\mathbf{\mu})=0,\\ u_{2}(\mathbf{\mu})-\lambda_{2}(\mathbf{\mu})+2u_{2}(\mathbf{\mu})\zeta(\mathbf{\mu})=0,\\ (u_{1}(\mathbf{\mu})^{2}+u_{2}(\mathbf{\mu})^{2}-r^{2})\zeta(\mathbf{\mu})=0,\\ u_{1}(\mathbf{\mu})^{2}+u_{2}(\mathbf{\mu})^{2}\leq r^{2},\zeta(\mathbf{\mu})\geq 0, \end{array}\right. \tag{5.12}\]
where \(\zeta(\mathbf{\mu})\) is the Lagrange multiplier of the control constraint (5.9).
We compare the solutions of AONN with those obtained using PINN and plot their absolute errors in Figure 5, which shows the control function \(u=(u_{1},u_{2})\) for a representative parameter \(\mathbf{\mu}=10\). From the figure, it can be seen that AONN obtains a more accurate optimal control function than PINN, even when the training of PINN costs more epochs than that of AONN. Also, the quadratic constraint is not satisfied well by the PINN solution because there are more penalties from the KKT system (5.12) in the PINN loss. We compute the relative error \(\|u-u^{*}\|/\|u^{*}\|\) on a uniform \(256\times 256\) meshgrid for each parameter \(\mathbf{\mu}\) and plot the results in Figure 6. For most of the parameters, the relative errors of the AONN solutions are smaller than those of PINN, indicating that AONN is more effective and efficient than PINN in solving parametric optimal control problems. Note that this problem becomes harder when the parameter \(\mathbf{\mu}\) gets smaller [54]. In particular, the relative errors of AONN and PINN are both large as \(\mathbf{\mu}\) approaches \(0.1\).
### Test 4: Optimal control for the Laplace equation with geometrical parametrization
In this test case, we are going to solve the following parametric optimal control problem:
\[\left\{\begin{aligned} &\min_{y(\boldsymbol{\mu}),u(\boldsymbol{\mu})}J \left(y(\boldsymbol{\mu}),u(\boldsymbol{\mu})\right)=\frac{1}{2}\left\|y( \boldsymbol{\mu})-y_{d}(\boldsymbol{\mu})\right\|_{L_{2}(\Omega(\boldsymbol{ \mu}))}^{2}+\frac{\alpha}{2}\left\|u(\boldsymbol{\mu})\right\|_{L_{2}(\Omega( \boldsymbol{\mu}))}^{2},\\ &\text{subject to }\begin{cases}-\Delta y(\boldsymbol{\mu})=u( \boldsymbol{\mu})&\text{ in }\Omega(\boldsymbol{\mu}),\\ y(\boldsymbol{\mu})=1&\text{ on }\partial\Omega(\boldsymbol{\mu}),\\ \text{and }&u_{a}\leq u(\boldsymbol{\mu})\leq u_{b}&\text{ a.e. in }\Omega(\boldsymbol{\mu}),\end{cases}\right.\end{aligned}\right. \tag{5.13}\]
where \(\boldsymbol{\mu}=(\mu_{1},\mu_{2})\) represents the geometrical and desired state parameters. The parametric computational domain is \(\Omega(\boldsymbol{\mu})=([0,2]\times[0,1])\backslash B((1.5,0.5),\mu_{1})\) and the desired state is given by
\[y_{d}(\boldsymbol{\mu})=\begin{cases}1&\text{ in }\Omega_{1}=[0,1]\times[0,1], \\ \mu_{2}&\text{ in }\Omega_{2}(\boldsymbol{\mu})=([1,2]\times[0,1]) \backslash B((1.5,0.5),\mu_{1}),\end{cases} \tag{5.14}\]
where \(B((1.5,0.5),\mu_{1})\) is a ball of radius \(\mu_{1}\) with center \((1.5,0.5)\). We set \(\alpha=0.001\) and the parameter interval to be \(\boldsymbol{\mu}\in\mathcal{P}=[0.05,0.45]\times[0.5,2.5]\).
This test case is inspired by the literature [36, 23] that involves the application of local hyperthermia treatment of cancer. In such a case, it is expected to achieve a certain temperature field in the tumor area and another temperature field in the non-lesion area through heat source control. The circle cut out from the rectangular area represents a certain body organ, and by using AONN we aim to obtain all-at-once solutions of the optimal heat source control for different expected temperature fields and different organ shapes. In particular, we consider a two-dimensional model problem corresponding to hyperthermia cancer treatment. One difficulty of this problem is the geometrical parameter \(\mu_{1}\), which leads to varying computational domains and thus causes difficulties in applying traditional mesh-based numerical methods. In the AONN framework, we can solve this problem by sampling in the spatio-parametric space:
\[\Omega_{\mathcal{P}}=\{(x_{0},x_{1},\mu_{1},\mu_{2})|0\leq x_{0}\leq 2,\,0\leq x _{1}\leq 1,\,0.05\leq\mu_{1}\leq 0.45,\,0.5\leq\mu_{2}\leq 2.5,\,(x_{0}-1.5)^{2}+(x _{1}-0.5)^{2}\geq\mu_{1}^{2}\}.\]
The computational domain \(\Omega(\boldsymbol{\mu})\) as well as the 40000 training points are given in Figure 5.5(a) and Figure 5.5(b).
The state neural network \(\hat{y}\) is constructed by \(\hat{y}\left(\mathbf{x}(\boldsymbol{\mu});\boldsymbol{\theta}_{y_{I}}\right)= \ell(\mathbf{x},\boldsymbol{\mu})\hat{y}_{I}\left(\mathbf{x}(\boldsymbol{\mu}); \boldsymbol{\theta}_{y_{I}}\right)+1\) to naturally satisfy the Dirichlet boundary condition (5.13), where the length factor function is
\[\ell(\mathbf{x},\boldsymbol{\mu})=x_{0}(2-x_{0})x_{1}(1-x_{1})(\mu_{1}^{2}-(x_ {0}-1.5)^{2}-(x_{1}-0.5)^{2}).\]
The three neural networks \(\hat{y}_{I}\left(\mathbf{x}(\boldsymbol{\mu});\boldsymbol{\theta}_{y_{I}}\right),\hat{p}_{I}\left(\mathbf{x}(\boldsymbol{\mu});\boldsymbol{\theta}_{p_{I}}\right)\) and \(\hat{u}\left(\mathbf{x}(\boldsymbol{\mu});\boldsymbol{\theta}_{u}\right)\) are comprised of three ResNet blocks, each of which contains two fully connected layers with 25 units and a residual connection. The input dimension of these three neural networks is 4 and their total number of parameters is \(3\times 3401=10203\). We take \(\gamma=0.985\), and the number of epochs for training the state function and the adjoint function increases from 200 to 700 during training. The configurations of the neural networks for the PINN and PINN+Projection methods are the same as those of AONN, and the number of training epochs is 50000. For this test problem, the AONN algorithm converges in 300 steps. Note that our AONN method can obtain all-at-once solutions for any parameter \(\boldsymbol{\mu}\). To evaluate the performance of AONN, we employ the classical finite element method to solve the OCP\((\boldsymbol{\mu})\) with a fixed parameter. More specifically, a limited-memory BFGS algorithm with support for bound constraints is adopted in dolfin-adjoint [33] to solve the corresponding OCP. The solution obtained using dolfin-adjoint can be regarded as the ground truth. Among the four methods, AONN, PINN and PINN+Projection are able to solve parametric optimal control problems, while the dolfin-adjoint solver can only solve the optimal control problem with a fixed parameter.
Figure 5.6 shows the optimal control solution obtained using AONN for the parametric optimal control problem (5.13). We choose several different parameters \(\boldsymbol{\mu}\) for visualization. The left column of Figure 5.6 corresponds to \(\mu_{2}=1\), in which case the optimal control is exactly zero because the desired state is achievable with \(y=y_{d}\equiv 1\). The middle and right columns of Figure 5.6 indicate that decreasing \(\mu_{1}\) and increasing \(\mu_{2}\) increase the magnitude of \(u\). The results obtained by the dolfin-adjoint solver, AONN, PINN and PINN+Projection with different values of \(c\) are displayed in Figure 5.7, where the control functions at \(\boldsymbol{\mu}=(0.3,2.5)\) are compared. A mesh with 138604 triangular elements is used in the dolfin-adjoint solver, and after 16 steps, the final projected gradient norm reaches \(2.379\times 10^{-10}\). Figure 5.7 shows that AONN can
converge to the reference solution obtained by the dolfin-adjoint solver but PINN cannot obtain an accurate solution, while the results of PINN+Projection depend heavily on the choice of \(c\) in (4.6). When \(c\) is not equal to \(1/\alpha=1000\), the PINN+Projection method is not guaranteed to converge to the reference solution. This also confirms that the variational loss (3.7) brings great difficulties to the neural network training of the KKT system (4.6), unless \(c=1/\alpha\), in which case the control function \(u\) is canceled out inside the projection operator.
\[\mathbf{P}_{U_{ad}(\boldsymbol{\mu})}\left(u(\boldsymbol{\mu})-c\mathrm{d}_{u }J(y(\boldsymbol{\mu}),u(\boldsymbol{\mu});\boldsymbol{\mu})\right)=\mathbf{P }_{U_{ad}(\boldsymbol{\mu})}\left(u(\boldsymbol{\mu})-c(\alpha u(\boldsymbol {\mu})+p(\boldsymbol{\mu}))\right)=\mathbf{P}_{U_{ad}(\boldsymbol{\mu})}\left( -\frac{1}{\alpha}p(\boldsymbol{\mu})\right).\]
However, for the next non-smooth test problem, \(u\) cannot be separated from the variational loss for any \(c\), which results in failure for the PINN+Projection method. To demonstrate that AONN can get all-at-once solutions, we first take a \(100\times 100\) grid of \(\mathcal{P}\) and choose several different realizations of \(\boldsymbol{\mu}\) to solve their corresponding OCP using the dolfin-adjoint solver. Then the parameters on the grid together with spatial coordinates are run through the trained neural networks obtained by Algorithm 1 to get the optimal solutions of OCP(\(\boldsymbol{\mu}\)) all at once. It is worth noting that using the dolfin-adjoint solver to compute the optimal solutions for all parameters on the \(100\times 100\) grid is computationally expensive, since \(10000\) simulations are required. So we only take \(16\) representative points on the grid for the dolfin-adjoint solver (it still takes several hours). Nevertheless, all-at-once solutions can be computed effectively and efficiently through our AONN framework. Figure 8 displays three quantities with respect to \(\mu_{1},\mu_{2}\): Figure 8(a) shows the objective functional \(J\), Figure 8(b) shows the accessibility of the desired state, and Figure 8(c) displays the \(L_{2}\)-norm of the optimal control \(u\). The red dots in Figure 8 show the results obtained by the dolfin-adjoint solver, where \(16\) simulations of OCP with \((\mu_{1},\mu_{2})\in\{0.05,0.1833,0.3167,0.45\}\times\{0.5,1.1667,1.8333,2.5\}\) are performed. From Figure 8, it is clear that AONN can obtain accurate solutions.
### Test 5: Optimal control for the semilinear elliptic equations with sparsity parametrization
In this test problem, we again consider a control problem for the semilinear elliptic equations, as in (5.1). However, this time we consider a sparse optimal control problem with sparsity parametrization. The sparse solution in optimal control is often achieved through an \(L_{1}\)-control cost [5, 6, 4], and its application to controller placement problems is well studied [46]. Specifically, we consider the following objective functional with \(L_{1}\)-control cost:
\[J(y,u)=\frac{1}{2}\left\|y-y_{d}\right\|_{L_{2}}^{2}+\frac{\alpha}{2}\|u\|_{L _{2}}^{2}+\beta\|u\|_{L_{1}},\]
where the coefficient \(\beta\) of the \(L_{1}\)-term controls the sparsity of the control function \(u\). With the increase of \(\beta\), the optimal control gradually becomes sparser and eventually reaches zero. In order to observe this phenomenon continuously, we solve the following parametric optimal control problem by setting \(\beta\) as a variable
Figure 5: Test 4: (a) The parametric computational domain \(\Omega(\boldsymbol{\mu})\). (b) \(N=40000\) training collocation points sampled in \(\Omega_{\mathcal{P}}\) (there are no points inside the frustum).
Figure 6: Test 4: the AONN solutions \(u(\mathbf{\mu})\) with several realizations of \(\mathbf{\mu}=(\mu_{1},\mu_{2})\).
Figure 7: Test 4: the solution obtained by the dolfin-adjoint solver for a fixed parameter \(\mathbf{\mu}=(0.3,2.5)\), the approximate solutions of \(u\) obtained by AONN, PINN, PINN+Projection (with different \(c=100,1000,10000\)), and the absolute errors of the AONN solution and the PINN+Projection solution with \(c=\frac{1}{\alpha}=1000\).
parameter \(\boldsymbol{\mu}\),
\[\left\{\begin{aligned} &\min_{y(\boldsymbol{\mu}),u(\boldsymbol{\mu})}J(y (\boldsymbol{\mu}),u(\boldsymbol{\mu});\boldsymbol{\mu}):=\frac{1}{2}\left\|y( \boldsymbol{\mu})-y_{d}\right\|_{L_{2}(\Omega)}^{2}+\frac{\alpha}{2}\|u( \boldsymbol{\mu})\|_{L_{2}(\Omega)}^{2}+\boldsymbol{\mu}\|u(\boldsymbol{\mu}) \|_{L_{1}(\Omega)},\\ &\text{subject to }\begin{cases}-\Delta y(\boldsymbol{\mu})+y( \boldsymbol{\mu})^{3}=u(\boldsymbol{\mu})&\text{ in }\Omega,\\ &y(\boldsymbol{\mu})=0&\text{ on }\partial\Omega,\\ \end{cases}\\ \text{and}\quad u_{a}\leq u(\boldsymbol{\mu})\leq u_{b}&\text{ a.e. in }\Omega.\end{aligned}\right. \tag{13}\]
We fix the remaining parameters as
\[\Omega=B(0,1),\] \[\alpha=0.002,u_{a}=-12,u_{b}=12,\] \[y_{d}=4\sin\left(2\pi x_{1}\right)\sin\left(\pi x_{2}\right) \exp(x_{1}),\]
and the range of the parameter is set to \(\boldsymbol{\mu}\in[0,\boldsymbol{\mu}_{max}]\). The upper bound \(\boldsymbol{\mu}_{max}=0.128\) ensures that for any \(\boldsymbol{\mu}\geq\boldsymbol{\mu}_{max}\) the optimal control \(u^{*}(\boldsymbol{\mu})\) is identically zero. We compute the generalized derivative
\[\mathrm{d}_{u}J(y(\boldsymbol{\mu}),u(\boldsymbol{\mu});\boldsymbol{\mu})= \alpha u(\boldsymbol{\mu})+p(\boldsymbol{\mu})+\boldsymbol{\mu}\text{ sign}(u( \boldsymbol{\mu})). \tag{14}\]
where \(p\) is the solution of the adjoint equation as defined in (11), and sign is an element-wise operator that extracts the sign of a function.
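A direct transcription of (14) and the corresponding projected step, as a short PyTorch sketch (the function names are ours):

```python
import torch

def du_J_sparse(u, p, mu, alpha=0.002):
    # Generalized derivative (14): alpha * u + p + mu * sign(u).
    return alpha * u + p + mu * torch.sign(u)

def u_step_sparse(u, p, mu, c, u_a=-12.0, u_b=12.0):
    # Projected-gradient target used when refining the control network.
    return torch.clamp(u - c * du_J_sparse(u, p, mu), u_a, u_b)
```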
Since the optimal control function varies for different \(\boldsymbol{\mu}\), one in general has to solve a series of sparse optimal control problems. For example, Eduardo Casas [4] calculated the optimal solutions for \(\boldsymbol{\mu}=2^{i}\times 10^{-3},i=0,1,\ldots,8\). Here, we use AONN to compute the optimal solutions for any \(\boldsymbol{\mu}\in[0,0.128]\) all at once. The length factor function for the Dirichlet boundary condition is chosen as \(\ell(\mathbf{x})=1-x_{0}^{2}-x_{1}^{2}\). The neural networks \(\hat{y}_{I}\left(\mathbf{x}(\boldsymbol{\mu});\boldsymbol{\theta}_{y_{I}}\right),\hat{p}_{I}\left(\mathbf{x}(\boldsymbol{\mu});\boldsymbol{\theta}_{p_{I}}\right)\) and \(\hat{u}\left(\mathbf{x}(\boldsymbol{\mu});\boldsymbol{\theta}_{u}\right)\) are trained by AONN with the same configurations as those in the previous test, except that the input dimension is \(3\), resulting in \(3\times 3376=10128\) trainable parameters. To this end, we sample \(N=20000\) points in the spatio-parametric space \(\Omega_{\mathcal{P}}=B(0,1)\times[0,\boldsymbol{\mu}_{max}]\) from a uniform distribution. In order to capture the information at the boundary of \(\boldsymbol{\mu}\), \(2000\) of these \(20000\) points are sampled in \(B(0,1)\times\{0\}\) and \(B(0,1)\times\{\boldsymbol{\mu}_{max}\}\). We take \(500\) iteration steps and gradually increase the training epochs with \(n_{\text{aug}}=100\) after every \(100\) iterations. As a result, the training epochs for the state function and the adjoint function increase from \(200\) to \(600\) during training. The step size \(c^{k}\) starts with \(c^{0}=10\) and decreases by a factor \(\gamma=0.985\) after every iteration.
The optimal controls for some representative \(\boldsymbol{\mu}\in[0,\boldsymbol{\mu}_{max}]\) computed by AONN are displayed in Figure 10. The AONN results are consistent with the results presented in ref.[4], where the sparsity of the optimal control increases as \(\boldsymbol{\mu}\) increases. As shown in Figure 10, the initial optimal control for \(\boldsymbol{\mu}=0\) has eight peaks, and each peak disappears as \(\boldsymbol{\mu}\) increases. To determine where it is most efficient to put the control device, one might require some manual tuning of \(\boldsymbol{\mu}\) and thus need to solve the OCP many times for different \(\boldsymbol{\mu}\). Determining these optimal locations is easy once we have obtained the parametric solutions \(u^{*}(x,y,\boldsymbol{\mu})\), which is exactly what AONN provides. The coordinates of the eight peaks are obtained by evaluating the last vanishing positions of \(u^{*}(x,y,\boldsymbol{\mu})\) as \(\boldsymbol{\mu}\) increases on a uniform \(100^{3}\) grid on \([-1,1]\times[-1,1]\times[0,\boldsymbol{\mu}_{max}]\). Figure 10 shows the variation of the control values at the eight peaks as functions of \(\boldsymbol{\mu}\). We observe that the control function values at points \(P_{1},P_{2},P_{3}\) start at \(12\), begin to decrease after \(\boldsymbol{\mu}\) reaches a certain value, and finally drop to zero. The value at \(P_{4}\) decreases from a number less than \(12\) until it reaches zero. The behavior of points \(P_{5}\sim P_{8}\) is completely symmetric.
To conclude, with these five numerical tests, we examine the efficiency of AONN and compare its performance with PINN, PINN+Projection, and a traditional solver. The numerical results indicate that the proposed AONN method is more advantageous than the PINN+Projection method and the PINN method in solving parametric optimal control problems. The PINN method cannot obtain accurate solutions for complex constrained problems, and the PINN+Projection method improves the accuracy of the PINN method in general but has limitations on nonsmooth problems such as sparse optimal control problems, while the AONN method is a general framework that performs better on different types of parametric optimal control problems.
## 6 Conclusions
We have developed AONN, an adjoint-oriented neural network method, for computing all-at-once solutions to parametric optimal control problems. That is, the optimal control solutions for arbitrary parameters can be obtained by solving only once. The key idea of AONN is to employ three neural networks to approximate the control function, the adjoint function, and the state function in the optimality conditions, which
allows this method to integrate the idea of the direct-adjoint looping (DAL) approach in neural network approximation. In this way, three parametric surrogate models using neural networks provide all-at-once representations of optimal solutions, which avoids mesh generation for both spatial and parametric spaces and thus can be generalized to high-dimensional problems. With the integration of DAL, AONN also avoids the penalty-based loss function of the complex Karush-Kuhn-Tucker (KKT) system, thereby reducing the training difficulty of neural networks and improving the accuracy of solutions. Numerical experiments have shown that AONN can solve parametric optimal control problems all at once with high accuracy in several application scenarios, including control parameters, physical parameters, model parameters, and geometrical parameters.
Many questions remain open; e.g., the choices of the step size and the scaling factor are heuristic, and solving some complex problems requires a higher computational cost. Future work could include the analysis of the convergence rate to better understand the properties of AONN, the introduction of adaptive sampling strategies
Figure 10: Test 5: the AONN solution \(u(\boldsymbol{\mu})\) at the eight fixed peaks \(P_{1}\sim P_{8}\) as functions of \(\boldsymbol{\mu}\). The legend on the right gives the coordinates of the eight points.
to further improve both robustness and effectiveness, and the generalization and application of AONN to more challenging problems such as shape or topology optimizations.
## Acknowledgments
This study was funded in part by National Natural Science Foundation of China (#12131002) and China Postdoctoral Science Foundation (2022M711730).
|
2306.10185 | Spatial-SpinDrop: Spatial Dropout-based Binary Bayesian Neural Network
with Spintronics Implementation | Recently, machine learning systems have gained prominence in real-time,
critical decision-making domains, such as autonomous driving and industrial
automation. Their implementations should avoid overconfident predictions
through uncertainty estimation. Bayesian Neural Networks (BayNNs) are
principled methods for estimating predictive uncertainty. However, their
computational costs and power consumption hinder their widespread deployment in
edge AI. Utilizing Dropout as an approximation of the posterior distribution,
binarizing the parameters of BayNNs, and implementing them in
spintronics-based computation-in-memory (CiM) hardware arrays can be a
viable solution. However, designing hardware Dropout modules for convolutional
neural network (CNN) topologies is challenging and expensive, as they may
require numerous Dropout modules and need to use spatial information to drop
certain elements. In this paper, we introduce MC-SpatialDropout, a
spatial-dropout-based approximate BayNN with emerging spintronic devices. Our method
utilizes the inherent stochasticity of spintronic devices for efficient
implementation of the spatial dropout module compared to existing
implementations. Furthermore, the number of dropout modules per network layer
is reduced by a factor of $9\times$ and energy consumption by a factor of
$94.11\times$, while still achieving comparable predictive performance and
uncertainty estimates compared to related works. | Soyed Tuhin Ahmed, Kamal Danouchi, Michael Hefenbrock, Guillaume Prenat, Lorena Anghel, Mehdi B. Tahoori | 2023-06-16T21:38:13Z | http://arxiv.org/abs/2306.10185v1 | Spatial-SpinDrop: Spatial Dropout-based Binary Bayesian Neural Network with Spintronics Implementation
###### Abstract
Recently, machine learning systems have gained prominence in real-time, critical decision-making domains, such as autonomous driving and industrial automation. Their implementations should avoid overconfident predictions through uncertainty estimation. Bayesian Neural Networks (BayNNs) are principled methods for estimating predictive uncertainty. However, their computational costs and power consumption hinder their widespread deployment in edge AI. Utilizing Dropout as an approximation of the posterior distribution, binarizing the parameters of BayNNs, and implementing them in spintronics-based computation-in-memory (CiM) hardware arrays can be a viable solution. However, designing hardware Dropout modules for convolutional neural network (CNN) topologies is challenging and expensive, as they may require numerous Dropout modules and need to use spatial information to drop certain elements. In this paper, we introduce MC-SpatialDropout, a spatial-dropout-based approximate BayNN with emerging spintronic devices. Our method utilizes the inherent stochasticity of spintronic devices for an efficient implementation of the spatial dropout module compared to existing implementations. Furthermore, the number of dropout modules per network layer is reduced by a factor of \(9\times\) and energy consumption by a factor of \(94.11\times\), while still achieving comparable predictive performance and uncertainty estimates compared to related works.
MC-Dropout, Spatial Dropout, Bayesian neural network, Uncertainty estimation, Spintronic
## I Introduction
Neural networks are brain-inspired computational methods that, in some cases, can even outperform human counterparts [1]. Consequently, applications of NNs have increased rapidly in recent years and have become the cornerstone of modern computing paradigms. Furthermore, NNs are commonly deployed in real-time safety-critical tasks such as computer-aided medical diagnostics, industrial robotics, and autonomous vehicles.
Conventional (point estimate) Neural Networks (NNs) typically learn a single point value for each parameter. However, they account for neither the uncertainty in the data nor the uncertainty in the model, leading to overconfident predictions and, in turn, to safety violations. This is particularly true when the data generation process is noisy or the training data is either incomplete or insufficient to capture the complexity of the actual phenomenon being modelled. In safety-critical domains where machine learning systems make human-centered decisions, an uncertainty measure is essential for informed decision-making.
On the other hand, Bayesian Neural Networks (BayNNs), which put prior distributions over the model parameters and learn the posterior distribution using approximation techniques (e.g., Monte Carlo (MC)-Dropout [2]), present a systematic method for training uncertainty-aware neural networks. However, the computational costs and high-performance requirements of BayNNs can be prohibitive for edge devices.
Therefore, dedicated NN hardware accelerators such as Compute-in-Memory (CiM) architectures with emerging Non-Volatile resistive Memories (NVMs) have been explored. CiM architectures enable the Matrix-Vector Multiplication (MVM) operation of NNs to be carried out directly inside the memory, overcoming the memory limitations of traditional von-Neumann architectures. Among the NVM technologies, Spin-Transfer-Torque Magnetic Random Access Memory (STT-MRAM) is particularly appealing due to its nanosecond latency, high endurance (\(10^{12}\) cycles), and low switching energy (\(10\) fJ) [3].
Additionally, algorithmic approaches such as binarization, which typically reduces the bit precision of NNs to \(1\)-bit, lead to smaller computation time and model size. Therefore, they are an attractive option for BayNNs to mitigate their inherent costs. Moreover, this approach allows for the direct mapping of BayNN parameters to STT-MRAM-based CiM hardware.
Existing work [4, 5] proposed to binarize the parameters of BayNNs and implement them on STT-MRAM-based CiM hardware, resulting in a highly efficient solution. Although this approach can achieve high algorithmic performance and hardware efficiency compared to existing works, designing Dropout modules for convolutional NN (CNN) topologies is challenging and expensive due to the nature of the implementation.
In this paper, we present an algorithm-hardware co-design approach that not only solves the challenges of implementing the Dropout-based BayNNs approach, but also reduces the number of Dropout modules required per layer. The main contributions of this paper are as follows:
* We propose _MC-SpatialDropout_, which uses spatial Dropout for Bayesian approximation. Our method is mathematically equivalent to the MC-Dropout-based approach, enabling uncertainty-aware predictions.
* We present an STT-MRAM-based CiM architecture for the proposed _MC-SpatialDropout_-based BayNNs. Our approach leverages the inherent stochasticity of STT-MRAM for the Dropout module and deterministic behavior for parameter storage. This allows the reuse of the array designed for conventional binary NNs (BNNs), and only the peripheral circuitry is adapted for Bayesian inference.
* We also propose a reliable and adaptable sensing scheme for stochastic STT-MRAM, specifically designed to implement the dropout concept for both linear and convolutional layers.
Our method targets CNN topologies and reduces the number of Dropout modules in a layer by \(9\times\) and the energy consumption by \(94.11\times\), while maintaining comparable predictive performance and uncertainty estimates.
The remainder of this paper is organized as follows: Section II provides the background for our work, Section III describes the proposed MC-SpatialDropout, Section IV presents both the algorithmic and hardware results for our approach and finally, in Section V, we conclude the paper.
## II Background
### _Spintronics_
MRAM devices have gained significant attention due to their fast switching, high endurance, and CMOS compatibility [6]. The main component of an MRAM device is the Magnetic Tunnel Junction (MTJ), which comprises two ferromagnetic layers, the reference layer and the free layer, separated by a thin insulating layer. The magnetization of the reference layer is fixed in one direction, while the free layer can have its magnetization reversed between two stable positions: parallel or antiparallel to that of the reference layer. The resistance of the stack depends on the relative orientations of the layer magnetizations, with a high resistance state in the antiparallel configuration and a low resistance state in the parallel configuration.
### _Uncertainty in Deep Learning_
Uncertainty estimation is vital in deep learning, especially for safety-critical applications, as it provides insight into the model's confidence in its predictions, enhancing the trustworthiness of decision-making. There are two main types of uncertainty: epistemic, which results from the limitations of the model and can be reduced with more data or improved architectures, and aleatoric, which arises from noise in the data and cannot be mitigated. Obtaining uncertainty estimates bolsters robustness by identifying out-of-distribution (OOD) data points and avoiding overconfident predictions. OOD data refers to data whose distribution is completely different from the training (in-distribution (ID)) data. In this paper, we focus on aleatoric uncertainty estimation and evaluate the effectiveness of our method for OOD detection.
### _Bayesian NNs_
BayNNs offer a principled approach to uncertainty estimation in neural networks. Several approximation methods exist for BayNNs, such as variational inference and Markov Chain Monte Carlo methods.
One popular approximation technique is Monte Carlo Dropout (MC-Dropout), which leverages dropout for Bayesian inference. Dropout [7] is a common regularization technique used to reduce overfitting and neuron co-adaptation by randomly setting neuron outputs to zero during training. The dropout operation can be described as \(\hat{\mathbf{Z}}=\mathbf{M}\odot\mathbf{Z}\), where \(\mathbf{M}\) is a binary mask generated by sampling from a Bernoulli distribution, \(\odot\) represents element-wise multiplication, and \(\mathbf{Z}\) and \(\hat{\mathbf{Z}}\) are the intermediate activation and the dropped-out intermediate activation of a layer, respectively.
MC-Dropout provides an approximation of the true posterior distribution with relatively low computational and memory overhead compared to other methods such as variational inference (VI) [8] and the ensemble approach [9]. This is because the ensemble approach requires inference in multiple NNs, and VI requires learning the parameters of the variational distribution, which require storage. Since the MC-Dropout method has the same number of parameters as conventional NNs, it leads to minimal additional computation and memory requirements, making it suitable for a wide range of applications, including those with limited resources.
The optimization objective for MC-Dropout can be represented as
\[\mathcal{L}(\boldsymbol{\theta})_{\text{MC-Dropout}}=\mathcal{L}(\boldsymbol{ \theta},\mathcal{D})+\lambda\sum_{l=1}^{L}||\boldsymbol{\theta}_{l}||_{2}^{2} \tag{1}\]
where \(\mathcal{L}(\boldsymbol{\theta},\mathcal{D})\) represents the task-specific loss function, such as categorical cross-entropy for classification or mean squared error for regression, and \(||\boldsymbol{\theta}_{l}||_{2}^{2}\) is the regularization term. Also, \(\boldsymbol{\theta}\) summarizes all learnable parameters, i.e., \(\boldsymbol{\theta}=\{\mathbf{W}_{l},\mathbf{b}_{l}\mid l=1,\cdots,L\}\), where \(\mathbf{W}_{l}\) denote the weight matrices and \(\mathbf{b}_{l}\) the biases of layer \(l\). During inference, dropout is applied multiple times, and the outputs are averaged to obtain the predictive distribution. Hence, the posterior predictive distribution over the output \(\mathbf{y}\), i.e.,
\[p(\mathbf{y}|\mathbf{x},\mathcal{D})=\int p(\mathbf{y}|\mathbf{x},\boldsymbol{ \theta})p(\boldsymbol{\theta}|\mathcal{D})d\boldsymbol{\theta} \tag{2}\]
is approximated by
\[p(\mathbf{y}|\mathbf{x},\mathcal{D})\approx\frac{1}{T}\sum_{t=1}^{T}p(\mathbf{ y}|\mathbf{x},\boldsymbol{\theta},\mathbf{M}_{t})\quad\text{with}\quad\mathbf{M}_{t} \sim\mathcal{B}(\rho). \tag{3}\]
Here, \(\mathcal{D}\) denotes the dataset, \(\mathbf{x}\) is the input, \(\mathbf{y}\) is the output, and the entries of \(\mathbf{M}_{t}\) are independently sampled from a Bernoulli distribution with (dropout) probability \(\rho\).
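To make the Monte Carlo approximation of Eq. (3) concrete, the following is a minimal PyTorch sketch (our illustration, not the authors' code); `model` is assumed to contain Dropout layers, and only those are kept active at inference time so that each forward pass samples a fresh mask \(\mathbf{M}_{t}\).

```python
import torch
import torch.nn as nn

def mc_dropout_predict(model: nn.Module, x: torch.Tensor, T: int = 20):
    """Approximate Eq. (3): average T stochastic forward passes."""
    model.eval()
    # Re-enable only the Dropout layers so BatchNorm statistics stay frozen.
    for m in model.modules():
        if isinstance(m, (nn.Dropout, nn.Dropout2d)):
            m.train()
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=-1) for _ in range(T)])
    return probs.mean(dim=0), probs  # predictive mean and per-run outputs
```

The per-run outputs `probs` can be reused later for uncertainty-based OOD detection.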
### _Mapping of Convolutional Layers to CiM Architecture_
To perform the computation inside the CiM architecture, a critical step is the mapping of the different layers of the NN to crossbar arrays. Standard NNs mainly contain Fully Connected (FC) layers and convolutional layers. While the mapping of FC layers to a crossbar array is straightforward, as their weight matrices are 2D (\(\mathbb{R}^{m\times n}\)), mapping convolutional layers is challenging due to their 4D shapes (\(\mathbb{R}^{K\times K\times C_{in}\times C_{out}}\)). Here, \(K\) denotes the kernel size, \(C_{in}\) the number of input channels, and \(C_{out}\) the number of output channels. Implementing convolutional layers requires implementing multiple kernels with different shapes and sizes.
There exist two popular strategies for mapping convolutional layers. In mapping strategy 1, each kernel of shape \(K\times K\times C_{in}\) is unrolled into a column of the crossbar [10]. In mapping strategy 2, each kernel is mapped to \(K\times K\) smaller crossbars, each of shape \(C_{in}\times C_{out}\) [11].
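The two mapping strategies can be illustrated with a small NumPy sketch; the shapes below are hypothetical and chosen only for illustration.

```python
import numpy as np

K, C_in, C_out = 3, 4, 8
W = np.random.randn(K, K, C_in, C_out)  # 4D convolutional weight tensor

# Strategy 1: unroll each K*K*C_in kernel into one column of a single crossbar.
xbar_s1 = W.reshape(K * K * C_in, C_out)                  # shape (36, 8)

# Strategy 2: one (C_in x C_out) crossbar per kernel position, K*K crossbars total.
xbars_s2 = [W[i, j] for i in range(K) for j in range(K)]  # 9 arrays of shape (4, 8)
```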
## III Proposed Method
### _Problem Statement and Motivation_
The convolution operation is performed differently in CiM architectures compared to GPUs. In CiM architectures, moving windows (MWs) with a shape of \(K\times K\) are applied to each input feature map (IFM) in one cycle (see Fig. 1(a)). In the next cycle, the MWs "slide over" the IFMs with a topology-defined stride \(S\) for \(N\) cycles. Assuming \(K>S\), some of the elements in the MWs for the next \(K-S\) cycles will be the same as in the previous cycles, a concept known as weight sharing. This is illustrated by the green input feature (IF) in Fig. 1(a).
Fig. 1: a) Input feature map of a convolutional layer, b) moving windows from all the input feature maps are flattened for the conventional mapping (strategy 1), c) moving windows are flattened into \(K\times K\) vectors for mapping strategy 2.
The Dropout module designed in [4, 5] drops each element of the MWs with a probability \(p\) in each cycle. Therefore, it essentially re-samples the dropout mask of each MW of the IFMs in each cycle. Consequently, the dropout masks of the shared elements in the MWs change from one input cycle to the next, leading to inconsistency. An ideal Dropout module should generate dropout masks only for the new elements of the MWs. However, designing a Dropout module that drops each element of the MWs depending on the spatial location of the MWs in the IFMs is challenging and may lead to complex circuit design. Additionally, the number of rows in the crossbars typically increases from one layer to the next due to the larger \(C_{in}\). Consequently, the number of Dropout modules required will be significantly higher.
Furthermore, the MWs are reshaped depending on the weight mapping discussed in Section II-D. For mapping strategy 1, the MWs from the IFMs are flattened into a vector of length \(K\times K\times C_{in}\). However, for mapping strategy 2, the IFMs are flattened into \(K\times K\) vectors of length \(C_{in}\), as depicted in Fig. 1(a) and (b). As a result, designing a generalizable Dropout module is challenging.
### _MC-SpatialDropout as Bayesian Approximation_
In an effort to improve the efficiency and accuracy of Bayesian approximation techniques, we propose the MC-SpatialDropout method. The proposed MC-SpatialDropout technique expands upon the MC-Dropout [2] and MC-SpinDrop [4, 5] methods by utilizing spatial dropout as a Bayesian approximation. Our approach drops an entire feature map with a probability \(p\); that is, all the elements of a feature map in Fig. 1(a) are dropped together, while each feature map is dropped independently of the others. As a result, the number of Dropout modules required for a layer is significantly reduced, and the design effort of the Dropout module is also lessened.
The primary objective of this approach is to address the shortcomings of MC-Dropout arising from its independent treatment of elements of the features. In contrast, MC-SpatialDropout exploits the spatial correlation of IFs, which is particularly advantageous for tasks involving image or spatial data. By doing so, it facilitates a more robust and contextually accurate approximation of the posterior distribution. This enables the model to capture more sophisticated representations and account for dependencies between features.
In terms of the objective function for the MC-SpatialDropout, Soyed et al. [4, 5] showed that minimizing the objective function of MC-Dropout (see Equation (1)) is not beneficial for BNNs and suggested a BNN-specific regularization term. In this paper, instead of defining a separate loss function for MC-SpatialDropout, we define the objective function as:
\[\mathcal{L}(\mathbf{\theta})_{\text{MC-SpatialDropout}}=\mathcal{L}(\mathbf{\theta}, \mathcal{D})+\lambda\sum_{l=1}^{L}||\mathbf{W}_{l}||_{2}^{2}. \tag{4}\]
Therefore, the objective function is equivalent to Equation (1) for MC-Dropout. However, the second part of the objective function is the regularization term applied to the (real valued) "proxy" weights (\(\mathbf{W}_{l}\)) of BNN instead of binary weights. It encourages \(\mathbf{W}_{l}\) to be close to zero. By keeping a small value for the \(\lambda\), it implicitly ensures that the distribution of weights is centered around zero. Also, we normalize the weights by
\[\hat{\mathbf{W}}_{l}=\frac{\mathbf{W}_{l}-\mu_{l}^{\mathbf{W}}}{\sigma_{l}^{\mathbf{W}}}, \tag{5}\]
to ensure that the weight matrix has zero mean and unit variance before binarization, where \(\mu_{l}^{\mathbf{W}}\) and \(\sigma_{l}^{\mathbf{W}}\) are the mean and standard deviation of the weight matrix of layer \(l\). This process allows applying L2 regularization in BNN training, and [12] showed that it improves inference accuracy by reducing the quantization error. Since our work targets BNNs, regularization is only applied to the weight matrices.
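A minimal sketch of Eq. (5) followed by binarization; the `sign`-based binarizer below is a common choice and stands in for the algorithm of [12], which we do not reproduce here.

```python
import torch

def normalize_and_binarize(W: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Eq. (5): standardize the proxy weights of a layer, then binarize."""
    W_hat = (W - W.mean()) / (W.std() + eps)  # zero mean, unit variance
    return torch.sign(W_hat)                  # binary weights in {-1, +1}
```

During training, a straight-through estimator would typically be used to pass gradients through the sign function.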
The difference is that our method approximates Equation (2) by:
\[p(\mathbf{y}|\mathbf{x},\mathcal{D})\approx\frac{1}{T}\sum_{t=1}^{T}p(\mathbf{ y}|\mathbf{x},\mathbf{\theta},\mathbf{\hat{M}}_{t})\quad\text{with}\quad\mathbf{\hat{M}}_{t }\sim\mathcal{B}(\rho). \tag{6}\]
Here, during training and Bayesian inference, the dropout mask \(\hat{\mathbf{M}}_{t}\) is sampled in a spatially correlated manner for the output feature maps (OFMs) of each layer from a Bernoulli distribution with (dropout) probability \(\rho\). The dropout masks determine whether a certain spatial location in the OFMs (i.e., a certain unit) is dropped or not.
For Bayesian inference, we draw \(T\) Monte Carlo samples to approximate the posterior distribution. Each Monte Carlo sample corresponds to a forward pass of the input \(\mathbf{x}\) through the NN with a unique spatial dropout mask \(\hat{\mathbf{M}}_{t}\), \(t=1,\cdots,T\), resulting in a diverse ensemble of networks. By averaging the predictions of the Monte Carlo samples, we effectively perform Bayesian model averaging to obtain the final prediction.
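In frameworks such as PyTorch, spatial dropout corresponds to channel-wise dropout (`nn.Dropout2d`). The sketch below, with hypothetical layer sizes of our choosing, shows the layer-wise placement; the MC loop itself is the same as in `mc_dropout_predict` above.

```python
import torch.nn as nn

class SpatialDropoutConv(nn.Module):
    """Layer-wise MC-SpatialDropout: drop whole feature maps before a conv layer."""
    def __init__(self, c_in: int, c_out: int, rho: float = 0.15):
        super().__init__()
        self.drop = nn.Dropout2d(p=rho)  # one Bernoulli decision per feature map
        self.conv = nn.Conv2d(c_in, c_out, kernel_size=3, padding=1)

    def forward(self, x):
        return self.conv(self.drop(x))
```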
Proper arrangement of layers is important for the MC-SpatialDropout based Bayesian inference. The Spatial Dropout layer can be applied before each convolutional layer in a layerwise MC-SpatialDropout method. Additionally, the SpatialDropout layer can be applied to the extracted features of a CNN topology in a topology-wise MC-SpatialDropout method. Fig. 2 shows the block diagram for both approaches.
### _Designing Spatial-SpinDrop Module_
As mentioned earlier, in the proposed MC-SpatialDropout, feature maps are dropped independently with a probability \(p\). Due to the way inputs are applied in CiM architectures, this implicitly means dropping different regions of the crossbars depending on the mapping strategy, which poses challenges for designing the Dropout module of the proposed MC-SpatialDropout-based BayNN.
For mapping strategy 1, as depicted in Fig. 1(b), each \(K\times K\) subset of the input comes from one feature map. This means that if an input feature map is dropped, the corresponding \(K\times K\) subset of the input should also be dropped for all \(C_{out}\) columns and all \(N\) cycles of inputs. This implies that dropping each group of \(K\times K\) rows of a crossbar together for \(N\) cycles is equivalent to applying spatial dropout. However, each group of rows should be dropped independently of one another. Additionally, their dropout mask should be sampled only in the first cycle; for the remaining \(N-1\) cycles of input, the dropout mask should remain consistent.
In contrast, in mapping strategy 2 (see Fig. 1(c)), the elements of an MW are applied in parallel to each \(K\times K\) crossbar at the same index. As a result, dropping an IF leads to dropping the row at the same index in all of the \(K\times K\) crossbars together. Similarly, each row of a crossbar is dropped independently of one another, and the dropout mask is sampled at the first input cycle and remains consistent for the remaining \(N-1\) cycles of input.
Fig. 2: Block diagram of the location of the proposed MC-SpatialDropout in a) a layer-wise fashion, b) a topology-specific fashion.
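Functionally, the mask-holding behavior for mapping strategy 1 can be simulated as below; this is a behavioral sketch of the hardware, not a circuit model, and it assumes crossbar rows are ordered feature-map-major as in Fig. 1(b).

```python
import numpy as np

def strategy1_row_masks(c_in: int, k: int, n_cycles: int, rho: float = 0.15,
                        seed: int = 0) -> np.ndarray:
    """Return an (n_cycles, k*k*c_in) keep-mask for the crossbar rows."""
    rng = np.random.default_rng(seed)
    fm_keep = rng.random(c_in) >= rho        # one decision per input feature map
    row_keep = np.repeat(fm_keep, k * k)     # expand to the K*K rows of each map
    return np.tile(row_keep, (n_cycles, 1))  # held constant over all N cycles
```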
Furthermore, if spatial dropout is applied to the extracted feature maps of a CNN (see Fig. 2), the design of the Spatial-SpinDrop module differs depending on whether an adaptive average pooling layer is used. If a CNN topology does not use an adaptive average pooling layer, then \(H\times W\) groups of rows are dropped together. This is because the flattening operation essentially flattens each IF into a vector, and these vectors are combined into a larger vector representing the input of the classifier layer. However, since the input of the FC layer is applied in one cycle only, there is no need to hold the dropout mask, and the Spatial-SpinDrop module for mapping strategy 1 can be adjusted for this condition.
Lastly, if a CNN topology does use an adaptive average pooling layer, then the SpinDrop module proposed in [4, 5] can be used. This is because the adaptive average pooling layer averages each IF to a single point, giving a vector with \(C_{out}\) elements in total.
Therefore, the Dropout module for the proposed MC-SpatialDropout should be able to work in four different configurations. Consequently, we propose a novel spintronic-based spatial Dropout design, called _Spatial-SpinDrop_.
The Spatial-SpinDrop module leverages the stochastic behavior of the MTJ for spatial dropout. The proposed scheme is depicted in Fig. 3. To generate a stochastic bitstream using the MTJ, the first step involves a writing scheme that enables a bidirectional current through the device. This writing circuit consists of four transistors, allocated to "SET" and "RESET" modules. The "SET" operation facilitates the stochastic writing of the MTJ, with a probability corresponding to the required dropout probability. The "RESET" operation, on the other hand, restores the MTJ to its original state. During the reading operation of the MTJ, the resistance of the device is compared to a reference element to determine its state. The reference resistance value is chosen such that it falls between the parallel and anti-parallel resistances of the MTJ.
For the reading phase, a two-stage architecture is employed for better flexibility and better control of the reading phase for the different configurations discussed earlier. The module operates as follows: after a writing step in the MTJ, the signal \(V_{pol}\) allows a small current to flow through the MTJ and the reference cell (_REF_), if and only if the signal \(hold\) is activated. Thus, the difference in resistance is translated into a difference in voltages (\(V_{MTJ}\) and \(V_{ref}\)). The second stage of the amplifier utilizes a StrongARM latch structure [13] to provide a digital representation of the MTJ state. The _Ctrl_ signal works in two phases. When _Ctrl = 0_, \(\overline{Out}\) and \(Out\) are precharged at _VDD_. Later, when _Ctrl = 1_, the discharge begins, resulting in a differential current proportional to the gate voltages (\(V_{MTJ}\) and \(V_{ref}\)). The latch converts the difference of voltage into two opposite logic states in \(\overline{Out}\) and \(Out\). Once the information from the MTJ is captured and available at the output, the signal \(hold\) is deactivated to anticipate the next writing operation. To enable the dropout, a series of AND gates and transmission gates are added, allowing either access to the classical decoder or to the stochastic word-line (WL).
As long as the \(hold\) signal is deactivated, no further reading operation is permitted. Such a mechanism allows the structure to maintain the same dropout configuration for a given time and is used during the \(N-1\) cycles of inputs to allow the dropping of the IF in strategies 1 and 2. In the first strategy, the AND gate receives as input \(K\times K\) WLs from the same decoder (see Fig. 4(a)), while in strategy 2, the AND gate receives one row per decoder, as presented in Fig. 4(b).
For the last two configurations, the \(hold\) signal is activated for each reading operation, eliminating the need to maintain the dropout mask for \(N-1\) cycles.
### _MC-SpatialDropout-Based Bayesian Inference in CiM_
The proposed MC-SpatialDropout-based Bayesian inference can be leveraged on the two mapping strategies discussed in Section II-D. In both strategies, one or more crossbar arrays with an MTJ at each crosspoint are employed to encode the binary weights into the resistive states of the MTJs.
Fig. 4: Crossbar design for the MC-SpatialDropout based on mapping strategies (a) 1 and (b) 2. In (b), only the Dropout module and WL decoder are shown; everything else is abstracted.
Fig. 3: (a) Writing and (b) reading schemes for the MTJ.
Specifically, for mapping strategy 1, we divide the WLs of the crossbar into \(K\times K\) groups and connect one Dropout module to each group, as shown in Fig. 4(a). As shown in Fig. 3(b), this involves connecting \(K\times K\) WLs to an AND gate, which receives the signal delivered by the decoder as its input. This configuration allows for the selective activation or deactivation of a group of WLs. To facilitate the activation of multiple consecutive addresses in the array, an adapted WL decoder is utilized. The bit-line and source-line drivers manage the analog input and output of the MVM operation. Also, a group-wise selection of WLs is performed concurrently, and the intermediate results of the MVM operation are accumulated in an accumulator block until all the WLs of a layer have been selected. We utilize MUXes to select the different bit-lines that are sensed and converted by the ADC. Shift-adder modules shift and accumulate the partial sums coming from the array. Finally, a digital comparator and an averaging block implement the activation function. For the last layer, the average operation is performed with the averaging block.
For mapping strategy 2, a similar architecture to strategy 1 is employed. The key distinction lies in the utilization of \(K\times K\) crossbars in parallel to map the binary weights of a layer. Also, the Dropout modules are connected to the same WL index in each of the crossbar arrays, as shown in Fig. 4(b). Here, the same AND gate in the Dropout module receives signals from different decoders, and the result is sent to the corresponding row of each of the \(K\times K\) crossbars. For instance, the first WL of each crossbar of a layer connects to the same Dropout module. All the WL decoders are connected to a dropout block (shown in gray in Fig. 4(b)) comprising \(C_{in}\) Dropout modules. It is worth mentioning that dropout is used during the reading phase only; therefore, the Dropout module is deactivated during the writing operation and the WL decoders are used normally.
## IV Results
### _Simulation Setup_
We evaluated the predictive performance of the proposed MC-SpatialDropout using VGG, ResNet-18, and ResNet-20 topologies on the CIFAR-10 dataset. All models were trained with the SGD optimization algorithm, minimizing the proposed learning objective (4) with \(\lambda\) chosen between \(1\times 10^{-5}\) and \(1\times 10^{-7}\), and using the binarization algorithm from [12]. Also, all models were trained with a dropout probability of \(\rho=15\%\). The validation dataset of CIFAR-10 is split 80:20, with 20% of the data used for cross-validation and 80% used for evaluation.
To assess the effectiveness of our method in handling uncertainty, we generated six additional OOD datasets: 1) Gaussian noise (\(\hat{\mathcal{D}}_{1}\)): each pixel of the image is generated by sampling random noise from a unit Gaussian distribution, \(\mathbf{x}\sim\mathcal{N}(0,1)\); 2) uniform noise (\(\hat{\mathcal{D}}_{2}\)): each pixel of the image is generated by sampling random noise from a uniform distribution, \(\mathbf{x}\sim\mathcal{U}(0,1)\); 3) CIFAR-10 with Gaussian noise (\(\hat{\mathcal{D}}_{3}\)): each pixel of the CIFAR-10 images is corrupted with Gaussian noise; 4) CIFAR-10 with uniform noise (\(\hat{\mathcal{D}}_{4}\)): each pixel of the CIFAR-10 images is corrupted with uniform noise; 5) SVHN (\(\hat{\mathcal{D}}_{5}\)): the Google Street View House Numbers dataset; and 6) STL10 (\(\hat{\mathcal{D}}_{6}\)): a dataset containing images from the popular ImageNet dataset. Each of these OOD datasets contains \(8000\) images, and the images have the same dimensions as the original CIFAR-10 dataset (\(32\times 32\) pixels). During the evaluation phase, an input is classified as OOD or ID as follows:
\[\begin{cases}\text{OOD},&\text{if }\max\left(\mathcal{Q}\left(\frac{1}{T}\sum_{t =1}^{T}\mathbf{y}_{t}\right)\right)<0.9\\ \text{ID},&\text{otherwise}.\end{cases} \tag{7}\]
Here, \(\mathbf{y}_{t}\) is the softmax output of the stochastic forward pass at MC run \(t\) out of \(T\) MC runs, the function \(\mathcal{Q}(\cdot)\) calculates the 10th percentile across a set of values, and the function \(\max(\cdot)\) determines the maximum confidence score across output classes. Overall, an input is flagged as OOD when the maximum value from the 10th percentile of the averaged outputs is less than 0.9, and as ID otherwise. The intuition behind our OOD detection is that the majority of the confidence scores of the \(T\) MC runs are expected to be high and close to one another (low variance) for ID data, and vice versa for OOD data.
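Eq. (7) composes a percentile and a maximum; one plausible reading consistent with the stated intuition is sketched below (our interpretation, not the authors' exact code), operating on the per-run maximum confidences.

```python
import numpy as np

def is_ood(probs: np.ndarray, tau: float = 0.9, q: float = 10.0) -> bool:
    """probs: (T, num_classes) softmax outputs of T stochastic forward passes."""
    conf = probs.max(axis=1)              # maximum confidence of each MC run
    return np.percentile(conf, q) < tau   # OOD if the 10th percentile stays low
```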
The hardware-level simulations for the proposed method were conducted on the Cadence Virtuoso simulator with 28nm-FDSOI STMicroelectronics technology library for the respective network topologies and dataset configurations.
### _Predictive Performance and Uncertainty Estimation_
The predictive performance of our approach is close to that of existing conventional BNNs, as shown in Table I. Furthermore, in comparison to the Bayesian approaches of [4, 5], our proposed approach is within \(1\%\) accuracy. Applying Spatial-SpinDrop before the convolutional layers or at the extracted feature maps also achieves comparable performance (within \(\sim 0.2\%\)); see Fig. 2. This demonstrates the capability of the proposed approach to achieve high predictive performance. However, note that applying Spatial-SpinDrop before all the convolutional layers can reduce the performance drastically; e.g., the accuracy drops to \(75\%\) on VGG. This is because shallower layers have comparatively few OFMs, leading to a high chance that most of the OFMs are omitted (dropped). Also, as shown in [4, 5], BNNs are more sensitive to the dropout rate. Therefore, a lower dropout probability between \(10\%\) and \(20\%\) is suggested.
In terms of OOD detection, our proposed method achieves up to a 100% OOD detection rate across various model architectures and six different OOD datasets (\(\hat{\mathcal{D}}_{1}\) through \(\hat{\mathcal{D}}_{6}\)), as depicted in Table II. There are some variations across architectures and OOD datasets. However, even in these cases, our method consistently achieves a high OOD detection rate, with the lowest detection rate being \(64.39\%\) on the ResNet-18 model with the \(\hat{\mathcal{D}}_{4}\) dataset and Spatial-SpinDrop applied to the extracted feature maps. When Spatial-SpinDrop is instead applied to the convolutional layers of the last residual block, the OOD detection rate on the \(\hat{\mathcal{D}}_{4}\) dataset improves to \(97.39\%\), a gain of \(33.00\) percentage points. Therefore, we suggest applying Spatial-SpinDrop to the last convolutional layers to achieve a higher OOD detection rate at the cost of a small accuracy reduction. Consequently, the results suggest
that the MC-SpatialDropout method is a robust and reliable approach to OOD detection across various model architectures and datasets.
### _Overhead Analysis_
The proposed Spatial-SpinDrop modules were evaluated in terms of area, power consumption, and latency, as shown in Table III, and compared with the SpinDrop approach presented in [4, 5]. These evaluations were conducted using a crossbar array with dimensions of \(64\times 32\) and scaled for the VGG topology. For the layer-wise application of spatial dropout, the Dropout modules are applied to the convolutional layers of the last VGG block; for the topology-wise application, they are applied to the extracted feature maps. In our evaluation, a configuration of \(C_{in}=256\), \(K=3\), and \(C_{out}=512\) is used.
First, in terms of area, the SpinDrop method requires one Dropout module per row of the crossbar structure, while our method requires only one Dropout module per group of \(K\times K\) rows. Therefore, the area and power consumption of the Dropout modules are reduced by a factor of 9. In terms of latency of the Dropout modules, we achieve \(15\,ns\) in all cases. Indeed, to generate one bit for a given group of rows, the Dropout module needs to be written; however, this latency can be further decreased by increasing the writing voltages of the MTJ. Furthermore, in the case where the adaptive average pooling layer is not used, the power consumption and area of the SpinDrop approach increase greatly (\(\times 9\)), while in the proposed approach, the adaptive average pooling layer does not impact the total energy and area, as mentioned in Section III-C and shown in Table III.
Table IV compares the energy consumption of the proposed approach with state-of-the-art implementations based on the MNIST dataset. For the evaluation, we used NVSim and estimated the total energy of a LeNet-5 architecture to be consistent with the approach presented in [4]. Compared to the SpinDrop approach in [4], our approach is \(2.94\times\) more energy efficient. Furthermore, compared to RRAM technology, our solution is \(13.67\times\) more efficient. Finally, in comparison with a classic FPGA implementation, the proposed approach achieves substantial energy savings of up to \(94.11\times\).
## V Conclusion
In this paper, we present MC-SpatialDropout, an efficient spatial dropout-based approximation for Bayesian neural networks. The proposed method exploits the probabilistic nature of spintronic technology to enable Bayesian inference. Implemented on a spintronic-based Computation-in-Memory fabric with STT-MRAM, MC-SpatialDropout achieves improved computational efficiency and power consumption.
## Acknowledgments
This work was supported by a joint ANR-DFG grant Neuspin Project ANR-21-FAI1-0008.
# When to Pre-Train Graph Neural Networks? From Data Generation Perspective!

Yuxuan Cao, Jiarong Xu, Carl Yang, Jiaan Wang, Yunchao Zhang, Chunping Wang, Lei Chen, Yang Yang

2023-03-29 · http://arxiv.org/abs/2303.16458v4
In recent years, graph pre-training has gained significant attention, focusing on acquiring transferable knowledge from unlabeled graph data to improve downstream performance. Despite these recent endeavors, the problem of negative transfer remains a major concern when utilizing graph pre-trained models to downstream tasks. Previous studies made great efforts on the issue of _what to pre-train_ and _how to pre-train_ by designing a variety of graph pre-training and fine-tuning strategies. However, there are cases where even the most advanced "pre-train and fine-tune" paradigms fail to yield distinct benefits. This paper introduces a generic framework W2PGNN to answer the crucial question of _when to pre-train_ (_i.e._, in what situations could we take advantage of graph pre-training) before performing effortful pre-training or fine-tuning. We start from a new perspective to explore the complex generative mechanisms from the pre-training data to downstream data. In particular, W2PGNN first fits the pre-training data into graphon bases, each element of graphon basis (_i.e._, a graphon) identifies a fundamental transferable pattern shared by a collection of pre-training graphs. All convex combinations of graphon bases give rise to a generator space, from which graphs generated form the solution space for those downstream data that can benefit from pre-training. In this manner, the feasibility of pre-training can be quantified as the generation probability of the downstream data from any generator in the generator space. W2PGNN offers three broad applications: providing the application scope of graph pre-trained models, quantifying the feasibility of pre-training, and assistance in selecting pre-training data to enhance downstream performance. We provide a theoretically sound solution for the first application and extensive empirical justifications for the latter two applications.
graph neural networks, graph pre-training +
Footnote †: dagger}\)This work was done when the first author was a visiting student at Fudan University.Zhejiang, August 6-10, 2023, _Jong Beach, CA_
+
Footnote †: dagger}\)This work was done when the first author was a visiting student at Fudan University.
For example, the semantics of a closed-triangle structure in molecular networks (unstable vs. stable in terms of chemical property) differ from those in social networks (stable vs. unstable in terms of social relationship); such distinct or reversed semantics does not contribute to transferability, and even exacerbates the problem of negative transfer.
To avoid negative transfer, recent efforts focus on _what to pre-train_ and _how to pre-train_, _i.e._, designing/adopting graph pre-training models with a variety of self-supervised tasks to capture different patterns (Zhu et al., 2019; Zhang et al., 2020; Zhang et al., 2021) and fine-tuning strategies to enhance downstream performance (Zhu et al., 2019; Li et al., 2020; Zhang et al., 2021; Zhang et al., 2021). However, there exist cases where, no matter how advanced the pre-training/fine-tuning method is, the transferability from pre-training data to downstream data still cannot be guaranteed. This is because the underlying assumption of deep learning models is that the test data should share a similar distribution with the training data. Therefore, it is a necessity to understand _when to pre-train_, _i.e._, under what situations the "graph pre-train and fine-tune" paradigm should be adopted.
Towards answering when to pre-train GNNs, one straightforward way, illustrated in Figure 1(a), is to train and evaluate all candidate pre-training models and fine-tuning strategies, and then let the resulting best downstream performance tell us whether pre-training is a sensible choice. If there exist \(l_{1}\) pre-training models and \(l_{2}\) fine-tuning strategies, such a process would be very costly, as one must make \(l_{1}\times l_{2}\) "pre-train and fine-tune" attempts. Another approach is to utilize graph metrics to measure the similarity between pre-training and downstream data, _e.g._, density, clustering coefficient, etc. However, it is a daunting task to enumerate all hand-engineered graph features or to find the dominant features that influence similarity. Moreover, graph metrics only measure the pair-wise similarity between two graphs, which cannot be directly and accurately applied to the practical scenario where the pre-training data contains multiple graphs.
In this paper, we propose a W2PGNN framework to answer _when to pre-train GNNs from a graph data generation perspective_. The high-level idea is that instead of performing effortful graph pre-training/fine-tuning or making comparisons between the pre-training and downstream data, we study the complex generative mechanism from the pre-training data to the downstream data (Figure 1(b)). We say that downstream data can benefit from pre-training data (_i.e._, has high feasibility of performing pre-training), if it can be generated with high probability by a graph generator that summarizes the topological characteristic of pre-training data.
The major challenge is how to obtain an appropriate graph generator, hoping that it not only inherits the transferable topological patterns of the pre-training data, but also is endowed with the ability to generate feasible downstream graphs. To tackle the challenge, we propose to design a graph generator based on graphons. We first fit the pre-training graphs into different graphons to construct a _graphon basis_, where each graphon (_i.e._, element of the graphon basis) identifies a collection of graphs that share common transferable patterns. We then define a _graph generator_ as a convex combination of elements in a graphon basis, which serves as a comprehensive and representative summary of pre-training data. All of these possible generators constitute the _generator space_, from which graphs generated form the solution space for the downstream data that can benefit from pre-training.
Accordingly, the feasibility of performing pre-training can be measured as the highest probability of downstream data being generated from any graph generator in the generator space, which can be formulated as an optimization problem. However, this problem is still difficult to solve due to the large search space of graphon basis. We propose to reduce the search space to three candidates of graphon basis, _i.e._, topological graphon basis, domain graphon basis, and integrated graphon basis, to mimic different generation mechanisms from pre-training to downstream data. Built upon the reduced search space, the feasibility can be approximated efficiently.
Our major contributions are summarized as follows:
* **Problem and method.** To the best of our knowledge, we are the first work to study the problem of when to pre-train GNNs. We propose a W2PGNN framework to answer the question from a data generation perspective, which tells us the feasibility of performing graph pre-training before conducting effortful pre-training and fine-tuning.
* **Broad applications.** W2PGNN provides several practical applications: (1) provide the application scope of a graph pre-trained model, (2) measure the feasibility of performing pre-training for a downstream data and (3) choose the pre-training data so as to maximize downstream performance with limited resources.
* **Theory and Experiment.** We theoretically and empirically justify the effectiveness of W2PGNN. Extensive experiments on real-world graph datasets from multiple domains show that the proposed method can provide an accurate estimation of pre-training feasibility and the selected pre-training data can benefit the downstream performance.
## 2. Problem Formulation
In this section, we first formally define the problem of when to pre-train GNNs. Then, we provide a brief theoretical analysis of the transferable patterns in the problem we study, and finally discuss some non-transferable patterns.
Definition 1 (When to pre-train GNNs).: _Given the pre-training graph data \(\mathcal{G}_{train}\) and the downstream graph data \(\mathcal{G}_{down}\), our main goal is to answer to what extent the "pre-train and fine-tune" paradigm can benefit the downstream data._
Figure 1. Comparison of existing methods and proposed W2PGNN to answer _when to pre-train_ GNNs.
Note that in addition to this main problem, our proposed framework can also serve other scenarios, such as providing the application scope of graph pre-trained models, and helping select pre-training data to benefit the downstream (please refer to the _application cases_ in Section 4.1 for details).
Transferable graph patterns. The success of the "pre-train and fine-tune" paradigm is typically attributed to the commonalities between pre-training and downstream data. However, in real-world scenarios, there possibly exists a significant divergence between the pre-training data and the downstream data. To answer the problem of when to pre-train GNNs, the primary task is to define the transferable patterns across graphs.
We here theoretically explore which patterns are transferable between pre-training and downstream data under the performance guarantee of graph pre-training model (with GNN as the backbone).
Theorem 2.1 (Transferability of graph pre-training model).: _Let \(G_{\text{train}}\) and \(G_{\text{down}}\) be two (sub)graphs sampled from \(\mathcal{G}_{\text{train}}\) and \(\mathcal{G}_{\text{down}}\), and assume the attribute of each node as a scalar \(1\) without loss of generality. Given a graph pre-training model \(e\) (instantiated as a GNN) with \(K\) layers and \(1-\)hop graph filter \(\Phi(L)\) (which is a function of the normalized graph Laplacian matrix \(L\)), we have_
\[\left\|e(G_{\text{train}})-e(G_{\text{down}})\right\|_{2}\leq\kappa\,\Lambda_{\text{topo}}\left(G_{\text{train}},G_{\text{down}}\right) \tag{1}\]
_where \(\Lambda_{\text{topo}}\left(G_{\text{train}},G_{\text{down}}\right)=\frac{1}{mn}\sum_{i=1}^{m}\sum_{j=1}^{n}\left\|L_{g_{i}}-L_{g_{j}}\right\|_{2}\) measures the topological divergence between \(G_{\text{train}}\) and \(G_{\text{down}}\); \(g_{i}\) is the \(K\)-hop ego-network of node \(i\) from \(G_{\text{train}}\), \(g_{j}\) is the \(K\)-hop ego-network of node \(j\) from \(G_{\text{down}}\), and \(L_{g_{i}}\) is the corresponding normalized graph Laplacian matrix; \(m\) and \(n\) are the numbers of nodes of \(G_{\text{train}}\) and \(G_{\text{down}}\). \(e(G_{\text{train}})\) and \(e(G_{\text{down}})\) are the output representations of \(G_{\text{train}}\) and \(G_{\text{down}}\) from the graph pre-training model, and \(\kappa\) is a constant relevant to \(K\), the graph filter \(\Phi\), the learnable parameters of the GNN, and the activation function used in the GNN._
Detailed proofs and descriptions can be found in Appendix A.1. Theorem 2.1 suggests that two (sub)graphs sampled from pre-training and downstream data with similar topology are transferable via graph pre-training model (_i.e._, sharing similar representations produced by the model).
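A small sketch of the topological divergence term in Theorem 2.1, using NetworkX; zero-padding the normalized Laplacians to a common size is our own assumption, since the theorem does not specify how ego-networks of different sizes are aligned.

```python
import numpy as np
import networkx as nx

def pad_laplacian(g: nx.Graph, n: int) -> np.ndarray:
    """Normalized Laplacian of g, zero-padded to an (n x n) matrix."""
    L = nx.normalized_laplacian_matrix(g).toarray()
    P = np.zeros((n, n))
    P[:L.shape[0], :L.shape[1]] = L
    return P

def topo_divergence(egos_train, egos_down) -> float:
    """Average pairwise spectral-norm distance between ego-network Laplacians."""
    n = max(len(g) for g in list(egos_train) + list(egos_down))
    Ls = [pad_laplacian(g, n) for g in egos_train]
    Ld = [pad_laplacian(g, n) for g in egos_down]
    return float(np.mean([np.linalg.norm(a - b, 2) for a in Ls for b in Ld]))
```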
Hence we consider the transferable graph pattern as the topology of a (sub)graph, either node-level or graph-level. Specifically, the node-level transferable pattern could be the topology of the ego-network of a node (or the structural role of a node), irrespective of the node's exact location in the graph. The graph-level transferable pattern is the topology of the entire graph itself (_e.g._, molecular network). Such transferable patterns constitute the input space introduced in Section 4.1.
Discussion of non-transferable graph patterns.As a remark, we show that two important pieces of information (_i.e._, attributes and proximity) commonly used in graph learning are not necessarily transferable across pre-training and downstream data in most real-world scenarios, thus we do not discuss them in this paper.
First, although the attributes carry important semantic meaning in one graph, it can be shown that the attribute space of different graphs typically has little or no overlap at all. For example, if the pre-training and downstream data come from different domains, their nodes would indicate different types of entities and the corresponding attributes may be completely irrelevant. Even for graphs from the similar/same domain, the dimensions/meaning of their node attributes can also be totally different and result in misalignment.
The proximity, on the other hand, assumes that closely connected nodes are similar, which also cannot be transferred across graphs. Obviously, this proximity assumption depends on the overlaps in neighborhoods and thus only works on graphs with the same or overlapped node set.
## 3. Preliminary and related works
Graphons. A graphon (short for graph function) is a continuous, bounded, and symmetric function \(W:[0,1]^{2}\rightarrow[0,1]\), which can be viewed as the limit object of a sequence of graphs; given a graphon, graphs of arbitrary sizes can be generated from it.
A straightforward way to measure transferability is to train and evaluate all candidate pre-training models and fine-tuning strategies, and then use the resulting best downstream performance as the transferability measure. However, as depicted in Figure 1(a), such an approach would be very costly. Another way is based on graph properties, which leverages graph properties (_e.g.,_ degree (Bordes and Zisserman, 2017), density (Zisserman and Zisserman, 2017), assortativity (Kang et al., 2018), etc.) to measure the similarities between pre-training and downstream graphs, which can potentially be utilized to approximate transferability. Some other works focus on analyzing the transferability of GNNs theoretically (Kang et al., 2018; Li et al., 2019). Nevertheless, they are limited to measuring the transferability of GNNs on a single graph or when training and testing data are from the same dataset (Kang et al., 2018; Li et al., 2019), which is inapplicable to our setting. A recent work, EGI (Kang et al., 2019), addresses the transferability measure problem of GNNs across graphs. However, EGI is a model-specific measure and depends on its own framework. For the first time, we study the transferability of graph pre-training from the data perspective, without performing any pre-training and fine-tuning.
## 4. Methodology
In this section, we first present our proposed framework W2PGNN to answer when to pre-train GNNs in Section 4.1. Based on the framework, we further introduce the measure of the feasibility of performing pre-training in Section 4.2. Then in Section 4.3, we discuss our approximation to the feasibility of pre-training. Finally, the complexity analysis of W2PGNN is provided in Section 4.4.
### Framework Overview
W2PGNN framework provides a guide for answering _when to pre-train GNNs from a graph data generation perspective_. The key insight is that if downstream data can be generated with high probability by a graph generator that summarizes the pre-training data, the downstream data would present high feasibility of performing pre-training.
The overall framework of W2PGNN can be found in Figure 2. Given the _input space_ consisting of pre-training graphs, we fit them into a graph generator in the _generator space_, from which the graphs generated constitute the _possible downstream space_. More specifically, an ideal graph generator should inherit different kinds of topological patterns, based on which new graphs can be induced. Therefore, we first construct a graphon basis \(\mathcal{B}=\{B_{1},B_{2},\cdots,B_{k}\}\), where each element \(B_{i}\) represents a graphon fitted from a set of (sub)graphs with similar patterns (_i.e.,_ the blue dots in Figure 2). To access different combinations of the generator basis, each \(B_{i}\) is assigned a corresponding weight \(\alpha_{i}\) (_i.e.,_ the width of the blue arrows) and their combination gives rise to a graph generator (_i.e.,_ the blue star). All weighted combinations compose the generator space \(\Omega\) (_i.e.,_ the gray surface), from which the graphs generated form the possible solution space of downstream data (shortened as the possible downstream space). The generated graphs are those that could benefit from the pre-training data; we say that they exhibit _high feasibility_ of performing pre-training.
In the following, we introduce the workflow of W2PGNN in the input space, the generator space and the possible downstream space in detail. Then, the application cases of W2PGNN are given for different practical use.
**Input space.** The input space of W2PGNN is composed of nodes' ego-networks or graphs. For node-level pre-training, we take the nodes' ego-networks to constitute the input space; For graph-level pre-training, we take the graphs (_e.g.,_ small molecular graphs) as input space.
**Generator space.** As illustrated in Figure 2, each point (_i.e.,_ graph generator) in the generator space \(\Omega\) is a convex combination of generator basis \(\mathcal{B}=\{B_{1},B_{2},\cdots,B_{k}\}\). Formally, we define the graph generator as
\[f(\{\alpha_{i}\},\{B_{i}\})=\sum_{i=1}^{k}\alpha_{i}B_{i},\ \ \text{where}\ \sum_{i=1}^{k}\alpha_{i}=1,\alpha_{i}\geq 0. \tag{2}\]
Different choices of \(\{\alpha_{i}\},\{B_{i}\}\) comprise different graph generators. All possible generators constitute the _generator space_\(\Omega=\{f(\{\alpha_{i}\},\{B_{i}\})\mid\forall\ \{\alpha_{i}\},\{B_{i}\}\}\).
We shall also note that the graph generator \(f(\{\alpha_{i}\},\{B_{i}\})\) is indeed a mixed graphon (_i.e.,_ a mixture of \(k\) graphons \(\{B_{1},B_{2},\cdots,B_{k}\}\)), where each element \(B_{i}\) represents a graphon estimated from a set of similar pre-training (sub)graphs. Furthermore, it can be theoretically justified that the mixed version still preserves the properties of graphons (c.f. Theorem 5.1) and the key transferable patterns inherited in \(B_{i}\) (c.f. Theorem 5.2). Thus the graph generator \(f(\{\alpha_{i}\},\{B_{i}\})\), _i.e.,_ the mixed graphon, can be considered a representative and comprehensive summary of the pre-training data, from which unseen graphs with different combinations of transferable patterns can be induced.
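In practice, each graphon \(B_{i}\) is usually estimated as a step function, i.e., a discretized \(r\times r\) matrix; under that common representation (our assumption about the implementation), Eq. (2) is just a convex combination of matrices, as in this sketch.

```python
import numpy as np

def mix_graphons(alphas, basis):
    """Eq. (2): convex combination of graphon basis elements.

    alphas: nonnegative weights summing to one.
    basis:  list of (r x r) step-function matrices with entries in [0, 1].
    """
    alphas = np.asarray(alphas, dtype=float)
    assert np.all(alphas >= 0) and np.isclose(alphas.sum(), 1.0)
    return sum(a * B for a, B in zip(alphas, basis))
```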
**Possible downstream space.** All the graphs produced by the generators in the generator space \(\Omega\) could benefit from the pre-training, and finally form the possible downstream space.
Formally, for each generator in the generator space \(\Omega\) (we denote it as \(f\) for simplicity), we can generate an \(n\)-node graph as follows. First, we independently sample a random latent variable for each node. Then for each pair of nodes, we assign an edge between them with probability equal to the value of the graphon at their randomly sampled points. The graph generation process can be formulated as
\[v_{1},v_{2},\cdots,v_{n}\sim\text{Uniform}([0,1]),\qquad A_{ij}\sim\text{Bernoulli}(f(v_{i},v_{j})),\quad\forall i,j\in\{1,2,\ldots,n\}, \tag{3}\]
where \(f(v_{i},v_{j})\in[0,1]\) indicates the corresponding value of the graphon at point \((v_{i},v_{j})\)1, and \(A_{ij}\in\{0,1\}\) indicates the existence of an edge between the \(i\)-th node and the \(j\)-th node. The adjacency matrix of the sampled graph \(G\) is denoted as \(A=[A_{ij}]\in\{0,1\}^{n\times n},\forall i,j\in[n]\). We summarize this generation process as \(G\gets f\).
Figure 2. Illustration of our proposed framework W2PGNN to answer when to pre-train GNNs.
Footnote 1: For simplicity, we slightly abuse the notation \(f(\cdot,\cdot)\). Note that \(f(\{\alpha_{i}\},\{B_{i}\})\) is a function of \(\{\alpha_{i}\}\) and \(\{B_{i}\}\), representing that the generator depends on \(\{\alpha_{i}\}\) and \(\{B_{i}\}\); while each generator (_i.e.,_ a mixed graphon) given \(\{\alpha_{i}\}\) and \(\{B_{i}\}\) can be represented as a continuous, bounded and symmetric function \(f:[0,1]^{2}\rightarrow[0,1]\), so that \(f(v_{i},v_{j})\) denotes its value at point \((v_{i},v_{j})\).
Therefore, with all generators from the generator space \(\Omega\), the possible downstream space is defined as \(\mathcal{D}=\{G\gets f\mid f\in\Omega\}\). Note that for each choice of \(\{\alpha_{i}\},\{B_{i}\}\) we have a generator, and for each generator we can have different generated graphs. Besides, we theoretically justify that the generated graphs in the possible downstream space inherit the key transferable graph patterns of our generator (c.f. Theorem 5.3).
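The generation process of Eq. (3) can be sketched as follows for a step-function generator `f` (an \(r\times r\) matrix, as above): latent node positions are drawn uniformly and each edge is an independent Bernoulli draw.

```python
import numpy as np

def sample_graph(f: np.ndarray, n: int, seed: int = 0) -> np.ndarray:
    """Sample the adjacency matrix of an n-node graph from generator f (Eq. (3))."""
    rng = np.random.default_rng(seed)
    r = f.shape[0]
    v = rng.uniform(0.0, 1.0, size=n)             # latent variables v_1..v_n
    idx = np.minimum((v * r).astype(int), r - 1)  # map [0,1] onto the r grid cells
    P = f[np.ix_(idx, idx)]                       # edge probabilities f(v_i, v_j)
    A = np.triu(rng.random((n, n)) < P, k=1)      # Bernoulli draws, upper triangle
    return (A | A.T).astype(int)                  # symmetric, no self-loops
```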
**Application cases.** The proposed framework is flexible to be adopted in different application scenarios when discussing the problem of when to pre-train GNNs.
* _Use case 1: provide a user guide of a graph pre-trained model._ The possible downstream space \(\mathcal{D}\) serves as a user guide of a graph pre-trained model, telling the application scope of graph pre-trained models (_i.e._, the possible downstream graphs that can benefit from the pre-training data).
* _Use case 2: estimate the feasibility of performing pre-training from pre-training data to downstream data._ Given a collection of pre-training graphs and a downstream graph, one can directly measure the feasibility of performing pre-training on pre-training data, before conducting costly pre-training and fine-tuning attempts. By making such pre-judgement of a kind of transferability, some unnecessary and expensive parameter optimization steps during model training and evaluation can be avoided.
* _Use case 3: select pre-training data to benefit the downstream_. In some practical scenarios where the downstream data is provided (_e.g._, a company's need is to boost downstream performance of its business data), the feasibility of pre-training inferred by W2PGNN can be utilized to select data for pre-training, such that the downstream performance can be maximized with limited resources.
Use case 1 can be directly given by our produced possible downstream space \(\mathcal{D}\). However, how to measure the feasibility of pre-training in use case 2 and 3 still remains a key challenge. In the following sections, we introduce the formal definition of the feasibility of pre-training and its approximate solution.
### Feasibility of Pre-training
If a downstream graph can be generated with a higher probability from any generator in the generator space \(\Omega\), then the graph could benefit more from the pre-training data. We therefore define the feasibility of performing pre-training as the highest probability of the downstream data generated from a generator in \(\Omega\), which can be formulated as an optimization problem as follows.
Definition 2 (Feasibility of graph pre-training).: _Given the pre-training data \(\mathcal{G}_{train}\) and downstream data \(\mathcal{G}_{down}\), we have the feasibility of performing pre-training on \(\mathcal{G}_{train}\) to benefit \(\mathcal{G}_{down}\) as_
\[\zeta(\mathcal{G}_{train}\rightarrow\mathcal{G}_{down})=\sup_{\{\alpha_{i}\},\{B_{i}\}}\Pr\left(\mathcal{G}_{down}\mid f(\{\alpha_{i}\},\{B_{i}\})\right), \tag{4}\]
_where \(\Pr\left(\mathcal{G}_{down}\mid f(\{\alpha_{i}\},\{B_{i}\})\right)\) denotes the probability of the graph sequence sampled from \(\mathcal{G}_{down}\) being generated by the graph generator \(f(\{\alpha_{i}\},\{B_{i}\})\); each (sub)graph represents an ego-network (for node-level tasks) or a graph (for graph-level tasks) sampled from the downstream data \(\mathcal{G}_{down}\)._
However, the probability \(\Pr\left(\mathcal{G}_{down}\mid f(\{\alpha_{i}\},\{B_{i}\})\right)\) of generating the downstream graph from a generator is extremely hard to compute; we therefore turn to converting the optimization problem (4) into a tractable one. Intuitively, if the generator \(f(\{\alpha_{i}\},\{B_{i}\})\) can generate the downstream data with higher probability, it potentially means that the underlying generative patterns of the pre-training data (characterized by \(f(\{\alpha_{i}\},\{B_{i}\})\)) and the downstream data (characterized by the graphon \(B_{down}\) fitted from \(\mathcal{G}_{down}\)) are more similar. Accordingly, we turn to figuring out the infimum of the distance between \(f(\{\alpha_{i}\},\{B_{i}\})\) and \(B_{down}\) as the feasibility, _i.e.,_
\[\zeta(\mathcal{G}_{train}\rightarrow\mathcal{G}_{down})=-\inf_{\{\alpha_{i}\},\{B_{i}\}}\operatorname{dist}\left(f(\{\alpha_{i}\},\{B_{i}\}),B_{down}\right). \tag{5}\]
Following (Zhou et al., 2018), we adopt the 2-order Gromov-Wasserstein (GW) distance as our distance function \(\operatorname{dist}(\cdot,\cdot)\), as the GW distance is commonly used to measure the difference between structured data.
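As an illustration, the inner distance in Eq. (5) can be computed with the POT library (an assumed dependency) on step-function graphons, and the infimum over mixture weights approximated by a simple grid search; the two-basis grid search below is our own simplification of the optimization, not the paper's exact procedure.

```python
import numpy as np
import ot  # POT: Python Optimal Transport

def gw_dist(W1: np.ndarray, W2: np.ndarray) -> float:
    """2-order Gromov-Wasserstein discrepancy between two step-function graphons."""
    p = np.full(W1.shape[0], 1.0 / W1.shape[0])  # uniform marginals
    q = np.full(W2.shape[0], 1.0 / W2.shape[0])
    return ot.gromov.gromov_wasserstein2(W1, W2, p, q, 'square_loss')

def feasibility_two_bases(B1, B2, B_down, grid=np.linspace(0.0, 1.0, 11)) -> float:
    """Eq. (5) with k = 2: negative minimum distance over the mixture weight."""
    return -min(gw_dist(a * B1 + (1.0 - a) * B2, B_down) for a in grid)
```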
Additionally, we establish a theoretical connection between the above-mentioned distance and the probability of generating the downstream data in extreme case, which further adds to the integrity and rationality of our solution.
Theorem 4.1 ().: _Given the graph sequence sampled from downstream data \(\mathcal{G}_{down}\), we estimate its corresponding graphon as \(\mathcal{B}_{down}\). If a generator \(f\) can generate the downstream graph sequence with probability 1, then \(\operatorname{dist}(f,\mathcal{B}_{down})=0\)._
### Choose Graphon Basis to Approximate Feasibility
Although the feasibility has been converted into the optimization problem (5), exhausting all possible \(\{\alpha_{i}\},\{B_{i}\}\) to find the infimum is impractical. An intuitive idea is to choose some appropriate graphon bases \(\{B_{i}\}\), which can not only prune the search space but also accelerate the optimization process. Therefore, we aim to first reduce the search space of the graphon basis \(\{B_{i}\}\) and then learn the optimal \(\{\alpha_{i}\}\) in the reduced search space.
Considering that the downstream data may be formed via different generation mechanisms (implying various transferable patterns), a single graphon basis might have limited expressivity and completeness to cover all patterns. We therefore argue that a good reduced search space should cover a set of graphon bases. Here, we introduce three candidates as follows.
**Integrated graphon basis.** The first candidate of graphon basis is the integrated graphon basis \(\{B_{i}\}_{\text{integr}}\). This graphon basis is introduced based on the assumption that the pre-training and the downstream graphs share very similar patterns. For example, the pre-training and the downstream graphs might come from social networks of different time spans (Kipip et al., 2018). In this situation, almost all patterns involved in the pre-training data might be useful for the
downstream. To achieve this, we directly utilize all (sub)graphs sampled from the pre-training data to estimate one graphon as the graphon basis. This integrated graphon basis serves as a special case of the graphon basis introduced below.
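A minimal sketch of the "largest gap" step-function estimator used throughout (see Section 4.3) for fitting a basis element from one adjacency matrix; the block count \(k\geq 2\) and the tie handling are our simplifying assumptions. The integrated basis element can then be obtained by averaging the same-resolution estimates from all sampled (sub)graphs.

```python
import numpy as np

def largest_gap_graphon(A, k):
    """Estimate a k-block step-function graphon from an adjacency matrix:
    sort nodes by degree, cut the sorted degree sequence at its k-1 largest
    gaps, then average the adjacency within each block pair."""
    assert k >= 2, "at least two blocks are assumed here"
    n = A.shape[0]
    order = np.argsort(A.sum(axis=1))                 # sort nodes by degree
    A = A[np.ix_(order, order)]
    deg = A.sum(axis=1) / n                           # sorted, normalized degrees
    cuts = np.sort(np.argsort(np.diff(deg))[-(k - 1):] + 1)
    bounds = np.concatenate(([0], cuts, [n]))
    B = np.zeros((k, k))
    for i in range(k):
        for j in range(k):
            block = A[bounds[i]:bounds[i + 1], bounds[j]:bounds[j + 1]]
            B[i, j] = block.mean() if block.size else 0.0  # block edge density
    return B
```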
**Domain graphon basis.** The second candidate of graphon basis is the domain graphon basis \(\{B_{i}\}_{\text{domain}}\). The domain that the pre-training data comes from is important prior knowledge for indicating the transferability from the pre-training to the downstream data. For example, when the downstream data is a molecular network, it is more likely to benefit from pre-training data from specific domains like biochemistry. This is because the specificity of molecules makes it difficult to learn transferable patterns from other domains; _e.g._, the closed triangle structure represents diametrically opposite meanings (stable vs. unstable) in social networks and molecular networks. Therefore, we propose to split the (sub)graphs sampled from the pre-training data according to their domains, and each split of (sub)graphs is used to estimate a graphon as a basis element. In this way, each basis element reflects transferable patterns from a specific domain, and all basis elements construct the domain graphon basis \(\{B_{i}\}_{\text{domain}}\).
**Topological graphon basis.** The third candidate is the topological graphon basis \(\{B_{i}\}_{\text{topo}}\). The topological similarity between the pre-training and the downstream data serves as a crucial indicator of transferability. For example, a downstream social network might benefit from similar topological patterns in academic or web networks (_e.g._, the closed triangle structure indicates a stable relationship in all these networks). The problem of finding a topological graphon basis can then be converted to partitioning the \(n\) (sub)graphs sampled from the pre-training data into \(k\) splits according to their topological similarity, where each split contains (sub)graphs with similar topology. Each element of the graphon basis (_i.e._, a graphon) fitted from one split of (sub)graphs is expected to characterize a specific kind of topological transferable pattern.
However, the challenge is that for graph structured data that is irregular and complex, we cannot directly measure the topological similarity between graphs. To tackle this problem, we introduce a _graph feature extractor_ that maps arbitrary graph into a fixed-length vector representation. To approach a comprehensive and representative set of topological features, we here consider both node-level and graph-level properties.
For node-level topological features, we first apply a set of node-level property functions \([\phi_{1}(v),\cdots,\phi_{m_{1}}(v)]\) for each node \(v\) in graph \(G\) to capture the local topological features around it. Considering that the numbers of nodes of two graphs are possibly different, we introduce an aggregation function AGG to summarize the node-level property of all nodes over \(G\) to a real number AGG\((\{\phi_{i}(v),v\in G\})\). We can thus obtain the node-level topological vector representation as follows.
\[h_{\text{node}}(G)=[\text{AGG}(\{\phi_{1}(v),v\in G\}),\cdots,\text{AGG}(\{ \phi_{m_{1}}(v),v\in G\})].\]
In practice, we calculate degree (Belleelle and Solla, 2017), clustering coefficient (Kang et al., 2017) and closeness centrality (Kang et al., 2017) for each node and instantiate the aggregation function AGG as the mean aggregator.
For graph-level topological features, we also employ a set of graph-level property functions \([y_{1}(G),\cdots,y_{m_{2}}(G)]\) for each graph \(G\) to serve as the vector representation
\[h_{\text{graph}}(G)=[y_{1}(G),\cdots,y_{m_{2}}(G)],\]
where density (Kang et al., 2017), assortativity (Kang et al., 2017), and transitivity (Kang et al., 2017) are adopted as graph-level properties here\(^{2}\).
Footnote 2: Other graph-level properties can also be utilized like _diameter and Wiener index_, but we do not include them due to their high computational complexity.
The final representation of \(G\) produced by the graph feature extractor is
\[h=[h_{\text{node}}(G)\,||\,h_{\text{graph}}(G)]\in\mathbb{R}^{m_{1}+m_{2}},\]
where \(||\) is the concatenation operator that combines both node-level and graph-level features. Given the topological vector representations, we leverage the efficient clustering algorithm K-Means (Kang et al., 2017) to obtain \(k\) splits of (sub)graphs and finally fit each split into a graphon as one element of the topological graphon basis.
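The feature extractor and the \(k\)-split clustering can be sketched as follows, assuming NetworkX and scikit-learn as the implementation stack (the paper does not name its libraries):

```python
import numpy as np
import networkx as nx
from sklearn.cluster import KMeans

def topo_features(G):
    """Mean-aggregated node-level properties concatenated with graph-level ones."""
    node_feats = [
        np.mean(list(dict(G.degree()).values())),           # degree
        np.mean(list(nx.clustering(G).values())),           # clustering coefficient
        np.mean(list(nx.closeness_centrality(G).values())), # closeness centrality
    ]
    graph_feats = [
        nx.density(G),
        nx.degree_assortativity_coefficient(G),             # may be NaN for regular graphs
        nx.transitivity(G),
    ]
    return np.nan_to_num(np.array(node_feats + graph_feats))

def topo_splits(graphs, k=5):
    """Cluster (sub)graphs by topological vectors; each split yields one basis element."""
    X = np.stack([topo_features(G) for G in graphs])
    labels = KMeans(n_clusters=k, max_iter=300, n_init=10).fit_predict(X)
    return [[G for G, c in zip(graphs, labels) if c == i] for i in range(k)]
```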
**Optimization solution.** Given the above-mentioned three graphon bases, the choice of graphon basis \(\{B_{i}\}\) can be specified to one of them. In this way, the pre-training feasibility (simplified as \(\zeta\)) could be approximated in the reduced search space of graphon basis as
\[\zeta\leftarrow-\text{MIN}(\{\inf_{\{a_{i}\}}\text{dist}(f(\{a_{i}\},\{B_{i} \}),B_{\text{down}}),\forall\{B_{i}\}\in\mathcal{B}\}), \tag{6}\]
where \(\mathcal{B}=\{\{B_{i}\}_{\text{topo}},\{B_{i}\}_{\text{domain}},\{B_{i}\}_{\text{integr}}\}\) is the reduced search space of \(\{B_{i}\}\). Thus, the problem can be naturally split into three sub-problems with objectives \(\operatorname{dist}(f(\{a_{i}\},\{B_{i}\}_{\text{topo}}),B_{\text{down}})\), \(\operatorname{dist}(f(\{a_{i}\},\{B_{i}\}_{\text{domain}}),B_{\text{down}})\) and \(\operatorname{dist}(f(\{a_{i}\},\{B_{i}\}_{\text{integr}}),B_{\text{down}})\), respectively. Each sub-problem can be solved by updating the corresponding learnable parameters \(\{a_{i}\}\) with multiple gradient descent steps. Taking one step as an example, we have
\[\{a_{i}\}=\{a_{i}\}-\eta\nabla_{\{a_{i}\}}\text{dist}(f(\{a_{i}\},\{B_{i}\}),B _{\text{down}}) \tag{7}\]
where \(\eta\) is the learning rate. Finally, we obtain three infimum distances under the different \(\{B_{i}\}\in\mathcal{B}\); the minimum value among them is the approximation of the pre-training feasibility. In practice, we adopt an efficient and differentiable approximation of the GW distance, _i.e._, the entropic regularization GW distance (Kang et al., 2017), as the distance function. For graphon estimation, we use the "largest gap" method to estimate each graphon \(B_{i}\).
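The update (7) can be sketched as below, under two assumptions of ours: all basis graphons are estimated at a common block resolution so they can be mixed elementwise, and a recent POT release is used whose entropic GW solver is differentiable under the PyTorch backend. A softmax parametrization keeps \(\{a_{i}\}\) a convex combination, consistent with the convex-hull view of the generator space in Section 5.

```python
import torch
import ot  # POT with PyTorch backend (assumed to support autograd here)

def fit_alpha(basis, B_down, steps=200, eta=0.05, epsilon=0.05):
    """Learn combination weights {a_i} minimizing the entropic GW distance
    between f = sum_i a_i * B_i and B_down (Eqs. (6)-(7))."""
    basis = [torch.as_tensor(B, dtype=torch.float64) for B in basis]
    B_down = torch.as_tensor(B_down, dtype=torch.float64)
    p = torch.full((basis[0].shape[0],), 1.0 / basis[0].shape[0], dtype=torch.float64)
    q = torch.full((B_down.shape[0],), 1.0 / B_down.shape[0], dtype=torch.float64)
    logits = torch.zeros(len(basis), dtype=torch.float64, requires_grad=True)
    opt = torch.optim.Adam([logits], lr=eta)
    for _ in range(steps):
        a = torch.softmax(logits, dim=0)
        f = sum(a_i * B_i for a_i, B_i in zip(a, basis))  # mixed graphon
        dist = ot.gromov.entropic_gromov_wasserstein2(
            f, B_down, p, q, loss_fun="square_loss", epsilon=epsilon)
        opt.zero_grad()
        dist.backward()
        opt.step()
    return torch.softmax(logits, dim=0).detach(), dist.item()
```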
### Computation Complexity
We now show that the time complexity of W2PGNN is much lower than that of the traditional solution. Suppose that we have \(n_{1}\) and \(n_{2}\) (sub)graphs sampled from the pre-training data and downstream data respectively, and denote \(|V|\) and \(|E|\) as the average number of nodes and edges per (sub)graph. The overall time complexity of W2PGNN is \(O((n_{1}+n_{2})|V|^{2})\). For comparison, the traditional solution in Figure 1(a) to estimate the pre-training feasibility must make \(l_{1}\times l_{2}\) "pre-train and fine-tune" attempts if there exist \(l_{1}\) pre-training models and \(l_{2}\) fine-tuning strategies. Suppose the batch size of pre-training is \(b\) and the representation dimension is \(d\). The overall time complexity of the traditional solution is \(O\left(l_{1}l_{2}((n_{1}+n_{2})(|V|^{3}+|E|d)+n_{1}bd)\right)\). Detailed analysis can be found in Appendix D.
## 5. Theoretical Analysis
In this section, we theoretically analyze the rationality of the generator space and possible downstream space in W2PGNN. Detailed proofs of the following theorems can be found in Appendix A.
### Theoretical Justification of Generator Space
Our generator preserves the properties of graphons. We first theoretically prove that any generator in the generator space still preserves the properties of a graphon (_i.e._, a bounded symmetric function \(\left[0,1\right]^{2}\rightarrow\left[0,1\right]\)), summarized in the following theorem.
Theorem 5.1 ().: _For a set of graphon basis \(\{B_{i}\}\), the corresponding generator space \(\Omega=\{f(\{\alpha_{i}\},\{B_{i}\})\mid\forall\{\alpha_{i}\},\{B_{i}\}\}\) is the convex hull of \(\{B_{i}\}\)._
Our generator preserves the key transferable patterns in graphon basis.As a preliminary, we first introduce the concept of _graph motifs_ as a useful description of transferable graph patterns and leverage _homomorphism density_ as a measure to quantify the degree to which the patterns inherited in a graphon.
Definition 3 (Graph motifs(Graham, 1994)).: _Given a graph \(G=(V,E)\) (\(V\) and \(E\) are node set and edge set), graph motifs are substructure \(F=(V^{\prime},E^{\prime})\) that recur significantly in statistics, where \(V^{\prime}\subset V,E^{\prime}\subset E\) and \(|V^{\prime}|\ll|V|\)._
Graph motifs can be roughly taken as the key transferable graph patterns across graphs (Zhu et al., 2017). For example, the "feedforward loop" motif carries the same meaning across networks of control systems, gene systems, or organisms.
Then, we introduce the measure of homomorphism density \(t(F,B)\) to quantify the relative frequency of the key transferable pattern, _i.e._, graph motifs \(F\), inherited in graphon \(B\).
Definition 4 (Homomorphism density(Graham, 1994)).: _Consider a graph motif \(F=(V^{\prime},E^{\prime})\). We define a homomorphism of \(F\) into graph \(G=(V,E)\) as an adjacency-preserving map \(\sigma\) from \(V^{\prime}\) to \(V\), where \((i,j)\in E^{\prime}\) implies \((\sigma(i),\sigma(j))\in E\). There could be multiple maps from \(V^{\prime}\) to \(V\), but only some of them are homomorphisms. Therefore, the homomorphism density \(t(F,G)\) is introduced to quantify the relative frequency with which the graph motif \(F\) appears in \(G\)._
_Analogously, the homomorphism density can be extended from graphs to a graphon \(B\). We denote by \(t(F,B)\) the homomorphism density of graph motif \(F\) in graphon \(B\), which represents the relative frequency of \(F\) occurring in a collection of graphs \(\{G_{i}\}\) that converges to graphon \(B\), i.e., \(t(F,B)=\lim_{i\rightarrow\infty}t(F,G_{i})\)._
Now, we are ready to quantify how much the transferable patterns in graphon basis can be preserved in our generator by exploring the difference between the homomorphism density of graph motifs into the graphon basis and that into our generator.
Theorem 5.2 ().: _Assume a graphon basis \(\{B_{1},\cdots,B_{k}\}\) and their convex combination \(f(\{\alpha_{i}\},\{B_{i}\})=\sum_{i=1}^{k}\alpha_{i}B_{i}\). The \(a\)-th element of the graphon basis, \(B_{a}\), corresponds to a motif set. For each motif \(F_{a}\) in the motif set, the difference between the homomorphism density of \(F_{a}\) in \(f(\{\alpha_{i}\},\{B_{i}\})\) and that in basis element \(B_{a}\) is upper bounded by_

\[|t(F_{a},f(\{\alpha_{i}\},\{B_{i}\}))-t(F_{a},B_{a})|\leq\sum_{b=1,b\neq a}^{k}|F_{a}|\,\alpha_{b}\,\|B_{b}-B_{a}\|_{\square} \tag{8}\]

_where \(|F_{a}|\) represents the number of nodes in motif \(F_{a}\), and \(\|\cdot\|_{\square}\) denotes the cut norm._
Theorem 5.2 indicates the graph motifs (_i.e._, key transferable patterns) inherited in each basis element can be preserved in our generator, which justifies the rationality to take the generator as a representative and comprehensive summary of pre-training data.
### Theoretical Justification of Possible Downstream Space
The possible downstream space includes the graphs generated from generator \(f(\{\alpha_{i}\},\{B_{i}\})\). We here provide a theoretical justification that the generated graphs in possible downstream space can inherit key transferable graph patterns (_i.e._, graph motifs) in the generator.
Theorem 5.3 ().: _Given a graph generator \(f(\{\alpha_{i}\},\{B_{i}\})\), we can obtain a sufficient number of random graphs \(\mathbb{G}=\mathbb{G}(n,f(\{\alpha_{i}\},\{B_{i}\}))\) with \(n\) nodes generated from \(f(\{\alpha_{i}\},\{B_{i}\})\). The homomorphism density of graph motif \(F\) in \(\mathbb{G}\) can be considered approximately equal to that in \(f(\{\alpha_{i}\},\{B_{i}\})\) with high probability, which can be represented as_
\[\mathrm{P}(|t(F,\mathbb{G})-t(F,f(\{\alpha_{i}\},\{B_{i}\}))|>\varepsilon) \leq 2\exp\left(-\frac{\varepsilon^{2}n}{8\nu(F)^{2}}\right), \tag{9}\]
_where \(\nu(F)\) denotes the number of nodes in \(F\), and \(0\leq\varepsilon\leq 1\)._
Theorem 5.3 indicates that the homomorphism density of graph motifs into the generated graphs in the possible downstream space can be inherited from our generator to a significant degree.
## 6. Experiments
In this section, we evaluate the effectiveness of W2PGNN with the goal of answering the following questions: (1) Given the pre-training and downstream data, is the feasibility of pre-training estimated by W2PGNN positively correlated with the downstream performance (Use case 2)? (2) When the downstream data is provided, does the pre-training data selected by W2PGNN actually help improve the downstream performance (Use case 3)?
Note that it is impractical to empirically evaluate the application scope of graph pre-trained models (Use case 1), as we cannot enumerate all graphs in the possible downstream space. Nevertheless, by answering question (1), it can be indirectly verified that a part of the graphs in the possible downstream space, _i.e._, the downstream graphs with high feasibility, indeed benefit from the pre-training.
### Experimental Setup
We validate our proposed framework on both node classification and graph classification task.
**Datasets.** For the node classification task, we directly adopt six datasets from (Zhu et al., 2017) as the candidates of pre-training data, consisting of Academia, DBLP(SNAP), DBLP(NetRep), IMDB, Facebook and LiveJournal (from the academic, movie and social domains). Regarding the downstream datasets, we adopt US-Airport and H-Index from (Zhu et al., 2017) and additionally add two more datasets, Chameleon and Europe-Airport, for more comprehensive results.
For the graph classification task, we choose the large-scale dataset ZINC15 (Zhu et al., 2017) containing 2 million unlabeled molecules. To enrich the follow-up experimental analysis, we use scaffold split to partition ZINC15 into five datasets (ZINC15-0, ZINC15-1, ZINC15-2, ZINC15-3 and ZINC15-4) according to their scaffolds (Kong et al., 2017), such that the scaffolds are different in each dataset. Regarding the downstream datasets, we use five classification benchmarks contained in MoleculeNet (Wang et al., 2017): BACE, BBBP, MUV, HIV and ClinTox.
The dataset details are summarized in Appendix B.
**Baseline of graph pre-training measures.** The baselines can be divided into 3 categories: (1) EGI (Wang et al., 2017), which computes the difference between the graph Laplacian of (sub)graphs from the pre-training data and that from the downstream data; (2) Graph Statistics, by which we combine average degree, degree variance, density, degree assortativity coefficient, transitivity and average clustering coefficient to construct a topological vector for each (sub)graph; (3) Clustering Coefficient, Spectrum of Graph Laplacian, and Betweenness Centrality, by which we adopt the distributions of graph properties as topological vectors. For the second and third categories of baselines, we calculate the negative value of the Maximum Mean Discrepancy (MMD) distance between the obtained topological vectors of the (sub)graphs from the pre-training data and those from the downstream data.
Note that in all baselines, the distance/difference is computed between one ego-network (for node classification) or graph (for graph classification) from pre-training data and another one from downstream data. For efficiency, when conducting node classification, we randomly sample 10% nodes and extract their 2-hop ego-networks for each candidate pre-training dataset, and extract 2-hop ego-networks of all nodes for each downstream dataset. For graph classification, we randomly select 10% graphs for each candidate pre-training dataset and downstream dataset. Then we take the average of all distances/differences as the final measure.
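For reference, a minimal version of the (negative) MMD used by the second and third baseline categories; the RBF kernel and its bandwidth are our assumptions, since the paper does not specify the kernel:

```python
import numpy as np

def rbf_mmd2(X, Y, gamma=1.0):
    """Squared MMD between two sets of topological vectors under an RBF kernel."""
    def gram(A, B):
        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)  # pairwise sq. distances
        return np.exp(-gamma * sq)
    return gram(X, X).mean() + gram(Y, Y).mean() - 2.0 * gram(X, Y).mean()

# The baseline measure is the negative distance: higher means more similar data.
# measure = -rbf_mmd2(pretrain_vectors, downstream_vectors)
```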
**Implementation Details.** For node classification tasks, we randomly sample 1000 nodes for each pre-training dataset and extract 2-hop ego-networks of the sampled nodes to compose our input space, and extract 2-hop ego-networks of all nodes in each downstream dataset to estimate the graphon. For graph classification tasks, we take all graphs in each pre-training dataset to compose our input space and use all graphs in each downstream dataset to estimate its corresponding graphon. When constructing the topological graphon basis, we set the number of clusters \(k=5\). The maximum number of K-Means iterations is set to 300. When constructing the domain graphon basis, we take each pre-training dataset as a domain. For graphon estimation, we use the largest gap (Beng et al., 2017) approach and set the block size of the graphon to the average number of nodes over all graphs. When learning \(\alpha_{i}\), we adopt Adam as the optimizer and set the learning rate \(\eta\) to 0.05. When calculating the GW distance, we utilize its differentiable and efficient version, the entropic regularization GW distance, with default hyperparameters (Krizhevsky et al., 2012).
### Results of Pre-training Feasibility
**Setup.** When evaluating the pre-training feasibility, since its ground truth is unavailable, we adopt the best downstream performance among a set of graph pre-training models as the ground truth.
\begin{table}
\begin{tabular}{l c c c c c c c c c c c} \hline \hline & \multicolumn{6}{c}{\(N=2\)} & \multicolumn{6}{c}{\(N=3\)} \\ \cline{2-13} & US-Airport & Europe-Airport & H-index & Chameleon & Rank & US-Airport & Europe-Airport & H-index & Chameleon & Rank \\ \hline Graph Statistics & -0.6068 & 0.3571 & -0.6220 & -0.2930 & 10 & -0.7096 & -0.5052 & -0.2930 & -0.8173 & 10 \\ EGI & 0.6672 & -0.6077 & -0.2152 & -0.2680 & 9 & -0.2358 & -0.5540 & -0.2822 & -0.6511 & 9 \\ Clustering Coefficient & -0.0273 & 0.1519 & 0.3622 & 0.3130 & 5 & -0.0039 & 0.2069 & 0.4829 & 0.2279 & 4 \\ Spectrum of Graph Laplacian & -0.2023 & 0.1467 & 0.0794 & 0.0095 & 8 & -0.7648 & -0.4311 & 0.2611 & -0.2300 & 8 \\ Betweenness Centrality & -0.2739 & -0.2554 & 0.2051 & 0.2241 & 7 & -0.3421 & -0.5903 & 0.1364 & 0.0849 & 7 \\ \hline W2PGNN (interject) & 0.3579 & 0.1224 & 0.3313 & 0.1072 & 6 & 0.0841 & 0.5310 & 0.4213 & -0.0916 & 6 \\ W2PGNN (domain) & **0.4774** & 0.4666 & 0.6775 & 0.3460 & 3 & **0.7132** & 0.5523 & **0.7381** & 0.1857 & 3 \\ W2PGNN (topo) & 0.2059 & 0.3908 & 0.3745 & 0.4464 & 4 & 0.4900 & 0.5061 & 0.4072 & 0.1497 & 5 \\ W2PGNN (\(x=1\)) & 0.4172 & 0.5206 & 0.6829 & 0.4391 & 2 & 0.5282 & 0.6663 & 0.7240 & **0.3246** & 1 \\ W2PGNN & 0.3941 & **0.5336** & **0.7162** & **0.4838** & 1 & 0.5089 & **0.6706** & 0.6754 & 0.3166 & 2 \\ \hline \hline \end{tabular}
\end{table}
Table 1. Pearson correlation coefficient between the estimated pre-training feasibility and the best downstream performance on node classification. \(N\) denotes the number of candidate pre-training datasets that form the pre-training data. Bold indicates the highest coefficient. “Rank” represents the overall ranking on all downstream datasets.
\begin{table}
\begin{tabular}{l c c c c c c c c c c c} \hline \hline & \multicolumn{6}{c}{\(N=2\)} & \multicolumn{6}{c}{\(N=3\)} \\ \cline{2-13} & BACE & BBBP & MUV & HIV & ClinTox & Rank & BACE & BBBP & MUV & HIV & ClinTox & Rank \\ \hline Graph Statistics & -0.4118 & -0.1328 & 0.3858 & 0.0174 & -0.3577 & 9 & -0.3093 & -0.1430 & 0.1946 & 0.3545 & -0.1372 & 7 \\ EGI & 0.2912 & -0.6862 & 0.4488 & 0.0587 & 0.0452 & 7 & 0.4570 & 0.3230 & 0.3024 & 0.4144 & -0.0085 & 3 \\ Clustering Coefficient & -0.5098 & -0.5097 & 0.3754 & 0.4738 & 0.5154 & 8 & -0.4080 & 0.3217 & -0.1190 & -0.2483 & -0.4248 & 9 \\ Spectrum of Graph Laplacian & -0.0633 & -0.4878 & -0.3413 & -0.1125 & -0.2562 & 10 & -0.3563 & -0.1611 & -0.2294 & -0.2448 & 0.3001 & 8 \\ Betweenness Centrality & -0.0021 & -0.7755 & 0.4040 & 0.0339 & 0.3411 & 6 & -0.3695 & -0.4568 & -0.2752 & -0.3035 & -0.2129 & 10 \\ \hline W2PGNN (interject) & 0.7547 & **0.7790** & 0.2907 & 0.7033 & 0.5639 & 3 & 0.4081 & 0.4687 & -0.0567 & 0.3802 & 0.4354 & 5 \\ W2PGNN (domain) & 0.7334 & 0.7689 & 0.5395 & 0.6831 & 0.5431 & 5 & 0.0864 & 0.3680 & 0.0187 & 0.4784 & 0.3765 & 6 \\ W2PGNN (topo) & 0.6656 & 0.7164 & 0.8131 & **0.7391** & 0.5406 & 2 & 0.1109 & 0.5357 & 0.0514 & 0.3265 & 0.4724 & 4 \\ W2PGNN (\(a=1\)) & 0.6549 & 0.7690 & 0.6730 & 0.7033 & 0.5639 & 4 & 0.5287 & **0.7102** & 0.1925 & 0.5893 & 0.5430 & 2 \\ W2PGNN & **0.7549** & 0.7767 & **0.8131** & 0.7044 & **0.5784** & **1** & **0.6207** & 0.6696 & **0.5227** & **0.6529** & **0.5994** & 1 \\ \hline \hline \end{tabular}
\end{table}
Table 2. Pearson correlation coefficient between the estimated pre-training feasibility and the best downstream performance on graph classification.
For node classification tasks, we use the following 4 graph pre-training models: GraphCL [52] and GCC models [32] with three different hyper-parameters (_i.e._, 128, 256 and 512 rw-hops). For graph classification tasks, we adopt 7 SOTA pre-training models: AttrMasking [15], ContextPred [15], EdgePred [15], Infomax [15], GraphCL [52], GraphMAE [14] and JOAO [51]. When pre-training, we directly use the default hyper-parameters of the pre-training models except the rw-hops in GCC. During fine-tuning, we freeze the parameters of the pre-trained models and utilize logistic regression as the classifier for node classification and an SVM as the classifier for graph classification, following [32] and its fine-tuning hyper-parameters. The downstream results are reported as the average over 10 runs of Micro-F1 on node classification and ROC-AUC on graph classification, respectively. For each downstream task, the best performance among all methods is regarded as the ground truth.
For a comprehensive evaluation on the correlation between the estimated pre-training feasibility and the above ground truth (_i.e._, best downstream performance), we need to construct multiple \(\langle\mathcal{G}_{\text{train}},\mathcal{G}_{\text{down}}\rangle\) sample pairs as our evaluation samples. When constructing the \(\langle\mathcal{G}_{\text{train}},\mathcal{G}_{\text{down}}\rangle\) sample pairs for each downstream data, multiple pre-training data are required to be paired with it. Hence we adopt the following two settings to augment the choice of pre-training data for more possibilities. Here we use \(N\) as the number of dataset candidates contained in pre-training data. (1) For \(N=2\) setting, we randomly select 2 pre-training dataset candidates as pre-training data and enumerate all possible cases. (2) For \(N=3\) setting, we randomly select 3 pre-training dataset candidates as pre-training data. We enumerate all possible cases for graph classification tasks and randomly select 40% of all cases for node classification tasks for efficiency.
**Results.** Table 1 (for node classification) and Table 2 (for graph classification) show the Pearson correlation coefficient between the best downstream performance and the pre-training feasibility estimated by W2PGNN and the baselines for each downstream dataset. A higher coefficient indicates a better estimation of pre-training feasibility. We also include 4 variants of W2PGNN: W2PGNN (integr), W2PGNN (domain) and W2PGNN (topo) only utilize the integrated graphon basis, domain graphon basis and topological graphon basis to approximate feasibility, respectively, and W2PGNN (\(\alpha=1\)) directly sets the learnable combination weights \(\{\alpha_{i}\}\) to the fixed constant 1. We have the following observations. (1) Our model achieves the highest overall ranking in most cases, indicating the superiority of the proposed framework. (2) The measures provided by other baselines sometimes show no correlation or a negative correlation with the best downstream performance. (3) Comparing W2PGNN with its 4 variants, we find that although the variants sometimes achieve superior performance on some downstream datasets, they cannot consistently perform well on all datasets. In contrast, the top-ranked W2PGNN provides a more comprehensive picture with various graphon bases and learnable combination weights.
To provide a deeper understanding of the feasibility estimated by W2PGNN, Figure 3 shows the estimated pre-training feasibility (x-axis) versus the best downstream performance on node classification (y-axis) for all \(\langle\)pre-training data, downstream data\(\rangle\) pairs (one point represents the result of one pair) when the selection budget is 2. The plots for a selection budget of 3 and the plots for graph classification can be found in Appendix C.1. We find that there exists a strong positive correlation between the estimated pre-training feasibility and the best downstream performance on
Figure 3. Pre-training feasibility vs. the best downstream performance on node classification when the selection budget is 2.
\begin{table}
\begin{tabular}{l c c c c c c c c c c} \hline \hline & \multicolumn{6}{c}{\(N=2\)} & \multicolumn{6}{c}{\(N=3\)} \\ \cline{2-10} & US-Airport & Europe-Airport & H-index & Chameleon & Rank & US-Airport & Europe-Airport & H-index & Chameleon & Rank \\ \hline All Datasets & 65.62 & 55.65 & 75.22 & 46.81 & - & 65.62 & 55.65 & 75.22 & 46.81 & - \\ \hline Graph Statistics & 64.20 & 53.36 & 74.30 & 44.31 & 4 & 62.27 & 54.58 & 72.88 & 43.87 & 5 \\ EGI & **64.96** & 57.37 & 74.30 & 43.21 & 2 & 62.27 & 57.36 & 72.28 & 45.93 & 3 \\ Clustering Coefficient & 62.61 & 52.87 & **77.74** & 43.21 & 3 & 62.94 & 54.58 & 75.18 & 44.66 & 4 \\ Spectrum of Graph Laplacian & 61.76 & **57.88** & 73.14 & 42.20 & 5 & **63.95** & 54.87 & 73.90 & 44.66 & 2 \\ Betweenness Centrality & **64.96** & 52.87 & 73.50 & 41.63 & 6 & 62.27 & 54.87 & 75.18 & 43.87 & 6 \\ \hline W2PGNN & **64.96** & **57.88** & 77.24 & **45.54** & 1 & **63.95** & **57.59** & **75.68** & **46.07** & 1 \\ \hline \hline \end{tabular}
\end{table}
Table 3. Node classification results when performing pre-training on different selected pre-training data. We also provide the results of using all pre-training data without selection for reference (see "All Datasets" in the table).
all downstream datasets, which further supports the validity of our feasibility estimation.
### Results of Pre-Training Data Selection
Given the downstream data, a collection of pre-training dataset candidates and a selection budget (_i.e._, the number of datasets selected for pre-training) due to limited resources, we aim to select the pre-training data with the highest feasibility, so as to benefit the downstream performance.
**Setup.** We here adopt two settings, with the selection budget set to 2 and 3 respectively. The datasets that were augmented for more pre-training data choices in Section 6.2 can be directly used as the candidate pre-training datasets here. The selected pre-training data then serves as the input of the graph pre-training model. For node classification tasks, we adopt GCC as the pre-training model, because it is a pre-training model that generalizes across domains and most of the datasets used for node classification are taken from it (Zhou et al., 2019). For graph classification tasks, we take GraphCL as the pre-training model, as it provides multiple graph augmentation approaches and is more general (Zhou et al., 2019).
**Results.** Table 3 shows the results of pre-training data selection on the node classification task (the results on graph classification are included in Appendix C.2). We have the following observations. (1) The pre-training data selected by W2PGNN ranks first, i.e., it is the most suitable for the downstream task. (2) Sometimes a simple graph property like the clustering coefficient serves as a good selection criterion on a specific dataset (_i.e._, H-index) when the budget of pre-training data is 2. This is because H-index exhibits the largest clustering coefficient compared to other downstream datasets (see Table 4), which facilitates data selection via the clustering coefficient. However, such a simple graph property is only applicable when the downstream dataset shows a strong indicator of the property, and it is not helpful when more datasets must be selected for pre-training (see results under \(N=3\)). (3) Moreover, it is also interesting to see that using all pre-training data is not always a reliable choice. We find that carefully selecting pre-training data can not only benefit downstream performance but also reduce computational resources.
## 7. Conclusion
This paper proposes the W2PGNN framework to answer the question of _when to pre-train_ GNNs based on the generative mechanisms from pre-training to downstream data. W2PGNN designs a graphon-based graph generator to summarize the knowledge in the pre-training data, and the generator can in turn produce the solution space of downstream data that can benefit from the pre-training. W2PGNN is theoretically and empirically shown to have great potential to provide the application scope of graph pre-training models, estimate the feasibility of pre-training, and help select pre-training data.
|
2308.10373 | HoSNN: Adversarially-Robust Homeostatic Spiking Neural Networks with
Adaptive Firing Thresholds | While spiking neural networks (SNNs) offer a promising neurally-inspired
model of computation, they are vulnerable to adversarial attacks. We present
the first study that draws inspiration from neural homeostasis to design a
threshold-adapting leaky integrate-and-fire (TA-LIF) neuron model and utilize
TA-LIF neurons to construct the adversarially robust homeostatic SNNs (HoSNNs)
for improved robustness. The TA-LIF model incorporates a self-stabilizing
dynamic thresholding mechanism, offering a local feedback control solution to
the minimization of each neuron's membrane potential error caused by
adversarial disturbance. Theoretical analysis demonstrates favorable dynamic
properties of TA-LIF neurons in terms of the bounded-input bounded-output
stability and suppressed time growth of membrane potential error, underscoring
their superior robustness compared with the standard LIF neurons. When trained
with weak FGSM attacks (attack budget = 2/255) and tested with much stronger
PGD attacks (attack budget = 8/255), our HoSNNs significantly improve model
accuracy on several datasets: from 30.54% to 74.91% on FashionMNIST, from 0.44%
to 35.06% on SVHN, from 0.56% to 42.63% on CIFAR10, from 0.04% to 16.66% on
CIFAR100, over the conventional LIF-based SNNs. | Hejia Geng, Peng Li | 2023-08-20T21:47:54Z | http://arxiv.org/abs/2308.10373v3 | # HoSNN: Adversarially-Robust Homeostatic Spiking Neural Networks with Adaptive Firing Thresholds
###### Abstract
Spiking neural networks (SNNs) offer promise for efficient and powerful neurally inspired computation. Common to other types of neural networks, however, SNNs face the severe issue of vulnerability to adversarial attacks. We present the first study that draws inspiration from neural homeostasis to develop a bio-inspired solution that counters the susceptibilities of SNNs to adversarial onslaughts. At the heart of our approach is a novel threshold-adapting leaky integrate-and-fire (TA-LIF) neuron model, which we adopt to construct the proposed adversarially robust homeostatic SNN (HoSNN). Distinct from traditional LIF models, our TA-LIF model incorporates a self-stabilizing dynamic thresholding mechanism, curtailing adversarial noise propagation and safeguarding the robustness of HoSNNs in an unsupervised manner. Theoretical analysis is presented to shed light on the stability and convergence properties of the TA-LIF neurons, underscoring their superior dynamic robustness under input distributional shifts over traditional LIF neurons. Remarkably, without explicit adversarial training, our HoSNNs demonstrate inherent robustness on CIFAR-10, with accuracy improvements to 72.6% and 54.19% against FGSM and PGD attacks, up from 20.97% and 0.6%, respectively. Furthermore, with minimal FGSM adversarial training, our HoSNNs surpass previous models by 29.99% under FGSM and 47.83% under PGD attacks on CIFAR-10. Our findings offer a new perspective on harnessing biological principles for bolstering SNNs adversarial robustness and defense, paving the way to more resilient neuromorphic computing.
## Introduction
Neural networks have demonstrated remarkable capabilities across various tasks, yet they are notoriously susceptible to adversarial attacks. These attacks perturb the input data subtly, leading well-trained models to misclassify while the perturbations remain virtually invisible to human observers. This vulnerability, initially discovered in the early works of [23], threatens the reliability of neural networks as they are increasingly deployed in safety-critical applications such as autonomous vehicles and medical imaging. Addressing this issue not only ensures trustworthiness in AI systems but also has remained a vital research focus in contemporary machine learning.
Over the years, significant strides have been made in the field of adversarial robustness to generate adversarial examples, including the Fast Gradient Sign Method (FGSM) [13], DeepFool [17], and Projected Gradient Descent [1]. Among various defense strategies, adversarial training has proven effective. Yet, it is not without limitations, as it grapples with high computational costs and challenges in migrating between different attack paradigms. This complexity underlines the need for novel perspectives and methods to improve robustness in neural networks.
SNNs simulate the way biological neurons convey information via spikes. Originating in the early 1990s, SNNs research introduced models such as the Spike Response Model (SRM) and the Leaky Integrate-and-Fire (LIF) model [1]. The focus in SNNs has been on learning algorithms, architectures, and spike coding. With industry players like IBM and Intel delving into SNNs hardware, innovations such as IBM's TrueNorth [1] and Intel's Loihi [1] have emerged. These devices exploit SNNs' efficiency, highlighting their potential for neuromorphic computing and advancing our grasp of biological neural mechanisms [10].
However, the question of robustness in SNNs remains underexplored. Despite sharing vulnerabilities to adversarial attacks with their non-spiking counterparts [20, 1], SNNs pose an intriguing contradiction. In nature, biological neural systems, which SNNs aim to emulate, demonstrate resilience against a variety of perturbations and noise, rarely exhibiting the susceptibilities seen in artificial networks [11]. This disparity in SNNs prompts a reassessment of their design and principles, suggesting that enhancing robustness not only defends against adversarial attacks but also narrows the gap between artificial and biological neural systems, deepening our understanding of both domains [14]. Meanwhile, current research on SNNs robustness often neglects the biological interpretability of SNNs and the dynamics of LIF neurons, facing issues such as high computational costs and low transferability.
In light of such dilemmas, we ask: Why are biological nervous systems impervious to adversarial noise? How do homeostatic mechanisms contribute to the robustness of network activity? Can we exploit the dynamics of SNNs to suppress the propagation of adversarial noise through homeostatic mechanisms? Our investigation, inspired by the inherent robustness of biological neural systems against adversarial perturbations, aims to address these questions. The key contributions of our work can be summarized as follows:
* We first connect the biological homeostasis mechanism and adversarial robustness in SNNs, introducing an efficient and effective bio-inspired solution to address the vulnerabilities of SNNs under adversarial attacks.
* We introduce the Neural Dynamic Signature (NDS), which encapsulates the activation degree of specific semantic information represented by each neuron and serves as an anchor signal to stabilize neuron behaviors against distributional shifts.
* We design a novel threshold-adapting leaky integrate-and-fire (TA-LIF) neuron model, which incorporates a self-stabilizing thresholding mechanism. Theoretical analysis confirms that our model can suppress the propagation of out-of-distributional noise in an unsupervised way.
* Building upon TA-LIF neurons, we introduce the inherently adversarially robust Homeostatic SNN (HoSNN) and demonstrate that its training algorithm can seamlessly integrate with various SNN learning methods, without significant computational overhead.
* Our experiments demonstrate that HoSNNs consistently exhibit inherent robustness in image classification tasks even without adversarial training, and further amplify the benefits of adversarial training when applied. Notably, our results either surpass or closely align with state-of-the-art benchmarks.
## Related Work
### Adversarial Attacks
Adversarial perturbations continue to pose a significant challenge in deep learning. By exploiting the intricacies of high-dimensional data and non-linear mappings in a model, these subtle but well-crafted alterations can cause drastic changes to model predictions. Such attacks extend beyond the realm of theoretical interest, posing genuine risks in practical applications ranging from autonomous vehicles to medical imaging Akhtar and Mian (2018).
Two notable adversarial attack techniques are the Fast Gradient Sign Method (FGSM) Goodfellow et al. (2014) and the Projected Gradient Descent (PGD) Madry et al. (2017). FGSM uses the loss gradients concerning input data to quickly produce adversarial samples. Let \(x\) be the original input, \(y\) the true label, \(J(\theta,x,y)\) the loss function with network parameters \(\theta\), and \(\epsilon\) a small perturbation magnitude. The FGSM perturbed input \(x^{\prime}\) is:
\[x^{\prime}=x+\epsilon\cdot sign(\nabla_{x}J(\theta,x,y)) \tag{1}\]
PGD, essentially an iterative version of FGSM, perturbs the data over multiple iterations. With \(x_{n}\) as the perturbed input in the \(n\)-th iteration and \(\alpha\) as the step size, PGD updates by:

\[x_{n+1}=\operatorname{Proj}_{\mathcal{B}_{\epsilon}(x)}\left(x_{n}+\alpha\cdot sign(\nabla_{x}J(\theta,x_{n},y))\right) \tag{2}\]

where \(\operatorname{Proj}_{\mathcal{B}_{\epsilon}(x)}\) projects the iterate back onto the \(\ell_{\infty}\) ball of radius \(\epsilon\) around \(x\).
More gradient-based attacks work in a similar fashion, such as the RFGSM Tramer et al. (2017), a randomized version of FGSM and the Basic Iterative Method (BIM) Kurakin et al. (2018) as another iterative attack.
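For concreteness, minimal PyTorch sketches of (1) and (2) follow, assuming inputs normalized to \([0,1]\); the step size and iteration count mirror the values used later in our experiments (\(\alpha=0.01\), 7 steps):

```python
import torch

def fgsm(model, x, y, loss_fn, eps):
    """Eq. (1): one-step FGSM perturbation."""
    x = x.clone().detach().requires_grad_(True)
    loss_fn(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def pgd(model, x, y, loss_fn, eps, alpha=0.01, steps=7):
    """Eq. (2): iterative FGSM with projection onto the eps-ball around x."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        model.zero_grad()
        loss_fn(model(x_adv), y).backward()
        x_adv = x_adv + alpha * x_adv.grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)  # project back into the eps-ball
        x_adv = x_adv.clamp(0, 1).detach()
    return x_adv
```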
Adversarial attacks extend beyond the core concepts of gradient-based methods. A wide array of techniques with varied objectives exists, underlining the pressing need for advances in adversarial robustness research Biggio and Roli (2018). A significant area of concern is the rise of black-box attacks. These occur when an attacker, with limited model knowledge, successfully crafts adversarial examples using transferability. This phenomenon, where adversarial samples created for one model compromise another, underscores the profound vulnerabilities of neural networks and presents significant security concerns Papernot et al. (2016).
### Defense Methods
To combat adversarial threats, researchers have explored a plethora of defense strategies. Adversarial training focuses on retraining the model using a mixture of clean and adversarial examples Madry et al. (2017). Moreover, the Randomization technique introduces stochasticity during the inference phase to prevent precise adversarial attacks Xie et al. (2017). The Projection technique can revert these inputs back to a safer set, neutralizing the effects of the perturbations Mustafa et al. (2019). Lastly, the Detection strategy serves as a safeguard by identifying and responding to altered inputs Metzen et al. (2017).
While these defense methods offer an added layer of protection against adversarial attacks, each has its limitations Akhtar and Mian (2018). Adversarial training, despite its effectiveness, is computationally expensive, can result in a loss of accuracy on clean inputs, and is often difficult to transfer between different models. Randomization, projection, and detection strategies, on the other hand, do not fundamentally address the inherent vulnerabilities of neural networks. Thus, despite these methods providing incremental improvements to the robustness of a model, no single technique offers universal effectiveness.
Moreover, in the context of SNNs, which are valued for their biological plausibility, these methods are generally not considered biologically feasible. Thus, there's an urgent need for more efficient and biologically plausible solutions to enhance the robustness of neural networks, particularly for SNNs.
### Spiking Neural Networks Robustness
With the rise in popularity of SNNs for real-world applications, from odor recognition Imam and Cleland (2020) to autonomous driving Pei et al. (2019), the robustness of these networks has become a significant concern. Empirical studies have demonstrated that SNNs, analogous to artificial neural networks (ANNs), exhibit similar susceptibilities to adversarial attacks Sharmin et al. (2019); Ding et al. (2022).
A line of inquiry in this field has explored porting defensive strategies from ANNs to SNNs. For instance, Kundu et al. (2021) proposed an SNN training algorithm that enhances resilience by jointly optimizing thresholds and weights. [4] proposed a method of adversarial training enhanced by a Lipschitz constant regularizer. [10] established bounds for LIF neurons and designed corresponding certified training methods for SNNs. However, these works lack biological plausibility, a defining characteristic of SNNs. Additionally, they seem to import the shortcomings of ANN defensive methods into SNNs, including expensive computational costs and limited transferability of defenses between different attacks.
Another research direction has focused on studying the inherent robustness of SNNs, examining factors such as surrogate gradient algorithms, encoding methods, and hyperparameters. For example, [11] recognized the inherent resistance of SNNs to gradient-based adversarial attacks. [1] investigated the impact of internal structural parameters, such as firing voltage thresholds and time window boundaries. [15] underscored the robustness of the LIF model, particularly its noise-filtering capability. [13] explored the influence of inter-layer sparsity, and [16] examined the effects of surrogate gradient techniques on white-box attacks. Despite these advances, most studies fail to present effective SNN defense strategies. More importantly, two key facts were ignored: first, the biological neural systems that SNNs aim to simulate rarely exhibit adversarial vulnerability, yet little attention has been paid to the link between homeostatic mechanisms and network robustness; second, the dynamical characteristics of LIF neurons, a crucial difference between SNNs and ANNs, have not been thoroughly examined, particularly their stability under disturbances.
These observations motivate our work, aiming to simulate the homeostasis of the biological nervous system at the level of single neurons to develop a naturally robust SNN.
## Methods
We initiate our exploration by examining the dynamic activations of individual neurons, which we term the "Neural Dynamic Signature" (NDS). Drawing insights from the principle of biological homeostasis, we introduce and rigorously analyze a threshold-adapting leaky integrate-and-fire model (TA-LIF) with the objective of stabilizing neuronal dynamics. Subsequently, we delineate a variant of SNNs that incorporates TA-LIF neurons, referred to as homeostatic SNNs (HoSNNs).
### Neural Dynamic Signature (NDS)
Let \(\mathcal{D}\) represent the in-domain input data distribution with samples \(x\) possessing corresponding labels \(l\) used to train a SNN model with parameters \(\theta_{\mathcal{D}}\), which yields a prediction \(y(x|\theta_{\mathcal{D}})\) for \(x\). Upon introduction of adversarial noise, denoted by \(\delta x\), a perturbed sample \(x+\delta x\) emerges from an altered distribution \(\mathcal{D}^{\prime}\), resulting in a divergent prediction \(y^{\prime}(x+\delta x|\theta_{\mathcal{D}})\). Such crafted perturbations typically misguide the network, predominantly influencing neurons encapsulating pivotal semantic information [13]. This perturbation magnifies neuronal activity discrepancies through cascading layer interactions, culminating in a stark divergence between \(y\) and \(y^{\prime}\). The subtle shift from \(\mathcal{D}\) to \(\mathcal{D}^{\prime}\) jeopardizes model accuracy, rendering the finely-tuned parameter \(\theta_{\mathcal{D}}\) suboptimal or even completely ineffective [1, 1, 16, 17, 18].
In conventional ANNs, a neuron's output typically signifies the activation degree of its semantic representation, although such distributed encoding often eludes straightforward human interpretation [10]. Unlike ANNs, SNNs operate in a spatiotemporal manner over a richer set of variables, including the time-indexed membrane potential \(u_{i}(t)\) of each neuron \(i\), which is a key state variable and dictates neuron \(i\)'s output spike trains. Under the context of learning, we take a step forward by using the membrane potential series \(u_{i}(t)\) as a nuanced semantic representation, capturing the activation intensity of the semantic information the neuron represents.
Although the membrane potential \(u_{i}(t)\) of neuron \(i\) might exhibit variability across individual samples \(x\), its expected value over distribution \(\mathcal{D}\), denoted by \(\mathbb{E}_{x\sim\mathcal{D}}[u_{i}(t|x)]\), serves as a reliable metric, encapsulating the averaged semantic activation over \(\mathcal{D}\). As adversarial perturbations induce input distributional shifts, leading to anomalous activation or suppression of out-of-distributional semantics, this expected value offers an anchor signal, facilitating the identification of neuronal activation aberrations and bolstering network resilience.
Given network parameters \(\theta\) and the training set distribution \(\mathcal{D}\), the Neural Dynamic Signature (NDS) of neuron \(i\) is articulated as a temporal series vector \(\boldsymbol{\alpha_{i}}(\theta,\mathcal{D})\), which at time \(t\) has value:

\[\boldsymbol{\alpha_{i}}(t|\theta,\mathcal{D})=\mathbb{E}_{x\sim\mathcal{D}}[u_{i}(t|\theta,x)],\ \text{for}\;t\in[0,T] \tag{3}\]
Drawing parallels with the Electroencephalogram (EEG) in biological systems, we conceive the network-level NDS, \(\mathcal{A}_{\text{NET}}(\theta,\mathcal{D})\), as a set of individual neuron NDS vectors across the entirety of the SNN. Formally, the network-level NDS with a set of \(N\) neurons is expressed as:
\[\mathcal{A}_{\text{NET}}(\theta,\mathcal{D})=\{\boldsymbol{\alpha}_{i}(\theta,\mathcal{D})\}_{i=1}^{N} \tag{4}\]
Densely populated SNNs have \(O(N^{2})\) weight parameters. In contrast, \(\mathcal{A}_{\text{NET}}(\theta,\mathcal{D})\) scales as \(O(NT)\). Recent strides in SNN algorithms curb \(T\), the number of time steps over which the SNNs operate, to values as small as 5 or 10 [11]. Thus, the computational and storage overhead of the NDS remains well tractable.
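A sketch of how (3) could be accumulated over the training set; `forward_with_potentials` is a hypothetical hook exposing each layer's membrane-potential traces, shaped `[batch, neurons, T]`:

```python
import torch

@torch.no_grad()
def compute_nds(snn, loader):
    """Average each neuron's membrane-potential trace u_i(t) over the data,
    yielding one [neurons, T] NDS tensor per layer."""
    sums, count = {}, 0
    for x, _ in loader:
        potentials = snn.forward_with_potentials(x)  # hypothetical hook
        for name, u in potentials.items():
            sums[name] = sums.get(name, 0) + u.sum(dim=0)
        count += x.shape[0]
    return {name: s / count for name, s in sums.items()}
```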
### Threshold-Adapting Leaky Integrate-and-Fire Spiking Neural Model (TA-LIF)
We propose a Threshold-Adapting Leaky Integrate-and-Fire Spiking Neural Model (TA-LIF) incorporating a novel homeostatic self-stabilizing mechanism based on the neuron-level NDS, ensuring that out-of-distribution input semantics do not excessively trigger abnormal neuron activity. We describe how our generally applicable self-stabilizing mechanism can be integrated with the LIF model with first-order synapses. The dynamics of the LIF neuron \(i\) at time \(t\) are described by the membrane potential \(u_{i}(t)\) and governed by the following differential equation:
\[\tau_{m}\frac{du_{i}(t)}{dt}=-u_{i}(t)+I(t)-\tau_{m}s_{i}(t)V_{th}^{i}(t) \tag{5}\]
where: \(\tau_{m}\) is the membrane time constant; \(I(t)\) represents the input and is defined as the sum of the pre-synaptic currents: \(I(t)=R\ \sum_{j}w_{ij}a_{j}(t)\); \(w_{ij}\) represents the synaptic weight from neuron \(j\) to neuron \(i\); \(a_{j}(t)\) is the post-synaptic current induced by neuron \(j\) at time \(t\); \(V_{th}^{i}(t)\) is the firing threshold of neuron \(i\) at time \(t\). Neuron \(i\)'s postsynaptic spike train is:
\[s_{i}(t)=\begin{cases}+\infty&\text{if }u_{i}(t)\geq V_{th}^{i}(t)\\ 0&\text{otherwise}\end{cases} \tag{6}\]
which can also be expressed as the sum of the Dirac functions in terms of the postsynaptic spike times \(t_{i}^{f}\):
\[s_{i}(t)=\sum_{f}\delta(t-t_{i}^{f}) \tag{7}\]
With \(\tau_{s}\) denoting the synaptic time constant, the evolution of the generated postsynaptic current (PSC) \(a_{j}(t)\) is described by:
\[\tau_{s}\frac{da_{j}(t)}{dt}=-a_{j}(t)+s_{j}(t) \tag{8}\]
#### Threshold-Adapting Mechanism
In the traditional Leaky Integrate-and-Fire (LIF) model, the threshold \(V_{th}\) is retained as a constant. Nevertheless, certain generalized LIF (GLIF) models have incorporated dynamic adjustments to \(V_{th}\) through homeostatic mechanisms, mirroring observations in biological systems [1]. Notably, homeostatic mechanisms are imperative for stabilizing circuit functionality and modulating both intrinsic excitability and synaptic strength [13]. Several prevalent GLIF models are characterized by a tunable firing threshold with short-term memory, which increases with every emitted output spike and subsequently decays exponentially back to the foundational threshold [1, 1]. However, such mechanisms often do not equip neurons with the requisite information to discern between "normal" and aberrant neural activity. There is no prior study leveraging homeostasis explicitly for adversarial robustness.
The key distinctions of the proposed homeostasis are exploration of the neuron-level Neural Dynamic Signature (NDS) as the anchor signal for threshold regulation and incorporation of an error signal defined based upon the NDS into the dynamics of the firing threshold. Concretely, for neuron \(i\) in a HoSNN parameterized by weights \(\theta\), if the membrane voltage \(u_{i}(t|\theta,x)\) arises due to a canonical input \(x\) sampled from distribution \(\mathcal{D}\), and \(u_{i}(t|\theta,x^{\prime})\) arises from an adversarial input \(x^{\prime}=x+\delta x\) from distribution \(\mathcal{D}^{\prime}\), our objective is to ensure that the resulting output firing activities are homogenized by modulating the threshold. The intuition guiding this is straightforward: if a neuron undergoes abnormal activation, the threshold should be heightened to restrain spike output. Conversely, if a neuron experiences abnormal inhibition, the threshold should be lowered to promote spiking output. We hence utilize the divergence between the current membrane potential and an NDS from a proficiently trained LIF-SNN with weights \(\theta^{*}\) under identical network configurations to optimally acquire precise semantic information from \(\mathcal{D}\). This discrepancy is formalized as an error:
\[e_{i}(t|\theta,x^{\prime}):=u_{i}(t|\theta,x^{\prime})-\alpha_{i}(t|\theta^{*},\mathcal{D}) \tag{9}\]
Following this, the neuron \(i\)'s threshold for the input sample \(x^{\prime}\) is adjusted based on this differential:
\[\tau_{v}^{i}\frac{dV_{th}^{i}(t|\theta,x^{\prime})}{dt}=e_{i}(t|\theta,x^{ \prime}), \tag{10}\]
where \(\tau_{v}^{i}\) denotes the adaptive time constant pertinent to neuron \(i\). It is crucial to note that each neuron autonomously maintains its unique \(\tau_{v}^{i}\) and \(V_{th}^{i}(t)\). Equations (5), (6), (8), (9), and (10) collaboratively define our TA-LIF model. Specifically, \(V_{th}^{i}(t)\) undergoes updates at each timestep in an unsupervised fashion during every forward pass, while \(\tau_{v}^{i}\) can be adaptively learned during the backward pass, as delineated in subsequent sections.
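For illustration, a minimal explicit-Euler sketch of one TA-LIF time step combining (5), (6), (8), (9) and (10); the constants shown are illustrative, not the paper's trained values:

```python
import numpy as np

def ta_lif_step(u, v_th, a_out, I, alpha_t, dt=1.0,
                tau_m=5.0, tau_s=3.0, tau_v=20.0):
    """One Euler step of a TA-LIF neuron; alpha_t is the NDS value at this step."""
    u = u + (dt / tau_m) * (-u + I)              # leaky integration, Eq. (5)
    s = float(u >= v_th)                         # spike condition, Eq. (6)
    u = u - s * v_th                             # soft reset from the -tau_m*s*V_th term
    a_out = a_out + (dt / tau_s) * (-a_out + s)  # postsynaptic current, Eq. (8)
    v_th = v_th + (dt / tau_v) * (u - alpha_t)   # homeostatic threshold, Eqs. (9)-(10)
    return u, v_th, a_out, s
```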
Figure 1(a) illustrates the TA-LIF model. In Figure 1(b), upon receiving a typical input, an LIF neuron outputs spikes and accumulates its postsynaptic current when the membrane potential crosses the constant firing threshold. In Figure 1(c), the LIF neuron doubles its firing rate due to an increase in its input. In contrast, Figure 1(d) showcases that, even in the presence of the same input perturbation, the TA-LIF neuron equipped with a dynamic threshold can ensure a firing rate and PSC generation akin to the scenario with the uncorrupted input.
Figure 1: Illustration of TA-LIF neurons
### Dynamic Properties of TA-LIF
To make the analysis of spiking dynamics tractable, we make use of simplifying assumptions to derive several dynamic properties of the proposed TA-LIF neurons to shed light on their adversarial robustness. The more complete derivations of these properties can be found in the Appendix.
Differentiating (5), substituting in (10), and comparing the dynamics of the LIF and TA-LIF models leads to a second-order dynamical equation which provides a basis for understanding TA-LIF neurons' stability and convergence properties:

\[\tau_{m}\frac{d^{2}e_{i}(t)}{dt^{2}}+\frac{de_{i}(t)}{dt}+r\frac{\tau_{m}}{\tau_{v}}e_{i}(t)=\frac{d\Delta I_{i}(t)}{dt}, \tag{11}\]

where, for notational convenience, \(e_{i}(t|\theta,x^{\prime})\) is replaced by \(e_{i}(t)\) and signifies the differential between the membrane potential of neuron \(i\) and its Neural Dynamic Signature (NDS) at time \(t\) when sample \(x^{\prime}\) is received per (9); \(\Delta I_{i}(t):=I_{i}(t|\theta,x^{\prime})-I_{i}^{*}(t|\theta^{*},\mathcal{D})\), where \(I_{i}^{*}(t|\theta^{*},\mathcal{D}):=\mathbb{E}_{x\sim\mathcal{D}}[I_{i}(t|\theta^{*},x)]\); \(I_{i}(t)\) and \(I_{i}^{*}(t|\theta^{*},\mathcal{D})\) represent the received input current to the TA-LIF neuron under analysis and the mean input current to the corresponding LIF neuron \(i\) governed by the training data distribution \(\mathcal{D}\), respectively; \(r\) is the firing rate of the TA-LIF neuron. Notably, the restoring term \(r\frac{\tau_{m}}{\tau_{v}}e_{i}(t)\) in (11) of the designed TA-LIF dynamics confines \(e_{i}(t)\) amid input perturbations, a key component not present in LIF neurons. We present two TA-LIF dynamic properties as follows.
BIBO Stability. We first show the BIBO (Bounded Input, Bounded Output) stability [1] of TA-LIF neurons based on (11). The characteristic equation of (11) for non-silent (\(r>0\)) and non-degenerate (\(\tau_{m},\tau_{v}>0\)) TA-LIF neurons, and its roots, are:

\[\tau_{m}s^{2}+s+r\frac{\tau_{m}}{\tau_{v}}=0 \tag{12}\]
\[s_{1,2}=\frac{-1\pm\sqrt{\Delta}}{2\tau_{m}},\;\Delta=1-4r\frac{\tau_{m}^{2}} {\tau_{v}} \tag{13}\]
* For \(\Delta>0\): Both roots \(s_{1,2}\) are real and negative.
* For \(\Delta=0\): There's a single negative real root.
* For \(\Delta<0\): Both roots are complex with negative real parts.
For a second-order system to be BIBO stable, the roots of its characteristic equation must be negative reals or have negative real parts, which is clearly the case for the TA-LIF model under the above three situations, affirming the BIBO stability of (11). BIBO stability signifies that with a bounded driving input to system (11), the deviation of the TA-LIF neuron's membrane potential from its targeted NDS is also bounded, demonstrating well-controlled growth of the error \(e_{i}(t)\).
Under white noise. To shed more light on the dynamic characteristics of TA-LIF, we follow the common practice [1, 1, 2, 3] of approximating \(\Delta I(t)\) as a Wiener process, which effectively represents small, independent, and random perturbations. Consequently, the driving force on the right-hand side of (11) can be approximated by white noise \(\xi(t)\) with zero mean and variance \(\sigma^{2}\). By the theory of stochastic differential equations [1], this leads to:
\[\frac{d\Delta I(t)}{dt}\sim\xi(t)\implies\left\{\begin{array}{rl}&\langle e _{i}^{2}(t)\rangle_{LIF}=O(\sigma^{2}t)\\ &\langle e_{i}^{2}(t)\rangle_{TA-LIF}=O(\sigma^{2})\end{array}\right. \tag{14}\]
Importantly, the mean square error \(\langle e_{i}^{2}(t)\rangle_{TA-LIF}\) of the TA-LIF neuron is bounded by \(O(\sigma^{2})\) and does not grow with time. In stark contrast, subject to the same input perturbation, the corresponding LIF neuron's mean square error \(\langle e_{i}^{2}(t)\rangle_{LIF}\) may grow unbounded with time, revealing its potential vulnerability to adversarial attacks.
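As a heuristic sanity check of the LIF-side scaling in (14) (our addition, not the paper's appendix derivation), the first-order LIF error dynamics can be solved explicitly, neglecting the difference in reset terms:

```latex
\tau_m \dot{e}_i(t) + e_i(t) = \Delta I(t)
\;\Longrightarrow\;
e_i(t) = \frac{1}{\tau_m}\int_0^t e^{-(t-t')/\tau_m}\,\Delta I(t')\,dt'.
% With \Delta I(t) a Wiener process, \langle \Delta I(t_1)\Delta I(t_2)\rangle = \sigma^2 \min(t_1,t_2):
\langle e_i^2(t)\rangle
 = \frac{\sigma^2}{\tau_m^2}\int_0^t\!\!\int_0^t
   e^{-(2t-t_1-t_2)/\tau_m}\,\min(t_1,t_2)\,dt_1\,dt_2
 \;\approx\; \sigma^2 t \quad (t \gg \tau_m),
```

recovering the \(O(\sigma^{2}t)\) growth; the restoring term \(r\frac{\tau_{m}}{\tau_{v}}e_{i}(t)\) in (11) is precisely what removes this secular growth for TA-LIF.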
### Homeostatic SNNs (HoSNNs)
We introduce the homeostatic SNNs (HoSNNs), which deploy TA-LIF neurons as the basic compute units to leverage their noise immunity as shown in Figure 2. Architecturally, HoSNNs can be constructed by adopting typical connectivity such as feedforward dense or convolutional layers.
In addition to the synaptic weights \(\mathbf{\theta}\), the learnable parameters of a HoSNN also include a distinct time-invariant \(\tau_{v}\) for each neuron, which specifies the dynamics of the time-varying firing threshold per (25). Given the NDS \(\mathcal{A}_{\text{NET}}(\mathbf{\theta}^{*},\mathcal{D})\) defined in (19), the HoSNN's optimization problem can be described as:
\[\underset{\mathbf{\theta},\mathbf{\tau}_{v}}{\text{min}}\mathbb{E}_{(x,y)\sim \mathcal{T}}[\mathcal{L}\left(y,f\left(x;\mathbf{\theta},\mathbf{\tau}_{v}|\mathcal{A} _{\text{NET}}(\mathbf{\theta}^{*},\mathcal{D})\right)\right)] \tag{15}\]
where \(x\) and \(y\) are an input/label pair sampled from a training distribution \(\mathcal{T}\), \(f(x)\) is the HoSNN output, \(\mathcal{L}(y,f(x))\) is the training loss. In practice, the training data can be chosen to include the clean dataset, an adversarial example dataset, or a combination of the two.
The network-level NDS \(\mathcal{A}_{\text{NET}}\), acting as anchor signals for all TA-LIF neurons, is acquired from a separately well-trained LIF based SNN with the same architecture on \(\mathcal{D}\) to capture the semantic information.
In principle, a backpropagation-based training algorithm such as BPTT (Wu et al., 2018), BPTR (Lee et al., 2020), or TSSL-BP (Zhang and Li, 2020) can be applied to optimize the network based on (15), during which each \(\tau_{v}\) is constrained to be non-negative. Optimizing the firing threshold time constants \(\mathbf{\tau_{v}}\) of all TA-LIF neurons adds insignificant overhead given the dominance of the weight parameter count.

Figure 2: Illustration of the HoSNN
## Experiments
In this section, we compare the adversarial robustness of HoSNNs with that of traditional LIF SNNs. We train both networks on the original dataset and with low-intensity FGSM adversarial training, respectively, and evaluate their robustness. The former evaluates the intrinsic robustness of the network, and the latter evaluates transferability and scalability after adversarial training. We further conduct an ablation study on the key parameter \(\tau_{v}\) to clarify the underlying mechanism of HoSNN.
### Experimental Setup
We evaluated our approach across three benchmark datasets: MNIST (LeCun, Cortes, and Burges, 1998), CIFAR10 and CIFAR100 (Krizhevsky, 2009). For our experiments, we use LeNet (LeCun et al., 1998) for MNIST, and VGG9-like (Simonyan and Zisserman, 2015) architectures for CIFAR10 and CIFAR100.
For robustness assessment, we incorporated different attack methodologies, including FGSM, RFGSM, PGD, and BIM. The key parameters for these attacks are as follows: for both CIFAR10 and CIFAR100, the attack budget is set to \(\epsilon=8/255\). For iterative attacks such as PGD, we adopted parameters \(\alpha=0.01\) and \(steps=7\), in accordance with (Ding et al., 2022). Regarding adversarial training, we use FGSM adversarial training with \(\epsilon\) of 2/255 on CIFAR10 as (Ding et al., 2022) and 4/255 on CIFAR100 as (Kundu, Pedram, and Beerel, 2021).
For all HoSNN experiments, we initially trained an LIF SNN with an identical architecture on the corresponding clean dataset to derive the NDS. Both HoSNN and SNN employed the BPTT learning algorithm (\(T=5\)), leveraging a sigmoid surrogate gradient (Xu et al., 2022; Neftci, Mostafa, and Zenke, 2019; Wu et al., 2018). The learning rate for \(\tau_{v}\) was set at 1/10 of the learning rate designated for weights, ensuring hyperparameter stability during training. Additionally, we ensured \(\tau_{v}\) is non-negative during optimization to allow degradation from TA-LIF to LIF. For brevity, in all tables, HoSNNs are labeled as "Ho", adversarial training as "Adv", CIFAR-10 as "C-10", and CIFAR-100 as "C-100". Comprehensive settings are detailed in the appendix.
Black Box Attack. In this section, we evaluate the robustness of HoSNNs against black-box attacks. Both the SNN and the HoSNN are trained using low-intensity FGSM adversarial training. We employ a separately trained SNN with an identical architecture to generate the attack samples. The results in Table 2 show that on CIFAR-10, when trained with low-strength FGSM, HoSNNs exhibit significant resistance against adversarial attacks, outperforming traditional SNNs in the FGSM and PGD scenarios with improvements of 16.57% and 27.92%, respectively. Likewise, on CIFAR-100, HoSNN shows a similar but weaker robustness improvement. Such robustness underscores the effective incorporation of the homeostatic mechanism into SNNs, making them considerably more robust against adversarial intrusions in black-box scenarios.
Comparison with Other Works. To assess the effectiveness of our method, we compare it against recent state-of-the-art approaches under white-box FGSM and PGD attacks on CIFAR-10 and CIFAR-100 in Table 3. Remarkably, even without adversarial training, our method achieved a clean accuracy of \(91.62\%\), an FGSM defense accuracy of \(72.60\%\), and a PGD defense accuracy of \(54.19\%\) on CIFAR-10. These results surpass those of other methods that employ adversarial training. With adversarial training incorporated, our model further enhanced its robustness. Especially on CIFAR-100, our adversarially trained model achieved the highest defense accuracies of \(27.18\%\) and \(18.47\%\) for FGSM and PGD, respectively.
### Ablation studies on \(\tau_{v}\)
To better understand our model, we conducted a further analysis of \(\tau_{v}\). This helps elucidate the distinct features of TA-LIFs and HoSNNs. Figure 3(a) displays the accuracy under PGD attacks of three HoSNNs with distinct initial \(\tau_{v}\) values. As anticipated, a smaller initial \(\tau_{v}\) value, indicating greater noise filtering capability, leads to superior adversarial robustness.
To gain insight into the noise suppression mechanism of HoSNNs, we evaluated the offset of each layer's postsynaptic current (PSC) as illustrated in Figure 3(b). For each neuron \(i\) in layer \(l\), we computed its mean PSC sequence \(\overline{a_{i}(t)}\) using the clean CIFAR10 and its mean PSC under the white-box PGD attack as \(\overline{a_{i}^{\prime}(t)}\). The relative offset of layer \(l\) can be quantified by: \(\frac{\sum_{i,t}|\overline{a_{i}(t)}-\overline{a_{i}^{\prime}(t)}|}{\sum_{i,t} |\overline{a_{i}(t)}|}\)
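A direct implementation of this offset metric is a one-liner; the sketch below assumes the mean PSCs have been collected as (neurons \(\times\) time) arrays, which is an assumed data layout for this illustration.

```python
import numpy as np

def relative_offset(psc_clean, psc_attack):
    """Layer-wise relative PSC offset per the formula above; inputs are
    (neurons x time) arrays of mean PSCs (an assumed data layout)."""
    return np.abs(psc_clean - psc_attack).sum() / np.abs(psc_clean).sum()
```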
In Figure 3(b), as we expected, standard SNNs under PGD attacks amplify minor perturbations from the convolutional layers to the fully connected layers, causing output misclassifications. For HoSNNs, error signals are magnified in the early convolutional layers at a similar magnitude, but as we progress deeper, the TA-LIF neuronal activity offset decreases, reaching its minimum by the penultimate layer and reducing classification errors. This behavior aligns with the common understanding of CNNs [11]: the initial convolutional layers handle low-level features, where individual neurons may fail to discern between adversarial noise and standard input. As the depth increases, high-level semantic features emerge, and TA-LIF becomes instrumental in filtering out deviant semantic information. It is worth noting that while different \(\tau_{v}\) values effectively reduce the neuronal activity offset in the penultimate layer, residual perturbations can still convey potent attack information, as shown by the curve \(\tau_{v}^{init}=20\). This observation is consistent with prior research [10], which posits that certain adversarial signals, termed "non-robust" features, arise from inherent patterns in the data distribution, rendering them challenging to distinguish.
## Discussions
Inspired by biological homeostasis, we designed the TA-LIF neuron with a threshold adaptation mechanism and introduced HoSNN, an SNN that is inherently robust and further enhances the effectiveness of adversarial training, achieving state-of-the-art robustness. Yet, we still face challenges with HoSNNs such as increased computational expenses, reduced clean accuracy, and remaining susceptibilities to adversarial attacks. Meanwhile, we recognize the vast, yet untapped, potential of biological homeostasis in neural network research. In particular, the relationship between the properties of individual neurons and the overall performance of the network warrants further exploration. While our work paves the way, there is still much territory to be explored.
\begin{table}
\begin{tabular}{c c c c c c c} \hline Data & Ho & Clean & FGSM & RFGSM & PGD & BIM \\ \hline C-10 & \(\times\) & 91.85 & 54.59 & 72.17 & 47.08 & 44.42 \\ C-10 & ✓ & 90.30 & **71.16** & **81.44** & **75.00** & **73.48** \\ \hline C-100 & \(\times\) & 68.72 & 47.49 & 57.96 & 50.25 & 51.77 \\ C-100 & ✓ & 65.37 & **52.32** & **58.98** & **54.19** & **55.01** \\ \hline \end{tabular}
\end{table}
Table 2: Black-box attack results of SNNs and HoSNNs
Figure 3: Network analysis. Three different HoSNNs were trained on CIFAR10 with initial \(\tau_{v}\) values of 5, 10, and 20, respectively
\begin{table}
\begin{tabular}{c c c c c} \hline Data & BPTT Attack & Clean & FGSM & PGD \\ \hline \multirow{5}{*}{C-10} & Sharmin et al. (2020) & 89.30 & 15.00 & 3.80 \\ & Kundu et al. (2021) & 87.50 & 38.00 & 9.10 \\ & Ding et al. (2022) & 90.74 & 45.23 & 21.16 \\ & Our work (w/o adv) & **91.62** & **72.60** & **54.19** \\ & Our work (with adv) & 90.30 & **75.22** & **68.99** \\ \hline \multirow{4}{*}{C-100} & Sharmin et al. (2020) & 64.40 & 15.50 & 6.30 \\ & Kundu et al. (2021) & 65.10 & 22.00 & 7.50 \\ & Ding et al. (2022) & 70.89 & 25.86 & 10.38 \\ & Our work (with adv) & 65.37 & **27.18** & **18.47** \\ \hline \end{tabular}
\end{table}
Table 3: Comparison with other works
**Appendix**
We mainly present the derivation of the second-order dynamic equation of TA-LIF, the detailed experimental setup, and additional data in this supplementary material.
**Derivation of TA-LIF Dynamic Equation**
In this section, we derive the approximate second-order dynamic equations of the threshold-adapting leaky integrate-and-fire (TA-LIF) neurons and subsequently analyze them.
### LIF Dynamics
To facilitate our discussion, let's commence by presenting the first-order dynamic equations of the LIF neuron \(i\) at time \(t\):
\[\tau_{m}\frac{du_{i}(t)}{dt}=-u_{i}(t)+I_{i}(t)-\tau_{m}s_{i}(t)V_{th} \tag{16}\]
The input current is defined as
\[I_{i}(t)=R\;\sum_{j}w_{ij}a_{j}(t) \tag{17}\]
The spiking behavior \(s_{i}(t)\) is defined as:
\[s_{i}(t)=\begin{cases}+\infty&\text{if }u_{i}(t)\geq V_{th}\\ 0&\text{otherwise}\end{cases}=\sum_{f}\delta(t-t_{i}^{f}) \tag{18}\]
And the post-synaptic current dynamics are given by:
\[\tau_{s}\frac{da_{j}(t)}{dt}=-a_{j}(t)+s_{j}(t) \tag{19}\]
Where:
* \(\tau_{m}\): Represents the membrane time constant.
* \(I_{i}(t)\): Denotes the input, which is the summation of the pre-synaptic currents.
* \(R\): Is the input resistance scaling the synaptic input in (17).
* \(w_{ij}\): Stands for the synaptic weight from neuron \(j\) to neuron \(i\).
* \(a_{j}(t)\): Refers to the post-synaptic current induced by neuron \(j\) at time \(t\).
* \(V_{th}\): Is the static firing threshold.
* \(t_{i}^{f}\): Indicates the \(f\)-th spike time of neuron \(i\).
* \(\tau_{s}\): Is the synaptic time constant.
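For reference, a minimal discrete-time (Euler) sketch of the LIF dynamics (16)-(19) follows; the step size and constants are illustrative assumptions.

```python
import numpy as np

# Minimal discrete-time sketch of the LIF dynamics (16)-(19).
def lif_step(u, a_pre, W, tau_m=5.0, R=1.0, V_th=1.0, dt=1.0):
    I = R * W @ a_pre                      # input current, eq. (17)
    u = u + (dt / tau_m) * (-u + I)        # leaky integration, eq. (16)
    s = (u >= V_th).astype(float)          # spike generation, eq. (18)
    u = u - s * V_th                       # reset term -tau_m * s(t) * V_th
    return u, s

def psc_step(a, s, tau_s=3.0, dt=1.0):
    return a + (dt / tau_s) * (-a + s)     # post-synaptic current, eq. (19)

# One step for 3 neurons driven by 2 presynaptic PSCs:
u, a = np.zeros(3), np.array([0.8, 0.2])
W = np.ones((3, 2))
u, s = lif_step(u, a, W)
```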
### Neural Dynamic Signature
Let's begin by reviewing the definition of the Neural Dynamic Signature (NDS). Given a data instance \(x\) sampled from distribution \(\mathcal{D}\), the NDS of neuron \(i\), contingent upon network parameters \(\theta\) and the training set distribution \(\mathcal{D}\), can be represented as a temporal series vector \(\boldsymbol{\alpha_{i}}(\theta,\mathcal{D})\). Specifically, at time \(t\), it holds the value:
\[\boldsymbol{\alpha_{i}}(t|\theta,\mathcal{D})\coloneqq\mathbb{E}_{x\sim \mathcal{D}}[u_{i}(t|\theta,x)],\;\text{for }t\in[0,T] \tag{20}\]
To derive the dynamics of NDS, we start by revisiting Equation (16), rewriting it with respect to \(x,\theta\)
\[\tau_{m}\frac{du_{i}(t|\theta,x)}{dt}=-u_{i}(t|\theta,x)+I_{i}(t|\theta,x)- \tau_{m}s_{i}(t|\theta,x)V_{th} \tag{21}\]
For the convenience of dynamic analysis, we choose to approximate the discontinuous Dirac function term \(s_{i}(t|\theta,x)\) with the average firing rate \(r_{i}(\theta,x)\) of neuron \(i\). The average firing rate is calculated by
\[r_{i}(\theta,x):=\int_{0}^{T}\frac{s_{i}(t|\theta,x)}{T}dt \tag{22}\]
This way, they both have the same integral value over time: the number of neuron firings. Substituting Equation (22) into Equation (21) and computing the expectation on both sides, we have:
\[\tau_{m}\mathbb{E}_{x\sim\mathcal{D}}[\frac{du_{i}(t|\theta,x)}{dt}]=- \mathbb{E}_{x\sim\mathcal{D}}[u_{i}(t|\theta,x)]+\mathbb{E}_{x\sim\mathcal{D} }[I_{i}(t|\theta,x)]-\tau_{m}\mathbb{E}_{x\sim\mathcal{D}}[r_{i}(\theta,x)]V_{th} \tag{23}\]
Here, we denote the average input current of neuron \(i\) over the entire dataset as:
\[I_{i}^{*}(t|\theta,\mathcal{D})\coloneqq\mathbb{E}_{x\sim\mathcal{D}}[I_{i}(t| \theta,x)] \tag{24}\]
and the average spike frequency of neuron \(i\) over the entire dataset as:
\[r_{i}^{*}(\theta,\mathcal{D})\coloneqq\mathbb{E}_{x\sim\mathcal{D}}[r_{i}( \theta,x)] \tag{25}\]
With the definition from Equation (20), the dynamics of NDS can be expressed as:
\[\tau_{m}\frac{d\mathbf{\alpha_{i}}(t|\theta,\mathcal{D})}{dt}=-\mathbf{\alpha_{i}}(t| \theta,\mathcal{D})+I_{i}^{*}(t|\theta,\mathcal{D})-\tau_{m}r_{i}^{*}(\theta, \mathcal{D})V_{th} \tag{26}\]
As mentioned in the main text, we usually expect NDS to have precise semantic information of the distribution \(\mathcal{D}\). So NDS should be obtained through a well-trained model with optimal weight parameter \(\theta^{*}\). For clarity in the following sections, we use \(\theta^{*}\) to represent the actually used NDS:
\[\tau_{m}\frac{d\mathbf{\alpha_{i}}(t|\theta^{*},\mathcal{D})}{dt}=-\mathbf{\alpha_{i}} (t|\theta^{*},\mathcal{D})+I_{i}^{*}(t|\theta^{*},\mathcal{D})-\tau_{m}r_{i}^ {*}(\theta^{*},\mathcal{D})V_{th} \tag{27}\]
### TA-LIF Dynamics
In this section, we delve deeper into the dynamical equations governing the TA-LIF neuron and derive its second-order dynamic equation. The membrane dynamics are:
\[\tau_{m}\frac{du_{i}(t)}{dt}=-u_{i}(t)+I_{i}(t)-\tau_{m}s_{i}(t)V_{th}^{i}(t) \tag{28}\]
The synaptic input \(I_{i}(t)\), the spike generation function \(s_{i}(t)\), and the post-synaptic current dynamics of TA-LIF are defined the same as in (17), (18), and (19). For a specific network parameter \(\theta\) and a sample \(x^{\prime}\) drawn from \(\mathcal{D}^{\prime}\), the dynamic equation governing the threshold \(V_{th}^{i}(t)\) is:
\[\tau_{v}^{i}\frac{dV_{th}^{i}(t|\theta,x^{\prime})}{dt}=e_{i}(t|\theta,x^{ \prime}), \tag{29}\]
where the error signal, utilizing the NDS as given in (20), is defined as:
\[e_{i}(t|\theta,x^{\prime})\coloneqq u_{i}(t|\theta,x^{\prime})-\mathbf{\alpha_{i}} (t|\theta^{*},\mathcal{D}) \tag{30}\]
Applying the continuity approximation for the Dirac function as per (22) and incorporating the conditional dependency on \(\theta\) and \(x^{\prime}\), we rewrite the dynamics of TA-LIF (28) as:
\[\tau_{m}\frac{du_{i}(t|\theta,x^{\prime})}{dt}=-u_{i}(t|\theta,x^{\prime})+I_{ i}(t|\theta,x^{\prime})-\tau_{m}r_{i}(\theta,x^{\prime})V_{th}^{i}(t|\theta,x^{ \prime}) \tag{31}\]
Subtracting (27) from (31) and employing (30), and denoting
\[\Delta I_{i}(t|\theta,x^{\prime})\coloneqq I_{i}(t|\theta,x^{\prime})-I_{i}^ {*}(t|\theta^{*},\mathcal{D}) \tag{32}\]
we derive:
\[\tau_{m}\frac{de_{i}(t|\theta,x^{\prime})}{dt}=-e_{i}(t|\theta,x^{\prime})+ \Delta I_{i}(t|\theta,x^{\prime})-\tau_{m}[r_{i}(\theta,x^{\prime})V_{th}^{i} (t|\theta,x^{\prime})-r_{i}^{*}(\theta^{*},\mathcal{D})V_{th}] \tag{33}\]
Differentiating (33) with respect to time and utilizing the threshold dynamics from (29), we obtain:
\[\tau_{m}\frac{d^{2}e_{i}(t|\theta,x^{\prime})}{dt^{2}}=-\frac{de_{i}(t|\theta, x^{\prime})}{dt}+\frac{d\Delta I_{i}(t|\theta,x^{\prime})}{dt}-\frac{\tau_{m}}{ \tau_{v}^{i}}r_{i}(\theta,x^{\prime})e_{i}(t|\theta,x^{\prime}) \tag{34}\]
For succinctness, we omit dependencies on \(\theta\) and \(x^{\prime}\), resulting in the TA-LIF dynamics of the main text, (11):
\[\tau_{m}\frac{d^{2}e_{i}(t)}{dt^{2}}+\frac{de_{i}(t)}{dt}+r_{i}\frac{\tau_{m}} {\tau_{v}^{i}}e_{i}(t)=\frac{d\Delta I_{i}(t)}{dt} \tag{35}\]
For the standard LIF neurons where \(\tau_{v}^{i}\rightarrow\infty\), the equation simplifies to:
\[\tau_{m}\frac{d^{2}e_{i}(t)}{dt^{2}}+\frac{de_{i}(t)}{dt}=\frac{d\Delta I_{i}( t)}{dt} \tag{36}\]
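Combining (28)-(30), a single TA-LIF update step can be sketched in discrete time as follows, extending the hypothetical lif_step sketch above; nds_t holds precomputed NDS values \(\boldsymbol{\alpha}_{i}(t)\) and is an assumption of this illustration.

```python
# Sketch of one TA-LIF update per eqs. (28)-(30); numpy arrays assumed,
# following the lif_step sketch above.
def talif_step(u, V_th, nds_t, a_pre, W, tau_m=5.0, tau_v=5.0, R=1.0, dt=1.0):
    I = R * W @ a_pre                      # input current, eq. (17)
    u = u + (dt / tau_m) * (-u + I)        # membrane integration, eq. (28)
    s = (u >= V_th).astype(float)          # spike against adaptive threshold
    u = u - s * V_th                       # reset term of eq. (28)
    e = u - nds_t                          # deviation from the NDS, eq. (30)
    V_th = V_th + (dt / tau_v) * e         # threshold adaptation, eq. (29)
    return u, V_th, s
```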
### Dynamic Stability Analysis
In this section, we analyze the stability of (35) and (36) to explore the influence of our dynamic threshold mechanism on the noise suppression ability of the TA-LIF neuron.
### BIBO Stability of Equation (35)
_Characteristic Equation:_ We first show the BIBO (Bounded Input, Bounded Output) stability (Ogata 2010) of TA-LIF neurons based on (35). The characteristic equation of (35) for non-silent (\(r_{i}>0\)) and non-degenerating (\(\tau_{m},\tau_{v}^{i}>0\)) TA-LIF neurons is:
\[\tau_{m}s^{2}+s+r_{i}\frac{\tau_{m}}{\tau_{v}^{i}}=0 \tag{37}\]
and its roots are
\[s_{1,2}=\frac{-1\pm\sqrt{\Delta}}{2\tau_{m}},\;\Delta=1-4r_{i}\frac{\tau_{m}^ {2}}{\tau_{v}^{i}} \tag{38}\]
* For \(\Delta>0\): Both roots \(s_{1,2}\) are real and negative.
* For \(\Delta=0\): There is a repeated negative real root.
* For \(\Delta<0\): Both roots are complex with negative real parts.
For a second-order system to be BIBO stable, the roots of its characteristic equation must be negative reals or have negative real parts, which is clearly the case for the TA-LIF model in all three situations above, affirming the BIBO stability of (35). BIBO stability signifies that with a bounded driving input to system (35), the deviation of the TA-LIF neuron's membrane potential from its targeted NDS is also bounded, demonstrating well-controlled growth of the error \(e_{i}(t)\).
### Stability of Equation (35) Under White Noise
To elucidate the dynamic characteristics of TA-LIF further, we adopt the prevalent method (Abbott and Van Vreeswijk 1993; Brunel 2000; Gerstner et al. 2014; Renart, Brunel, and Wang 2004), approximating \(\Delta I(t)\) with a Wiener process. This approximation effectively represents small, independent, and random perturbations. Hence, the driving force in equation (35) \(\frac{d\Delta I_{i}(t)}{dt}\) can be modeled by a Gaussian white noise \(F(t)\), leading to the well-established Langevin equation in stochastic differential equations theory (Kloeden et al. 1992; Van Kampen 1992; Risken 1996):
\[\frac{d^{2}e_{i}(t)}{dt^{2}}+\frac{1}{\tau_{m}}\frac{de_{i}(t)}{dt}+\frac{r_{ i}}{\tau_{v}^{i}}e_{i}(t)=F(t) \tag{39}\]
Denoting \(\left\langle\cdot\right\rangle\) as averaging over time, \(F(t)\) is a Gaussian white noise with variance \(\sigma^{2}\) which satisfies:
\[\left\{\begin{array}{ll}\left\langle F(t)\right\rangle&=0,\\ \left\langle F\left(t_{1}\right)F\left(t_{2}\right)\right\rangle&=\sigma^{2} \delta\left(t_{1}-t_{2}\right),\\ \left\langle F\left(t_{1}\right)F\left(t_{2}\right)\cdots F\left(t_{2n+1} \right)\right\rangle&=0,\\ \left\langle F\left(t_{1}\right)F\left(t_{2}\right)\cdots F\left(t_{2n} \right)\right\rangle&=\sum_{\text{all pairs}}\left\langle F\left(t_{i}\right)F \left(t_{j}\right)\right\rangle\cdot\left\langle F\left(t_{k}\right)F\left(t _{l}\right)\right\rangle\cdots\end{array}\right. \tag{40}\]
where the sum has to be taken over all the different ways in which one can divide the \(2n\) time points \(t_{1}\cdots t_{2n}\) into \(n\) pairs. Under this assumption (40), the solution of the Langevin equation (39) is (Uhlenbeck and Ornstein 1930; Wang and Uhlenbeck 1945):
\[\left\langle\left[\Delta e_{i}(t)\right]^{2}\right\rangle=\frac{\tau_{m}\tau_ {v}^{i}\sigma^{2}}{2r_{i}}\left[1-e^{\frac{-t}{\tau_{m}}}\left(\cos\left( \omega_{1}t\right)+\frac{\sin\left(\omega_{1}t\right)}{2\omega_{1}\tau_{m}} \right)\right]=O(\sigma^{2}) \tag{41}\]
where \(\Delta e_{i}(t)=e_{i}(t)-\left\langle e_{i}(t)\right\rangle\) and \(\omega_{1}=\sqrt{\frac{r_{i}}{\tau_{v}^{i}}-\frac{1}{4\tau_{m}^{2}}}\). While under the same assumptions (40), equation (36) yields:
\[\left\langle\left[\Delta e_{i}(t)\right]^{2}\right\rangle=\frac{\tau_{m}^{2} \sigma^{2}}{2}\left(t-\tau_{m}+\tau_{m}e^{-t/\tau_{m}}\right)=O(\sigma^{2}t) \tag{42}\]
Obviously, Gaussian white noise with zero mean (40) yields \(\left\langle e_{i}(t)\right\rangle=0\) and thus \(\left\langle\left[\Delta e_{i}(t)\right]^{2}\right\rangle=\left\langle e_{i}^{2}(t)\right\rangle\). Hence,
\[\frac{d\Delta I(t)}{dt}\sim F(t)\implies\left\{\begin{array}{ll}\left\langle e _{i}^{2}(t)\right\rangle_{LIF}=O(\sigma^{2}t)\\ \left\langle e_{i}^{2}(t)\right\rangle_{TA-LIF}=O(\sigma^{2})\end{array}\right. \tag{43}\]
Significantly, the mean square error \(\langle e_{i}^{2}(t)\rangle_{TA-LIF}\) of the TA-LIF neuron remains bounded to \(O(\sigma^{2})\) and doesn't increase over time. In contrast, under identical input perturbations, the mean square error \(\langle e_{i}^{2}(t)\rangle_{LIF}\) of the LIF neuron may grow unbounded with time, highlighting its potential susceptibility to adversarial attacks.
## Experiment Setting Details
Our evaluation encompasses three benchmark datasets: MNIST, CIFAR10, and CIFAR100. For experimental setups, we deploy:
* LeNet (15C5-P-40C5-P-300) for MNIST.
* VGGs (128C3-P-256C3-P-512C3-1024C3-512C3-1024-512) for CIFAR10 and (128C3-P-256C3-P-512C3-1024C3-512C3-1024-1024) for CIFAR100.
Here, the notation 15C5 represents a convolutional layer with 15 filters of size \(5\times 5\), and P stands for a pooling layer using \(2\times 2\) filters. For the CIFAR10 and CIFAR100 datasets, we incorporated batch normalization layers and dropout mechanisms to mitigate overfitting and elevate the performance of the deep networks. In our experiments with MNIST and CIFAR10, the output spike train of LIF neurons was retained to compute the kernel loss, as described in (Zhang and Li, 2020). For CIFAR100, we directly employed softmax outputs for better performance.
We utilized the Adam optimizer with betas set to (0.9, 0.999) and a learning rate of \(5\times 10^{-4}\) with a cosine annealing learning-rate scheduler (\(T=\) epochs). We set the batch size to 64 and trained for 200 epochs. All images were transformed into currents to serve as network input. Our code is adapted from (Zhang and Li, 2020).
For all HoSNN experiments, a preliminary training phase was carried out using an LIF SNN, sharing the same architecture, on the clean datasets to deduce the NDS. Hyperparameters for LIF and TA-LIF neurons included a simulation time \(T=5\), a membrane time constant \(\tau_{m}=5\), and a synaptic time constant \(\tau_{s}=3\). For the TA-LIF results in the main text, we assigned \(\tau_{v}\) initialization values of 1.5, 5, and 5 for MNIST, CIFAR10, and CIFAR100, respectively. All neurons began with an initial threshold of 1. The step function was approximated using the sigmoid \(\sigma(x)=\frac{1}{1+e^{-5x}}\), and the BPTT learning algorithm was employed. For TA-LIF neurons, the learning rate for \(\tau_{v}\) was set at a tenth of the rate designated for weights, ensuring hyperparameter stability during training. We also constrained \(\tau_{v}\) to remain non-negative during optimization, ensuring a possible transition from TA-LIF to LIF.
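For concreteness, the surrogate-gradient step described above can be sketched in PyTorch as follows; this illustrates the standard custom-autograd pattern with the stated sigmoid \(\sigma(x)=1/(1+e^{-5x})\), not the authors' exact code.

```python
import torch

class SpikeFn(torch.autograd.Function):
    """Heaviside spike with the sigmoid surrogate gradient above,
    sigma(x) = 1 / (1 + exp(-5x)); a sketch, not the authors' code."""
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return (x >= 0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        sig = torch.sigmoid(5.0 * x)
        return grad_out * 5.0 * sig * (1.0 - sig)  # d sigma / dx

spike = SpikeFn.apply  # used in place of the non-differentiable step
```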
Regarding adversarial attacks, we use an array of attack strategies, including FGSM, RFGSM, PGD, and BIM. For both CIFAR10 and CIFAR100, we allocated an attack budget of \(\epsilon=8/255\). For iterative schemes like PGD, we set \(\alpha=0.01\) and \(steps=7\), aligning with the recommendations in (Ding et al., 2022). For the adversarial training phase, FGSM training was used with \(\epsilon\) values of 2/255 for CIFAR10 (as per (Ding et al., 2022)) and 4/255 for CIFAR100, following (Kundu, Pedram, and Beerel, 2021).
## Supplementary Data
In this supplementary section, we present additional results on the clean CIFAR10 and CIFAR100 datasets to further elucidate the inherent robustness of HoSNNs, as illustrated in Figures 4(a) and 4(b). For both datasets, three distinct HoSNNs were trained separately, each initialized with a different value of \(\tau_{v}\). Their performance is then compared with the LIF SNN model when subjected to white-box PGD attacks. The parameters for the PGD attack are the same as those described in the main manuscript, with 7 iteration steps and \(\alpha=\epsilon/3\). Our findings highlight that HoSNNs require a considerably higher perturbation budget to induce a performance drop similar to that observed in LIF SNNs. For instance, on the CIFAR10 dataset, a PGD attack with \(\epsilon=8/255\) reduces the LIF SNN's classification accuracy to virtually zero, whereas the HoSNN retains a classification accuracy of up to 54.19%. Similarly, for the CIFAR100 dataset, a PGD attack with \(\epsilon=2/255\) results in a meager accuracy of 11.62% for the LIF SNN, while the HoSNN achieves an accuracy of up to 36.89%.
2308.06277 | Descriptive complexity for neural networks via Boolean networks | We investigate the descriptive complexity of a class of neural networks with
unrestricted topologies and piecewise polynomial activation functions. We
consider the general scenario where the running time is unlimited and
floating-point numbers are used for simulating reals. We characterize these
neural networks with a rule-based logic for Boolean networks. In particular, we
show that the sizes of the neural networks and the corresponding Boolean rule
formulae are polynomially related. In fact, in the direction from Boolean rules
to neural networks, the blow-up is only linear. We also analyze the delays in
running times due to the translations. In the translation from neural networks
to Boolean rules, the time delay is polylogarithmic in the neural network size
and linear in time. In the converse translation, the time delay is linear in
both factors. We also obtain translations between the rule-based logic for
Boolean networks, the diamond-free fragment of modal substitution calculus and
a class of recursive Boolean circuits where the number of input and output
gates match. | Veeti Ahvonen, Damian Heiman, Antti Kuusisto | 2023-08-01T15:43:51Z | http://arxiv.org/abs/2308.06277v2 | # Descriptive complexity for neural networks via Boolean networks
###### Abstract
We investigate the descriptive complexity of a class of neural networks with unrestricted topologies and piecewise polynomial activation functions. We consider the general scenario where the running time is unlimited and floating-point numbers are used for simulating reals. We characterize a class of these neural networks with a rule-based logic for Boolean networks. In particular, we show that the sizes of the neural networks and the corresponding Boolean rule formulae are polynomially related. In fact, in the direction from Boolean rules to neural networks, the blow-up is only linear. We also analyze the delays in running times due to the translations. In the translation from neural networks to Boolean rules, the time delay is polylogarithmic in the neural network size and linear in time. In the converse translation, the time delay is linear in both factors.
email: firstname.lastname@tuni.fi
## 1 Introduction
This article investigates the descriptive complexity of neural networks, giving a logical characterization for a class of general neural networks with unrestricted network topologies and unlimited running time. The characterization is based on _Boolean networks_[3, 11]. Boolean networks have a long history, originating from the work of Kauffman in the 1960s [8]. Current applications include a wide variety of research relating to topics varying from biology and medicine to telecommunications and beyond. For recent work, see, e.g., [13, 12].
Boolean networks are usually not defined via a logical syntax, but it is easy to give them one as follows. Consider the set \(\mathcal{T}=\{X_{1},\ldots,X_{k}\}\) of Boolean variables. A _Boolean rule_ over \(\mathcal{T}\) is an expression of the form \(X_{i}\colon-\varphi\) where \(X_{i}\in\mathcal{T}\) is a _head predicate_ and \(\varphi\) is a Boolean formula over the syntax \(\varphi::=\top\mid X_{j}\mid\neg\varphi\mid\varphi\land\varphi\), where \(X_{j}\in\mathcal{T}\). A _Boolean program_ over \(\mathcal{T}\) is then a set of Boolean rules (over \(\mathcal{T}\)), one rule for each \(X_{i}\). Given an input \(f:\mathcal{T}\to\{0,1\}\) and executing the rules in parallel, the program then produces a time-series of \(k\)-bit strings in a natural way (see the preliminaries section for the full details). An extended Boolean program over \(\mathcal{T}\) is a Boolean program over some \(\mathcal{S}\supseteq\mathcal{T}\) together with a _terminal clause_\(X_{j}(0)\colon-b\) for each \(X_{j}\in\mathcal{S}\setminus\mathcal{T}\), where \(b\in\{\top,\bot\}\). Extended programs produce time-series just like regular programs, but they also contain _auxiliary variables_\(X_{j}\in\mathcal{S}\setminus\mathcal{T}\) whose initial value is not part of the input but is instead given via a terminal clause (cf. the preliminaries section). The logic used in this paper, **Boolean network logic** BNL, consists of extended Boolean programs. It turns out that BNL is also closely related to the diamond-free fragment of _modal substitution calculus_ MSC used
in [9] to characterize distributed message passing automata. Calling that fragment SC (for _substitution calculus_), we prove that programs of SC and BNL can be translated to each other with only a linear increase in program size. Thereby our characterization via BNL can alternatively be obtained via SC.
The neural network (NN) model we consider is very general. We allow unrestricted topologies for the underlying directed graphs, including loops, thereby considering the recurrent setting. The reals are modeled via floating-point numbers and the running times are unlimited. We show that for each NN, there exists a corresponding program of BNL that simulates the time series of the NN for each input, and vice versa, BNL-programs can--likewise--be simulated by NNs. Furthermore, the sizes of the NNs and BNL-programs are shown to be polynomially related.
In a bit more detail, let \(S=(p,q,\beta)\) denote a floating-point system with _fraction precision_\(p\), _exponent precision_\(q\) and _base_\(\beta\) (see Section 3.2 for the definitions). Let \(N\) denote the number of nodes in an NN and \(\Delta\) the maximum degree of the underlying graph. Modeling activation functions via piecewise polynomial functions, let \(P\) denote the number of pieces required and \(\Omega\) the maximum order of the involved polynomials. Then the following holds.
**Theorem 4.1**.: _Given a general neural network \(\mathcal{N}\) for \(S=(p,q,\beta)\) with \(N\) nodes, degree \(\Delta\), piece-size \(P\) and order \(\Omega\), we can construct a BNL-program \(\Lambda\) such that \(\mathcal{N}\) and \(\Lambda\) are asynchronously equivalent in \(S\) where for \(r=\max\{p,q\}\),_
1. _the size of_ \(\Lambda\) _is_ \(\mathcal{O}(N(\Delta+P\Omega^{2})(r^{4}+r^{3}\beta^{2}+r\beta^{4}))\)_, and_
2. _the computation delay of_ \(\Lambda\) _is_ \(\mathcal{O}((\log(\Omega)+1)(\log(r)+\log(\beta))+\log(\Delta))\)_._
Here (and also in the below theorem) _asynchronous equivalence_ means that the modeled time series can be repeated but with a delay between _significant computation rounds_. The time delays in our results are not arbitrary but rather modest. For modeling Boolean network logic via a general neural network, let the depth of a program refer to the maximum nesting depth of Boolean formulas appearing in rules. Our result is for NNs that use the activation function \(\operatorname{ReLU}(x)=\max\{0,x\}\), but it can be generalized for other activation functions.
**Theorem 4.2**.: _Given a BNL-program \(\Lambda\) of size \(s\) and depth \(d\), we can construct a general neural network \(\mathcal{N}\) for any floating-point system \(S\) with at most \(s\) nodes, degree at most \(2\), \(\operatorname{ReLU}\) activation functions and computation delay \(\mathcal{O}(d)\) (or \(\mathcal{O}(s)\) since \(s>d\)) such that \(\Lambda\) and \(\mathcal{N}\) are asynchronously equivalent in binary._
It is worth noting that in our setting, while we allow for general topologies and unlimited running times, our systems have inherently finite input spaces. In the framework of NN models, this is a well-justified assumption for a wide variety of modeling purposes. Our results stress the close relations between the size and time resources of general NNs and BNL-programs. Furthermore, as outputs we consider time series rather than a single-output framework. Indeed, it is worth noting that trivially a single Boolean function suffices to model any NN with a finite input space when limiting to single outputs only and not caring about size and time blow-ups in translations.
**Related work.** The closely related topic of descriptive complexity of graph neural networks (or GNNs) has been studied by Barcelo et al. in [2], and by Grohe in [5], [4]. In [5], the GNNs operate via feedforward neural networks, and a natural connection between these models and the circuit complexity class \(\mathsf{TC}^{0}\) is established via logic. The feedforward model in [5] uses _rational piecewise linear approximable_ activation functions, meaning that the parameters of the linear representations of activation functions are finitely representable in
a binary floating-point system. In the current paper, we allow floating-point systems with an arbitrary base, which can be useful, as a change of base often allows inadmissible reals to become admissible. Moreover, our activation functions are piecewise polynomially definable, meaning that most of the widely used activation functions are directly representable in our framework, e.g., ReLU. Furthermore, practically all activation functions are reasonably approximable.
Neural networks are special kinds of distributed systems, and descriptive complexity of distributed computing was initiated in Hella et al. [6], Kuusisto [9] and Hella et al. [7] by relating distributed complexity classes to modal logics. While [6] and [7] gave characterizations to constant-time distributed complexity classes via modal logics, [9] lifted the work to general non-constant-time algorithms by characterizing finite distributed message passing automata via the modal substitution calculus MSC. This was recently lifted to circuit-based networks with identifiers in Ahvonen et al. [1], also utilizing modal substitution calculus. The logic MSC has been linked to a range of other logics. For example, in [9], it is shown to contain the \(\mu\)-fragment of the modal \(\mu\)-calculus, and it is easy to translate MSC into a fragment of partial fixed-point logic. Building on [9], Reiter shows in [10] that this fragment of the modal \(\mu\)-calculus captures finite message passing automata in the asynchronous setting. It is worth noting here that also [2] utilizes modal logic, establishing a match between aggregate-combine graph neural networks and graded modal logic. The logics BNL and SC used in this article are rule-based systems. Rule-based logics are used widely in various applications, involving systems such as Datalog, answer-set programming (ASP) formalisms, and many others.
## 2 Preliminaries
First we introduce some basic concepts. For any set \(S\), we let \(\wp(S)\) denote the power set of \(S\) and we let \(|S|\) denote the size (or cardinality) of \(S\). We let \(\mathbb{N}\) and \(\mathbb{Z}_{+}\) denote the sets of non-negative and positive integers respectively. For every \(n\in\mathbb{Z}_{+}\), we let \([n]=\{1,\dots,n\}\) and \([0;n]=\{0,\dots,n\}\). We let bold lower-case letters \(\mathbf{a},\mathbf{b},\mathbf{c},\dots\) denote strings. The letters of a string are written directly next to each other, i.e. \(abc\), or with dots in-between, i.e. \(a\cdot b\cdot c\), or a mix of both, i.e. \(abc\cdot def\). Omitted segments of strings are represented with three dots, i.e. \(abcd\cdots wxyz\). If \(\mathbf{s}=s_{0}\cdots s_{k-1}\) is a string of length \(k\), then for any \(j\in[0;k-1]\), we let \(\mathbf{s}(j)\) denote the letter \(s_{j}\). The alphabet for the strings will depend on the context. We let \(\mathrm{VAR}=\{\,V_{i}\mid i\in\mathbb{N}\,\}\) denote the (countably infinite) set of all **schema variables**. Mostly, we will use meta variables \(X\), \(Y\), \(Z\) and so on, to denote symbols in \(\mathrm{VAR}\). We assume a linear order \(<^{\mathrm{VAR}}\) over the set \(\mathrm{VAR}\). Moreover, for any set \(\mathcal{T}\subseteq\mathrm{VAR}\), a linear order \(<^{\mathcal{T}}\) is induced by \(<^{\mathrm{VAR}}\). We let \(\mathrm{PROP}=\{\,p_{i}\mid i\in\mathbb{N}\,\}\) denote the (countably infinite) set of **proposition symbols** that is associated with the linear order \(<^{\mathrm{PROP}}\), inducing a linear order \(<^{P}\) over any subset \(P\subseteq\mathrm{PROP}\). We let \(\Pi\subseteq\mathrm{PROP}\) denote a finite subset of proposition symbols.
### Discrete time series
Next we consider infinite sequences of bit strings, i.e., we consider _discrete time series_ of strings over the alphabet \(\{0,1\}\). To separate important strings from less important ones, we need to define when a time series produces an output; importantly, we allow an arbitrary number of outputs. We will define two separate general output conditions for time series. In the first approach, special bits indicate when to output. In the second approach the output rounds are fixed, and we do not include bits that indicate when to output.
The formal definition for the first approach is as follows. Let \(k\in\mathbb{Z}_{+}\) and let \(B\) denote
an infinite sequence \((\mathbf{b}_{j})_{j\in\mathbb{N}}\) of \(k\)-bit strings \(\mathbf{b}_{j}\in\{0,1\}^{k}\). Let \(A\subseteq[k]\) be a subset of **attention** bits and \(P\subseteq[k]\) a subset of **print** bits (or bit positions, strictly speaking). The sets \(A\) and \(P\) induce corresponding sequences \((\mathbf{a}_{j})_{j\in\mathbb{N}}\) and \((\mathbf{p}_{j})_{j\in\mathbb{N}}\), where \(\mathbf{a}_{j}\) and \(\mathbf{p}_{j}\) are the substrings of \(\mathbf{b}_{j}\) given by the positions in \(A\) and \(P\), respectively. Next we define output conditions for \(B\) with respect to attention and print bits. If at least one bit in \(\mathbf{a}_{n}\) is \(1\) (for some \(n\in\mathbb{N}\)), then we say that \(B\) **outputs** \(\mathbf{p}_{n}\) in round \(n\) and that \(n\) is an **output round**. More precisely, \(B\) **outputs in round \(n\) with respect to \((k,A,P)\)**, and \(\mathbf{p}_{n}\) is the **output of \(B\) in round \(n\) with respect to \((k,A,P)\)**. Let \(O\subseteq\mathbb{N}\) be the set of output rounds; they induce a subsequence \((\mathbf{b}_{i})_{i\in O}\) of \(B\). We call the sequence \((\mathbf{p}_{i})_{i\in O}\) the **output sequence** of \(B\).
Next we define an output condition where the output rounds are fixed by a set \(O\subseteq\mathbb{N}\) and attention bits are excluded. We say that \(B\) **outputs** in rounds \(O\) (and also, in any particular round \(n\in O\)). Outputs and output sequences w.r.t. \((k,O,P)\) are defined analogously.
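Operationally, the attention-bit condition amounts to filtering the sequence; a minimal sketch, with the \(k\)-bit strings encoded as tuples of bits (an assumed encoding):

```python
def output_sequence(series, attention, prints):
    """series: iterable of k-bit tuples; attention, prints: index sets."""
    out = []
    for b in series:
        if any(b[i] for i in attention):       # this round is an output round
            out.append(tuple(b[i] for i in prints))
    return out

# Rounds where bit 0 is set output the value of bit 1:
print(output_sequence([(1, 0), (0, 1), (1, 1)], {0}, {1}))  # [(0,), (1,)]
```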
We study the two approaches for the sake of generality. The difference between the two output frameworks is that the output rounds are induced internally from within the sequence in the first approach, while they are given externally in the second one. For instance, it is natural to indicate output conditions within a program if they are part of the program's design. Retroactively, it might be more natural to augment a program to draw attention to rounds the original design does not account for, and a different mechanism could be used to compute the output rounds, e.g., a Turing machine.
### Modal substitution calculus MSC and Boolean network logic BNL
We next define modal substitution calculus MSC introduced in [9]. Let \(\Pi\subseteq\) PROP be a finite set of proposition symbols and \(\mathcal{T}\subseteq\) VAR. A **terminal clause** (over \((\Pi,\mathcal{T})\)) is a string of the form \(V_{i}(0)\!:\!-\;\varphi\), where \(V_{i}\in\mathcal{T}\) and \(\varphi\) is defined over the language \(\varphi::=\top\mid p_{i}\mid\neg\varphi\mid\varphi\land\varphi\mid\lozenge\varphi\) where \(p_{i}\in\Pi\) (i.e., \(\varphi\) is a formula of modal logic over \(\Pi\)). An **iteration clause** (over \((\Pi,\mathcal{T})\)) is a string of the form \(V_{i}\!:\!-\;\psi\) where \(\psi\) is a \((\Pi,\mathcal{T})\)**-formula of modal substitution calculus** (or MSC) defined over the language \(\psi::=\top\mid p_{i}\mid V_{i}\mid\neg\psi\mid\psi\land\psi\mid\lozenge\psi\), where \(p_{i}\in\Pi\) and \(V_{i}\in\mathcal{T}\). A \((\Pi,\mathcal{T})\)**-program** of MSC consists of a terminal clause and an iteration clause for each \(V_{i}\in\mathcal{T}\); a program of the diamond-free fragment SC is a program of MSC in which no diamonds \(\lozenge\) occur. A program also includes a set \(\mathcal{P}\subseteq\mathcal{T}\) of **print predicates** and either a set \(\mathcal{A}\subseteq\mathcal{T}\) of **attention predicates** or an attention function, matching the output
conditions discussed for time series. We will later discuss how either can be used to determine a set of output rounds for the program. We use attention predicates by default, and only discuss the attention function when specified.
Usually, a run of a program of MSC is defined over a Kripke-model, but a \((\Pi,\mathcal{T})\)-program of SC is defined over a **model**\(M\) of propositional logic, i.e., \(M\) is a valuation \(\Pi\to\{0,1\}\) assigning a truth value to each proposition symbol in \(\Pi\). The semantics of formulae of propositional logic in model \(M\) are defined as follows: \(M\models p_{i}\) (read: \(p_{i}\) is true in \(M\)) iff the valuation of \(p_{i}\) is \(1\), and the semantics of \(\wedge\), \(\neg\) and \(\top\) is the usual one. If \(\Pi^{\prime}\subseteq\Pi\) is the set of proposition symbols that appear in the program, the linear order \(<^{\Pi^{\prime}}\) and the model \(M\) induce a binary string \(\mathbf{i}\in\{0,1\}^{|\Pi^{\prime}|}\) that serves as input, i.e., the \(i\)th bit of \(\mathbf{i}\) is \(1\) iff the valuation of the \(i\)th proposition in \(\Pi^{\prime}\) is \(1\). The truth of a \((\Pi,\mathcal{T})\)-formula \(\psi\) in round \(n\) (written \(M\models\psi^{n}\)) is defined as follows: \(\mathbf{1}\)) \(M\models\top^{n}\) always holds, \(\mathbf{2}\)) \(M\models p_{i}^{n}\) iff \(M\models p_{i}\) (where \(p_{i}\in\Pi\)), \(\mathbf{3}\)) if \(\psi=\neg\theta\), then \(M\models(\neg\theta)^{n}\) iff \(M\not\models\theta^{n}\), \(\mathbf{4}\)) if \(\psi=(\chi\wedge\theta)\), then \(M\models(\chi\wedge\theta)^{n}\) iff \(M\models\chi^{n}\) and \(M\models\theta^{n}\), and \(\mathbf{5}\)) the truth of a head predicate \(X_{i}\) is defined separately as follows. We define that \(M\models X_{i}^{0}\) iff \(M\models\varphi_{i}\), where \(\varphi_{i}\) is the body of the terminal clause of \(X_{i}\). Assuming we have defined the truth of all \(\mathcal{T}\)-formulae in round \(n\), we define that \(M\models X_{i}^{n+1}\) iff \(M\models\psi_{i}^{n}\), where \(\psi_{i}\) is the body of the iteration clause of \(X_{i}\).
We then define **Boolean network logic** (or BNL) which we will later show to be equivalent to the fragment SC. Boolean network logic gets its name from Boolean networks, which are discrete dynamical systems commonly used in various fields, e.g., biology, telecommunications and various others. For example, they are used to describe genetic regulatory networks (e.g., [8]). A Boolean network consists of a set of Boolean variables, i.e. variables that only get Boolean values \(0\) or \(1\). Each variable is given an initial Boolean value called the "seed". The Boolean values of all variables are updated in discrete steps starting with the seed. In each step, each variable updates its Boolean value using its own Boolean function. The updated value is determined from the Boolean values of all variables in the previous step. There is no general syntax for Boolean networks, but BNL will give us a suitable one.
Let \(\mathcal{T}\subseteq\mathrm{VAR}\). A \(\mathcal{T}\)**-formula of Boolean network logic** (or BNL) is defined over the language \(\psi\coloneqq\top\mid V_{i}\mid\neg\psi\mid\psi\wedge\psi\), where \(V_{i}\in\mathcal{T}\) (i.e., we do not include propositions). Assume now that \(\mathcal{T}\) is finite and nonempty. There are three main differences between \(\mathcal{T}\)-programs of BNL and SC: \(\mathbf{1}\)) The terminal clauses of BNL are either of the form \(X(0)\!\mathrel{\mathop{:}}-\top\) or \(X(0)\!\mathrel{\mathop{:}}-\bot\). \(\mathbf{2}\)) The bodies of iteration clauses are \(\mathcal{T}\)-formulae of BNL. \(\mathbf{3}\)) Each schema variable in a BNL-program has exactly one iteration clause and either one or zero terminal clauses. We let \(\mathcal{I}\) denote the predicates that do not have terminal clauses, which we call **input predicates**. For example, consider a BNL-program with a terminal clause \(X(0)\!\mathrel{\mathop{:}}-\top\) and with iteration clauses \(Y\!\mathrel{\mathop{:}}-Y\wedge X\) and \(X\!\mathrel{\mathop{:}}-\neg X\). Here \(Y\) is the input predicate and \(X\) acts as an auxiliary predicate. Bodies and head predicates of clauses are defined analogously to SC (and MSC). A BNL-program also includes print predicates and either attention predicates or an attention function \(A\!\mathrel{\mathop{:}}\{0,1\}^{k}\to\wp(\mathbb{N})\), where \(k=|\mathcal{I}|\).
The run of a program of BNL is defined over a **model**\(\mathcal{M}\), i.e., \(\mathcal{M}\) is a valuation \(\mathcal{I}\to\{0,1\}\). Analogously to a model of SC, any model for BNL and the set \(\mathcal{I}\) induce a binary string \(\mathbf{i}\in\{0,1\}^{|\mathcal{I}|}\) that serves as input. (Note that, vice versa, each string \(\mathbf{i}\in\{0,1\}^{|\mathcal{I}|}\) induces a model with valuation \(\mathcal{I}\to\{0,1\}\) such that \(I_{j}\mapsto\mathbf{i}(j)\) if \(I_{0},\ldots,I_{|\mathcal{I}|-1}\) enumerates the set \(\mathcal{I}\) in the order \(<^{\mathrm{VAR}}\).) The truth of a \(\mathcal{T}\)-formula is defined analogously to SC except for the truth values of head predicates in round \(0\). If \(X\in\mathcal{I}\), we define that \(\mathcal{M}\models X^{0}\) iff the valuation of \(X\) in \(\mathcal{M}\) is \(1\). If \(X\notin\mathcal{I}\), then \(\mathcal{M}\models X^{0}\) iff the body of the terminal clause of \(X\) is \(\top\).
Let \(X_{1},\ldots,X_{n}\) enumerate the set \(\mathcal{T}\) of schema variables (in the order \(<^{\mathrm{VAR}}\)), \(\Lambda\) be a \(\mathcal{T}\)-program of SC (or BNL), and \(M\) a model for SC (or respectively \(\mathcal{M}\) for BNL) that induces
an input \(\mathbf{i}\in\{0,1\}^{k}\), where \(k\) is the number of proposition symbols (or resp. the number of input predicates). Each time step (or round) \(t\in\mathbb{N}\) defines a **global configuration**\(g_{t}\colon\mathcal{T}\to\{0,1\}\). The global configuration at time step \(t\) is induced by the values of head predicates, i.e., \(g_{t}(X_{i})=1\) iff \(M\models X_{i}^{t}\) (or resp. \(\mathcal{M}\models X_{i}^{t}\)), for each \(X_{i}\in\mathcal{T}\). Thus, an SC-program (or BNL-program) also induces an infinite sequence \((\mathbf{s}_{t})_{t\in\mathbb{N}}\) called the **global configuration sequence** (with input \(\mathbf{i}\)), where \(\mathbf{s}_{t}=g_{t}(X_{1})\cdots g_{t}(X_{n})\). The set of print bits is \(\{\,i\mid X_{i}\in\mathcal{P}\,\}\), where \(\mathcal{P}\) is the set of print predicates. If the program has attention predicates \(\mathcal{A}\), then the set of attention bits is \(\{\,i\mid X_{i}\in\mathcal{A}\,\}\). If the program has an attention function \(A\), then the output rounds are given by \(A(\mathbf{i})\). Therefore, a program of SC or BNL with an input \(\mathbf{i}\) also induces **output rounds** and an **output sequence** w.r.t. \((n,\mathcal{A},\mathcal{P})\) (or resp. w.r.t. \((n,\mathcal{A}(\mathbf{i}),\mathcal{P})\)).
We say that a \((\Pi,\mathcal{T})\)-program of SC and a \(\mathcal{T}^{\prime}\)-program of BNL (or likewise, two BNL-programs) are **asynchronously equivalent** if they have the same output sequences on every input. We say that they are **globally equivalent** if they _also_ have the same global configuration sequences and output rounds on each input (note that identical inputs require that \(|\Pi|=|\mathcal{I}|\), where \(\mathcal{I}\) is the set of input predicates of the BNL-program).
The **size** of a program of SC or BNL is defined as the number of appearances of \(\top\), proposition symbols \(p_{i}\), head predicates \(V_{i}\) and logical connectives \(\neg\) and \(\land\) in its terminal and iteration clauses. The **depth**\(d(\psi)\) of a BNL-formula or SC-formula is defined recursively such that \(d(p_{i})=d(\top)=d(X)=0\), where \(p_{i}\) is proposition symbol and \(X\) schema variable, \(d(\neg\psi)=d(\psi)+1\) and \(d(\psi\land\theta)=\max\{d(\psi),d(\theta)\}+1\). The **depth** of a BNL-program is the maximum depth of the bodies of iteration clauses.
BNL-programs inherit a number of properties from Boolean networks. Each reachable combination of truth values for the head predicates (i.e., each reachable global configuration) is called a **state** and together they form a **state space**. Note that certain global configurations may not be reachable, because neither they nor their preceding states are possible states at round \(0\) due to the terminal clauses of the BNL-program. Given that the number of states is finite, a BNL-program will eventually either reach a single stable state or begin looping through a sequence of states. A stable state is called a **point attractor**, a **fixed-point attractor** or simply a **fixed point**, whereas a looping sequence of multiple states is a **cycle attractor**. The smallest amount of time it takes to reach an attractor from a given state is called the **transient time** of that state. The **transient time** of a BNL-program is the maximum transient time of a state in its state space [3]. The concept of transient time is also applicable to SC, since it is also deterministic and eventually stabilizes with each input.
Consider the fragment BNL\({}_{0}\) where no head predicate of a program is allowed to have a terminal clause. The programs of this logic BNL\({}_{0}\) are an exact match with Boolean networks; each program encodes a Boolean network, and vice versa. The logic BNL extends this framework by allowing terminal clauses.
A BNL-program that only has fixed points (i.e., no input leads to a cycle attractor) and outputs precisely at fixed points, is called a **halting** BNL-**program**. For a halting BNL-program \(\Lambda\) with input predicates \(\mathcal{I}\) and print predicates \(\mathcal{P}\), each input \(\mathbf{i}\in\{0,1\}^{|\mathcal{I}|}\) results in a single (repeating) **output** denoted by \(\Lambda(\mathbf{i})\), which is the output string determined by the fixed-point values of the print predicates. In this sense, a halting BNL-program is like a function \(\Lambda\colon\{0,1\}^{|\mathcal{I}|}\to\{0,1\}^{|\mathcal{P}|}\). We say that \(\Lambda\)**specifies** a function \(f\colon\{0,1\}^{\ell}\to\{0,1\}^{k}\) if \(|\mathcal{I}|=\ell\), \(|\mathcal{P}|=k\) and \(\Lambda(\mathbf{i})=f(\mathbf{i})\) for all \(\mathbf{i}\in\{0,1\}^{\ell}\). The **computation time** of a halting BNL-program is its transient time.
We introduce two useful tools that are used when constructing BNL-programs (these tools are also definable via MSC or SC). Flagging is one of the most useful tools, similar to adding "if-else" conditions in programming. Given two formulae \(\varphi\) and \(\chi\), and a rule \(X\!:\!-\;\psi\),
**flagging**\(X\) (w.r.t. \(\varphi\) and \(\chi\)) means rewriting the rule \(X\!:\!-\)\(\psi\) as \(X\!:\!-\)\((\varphi\wedge\psi)\vee(\neg\varphi\wedge\chi)\). Now, if \(\varphi\) is true then the truth value of \(X\) depends on the truth value of \(\psi\), and if \(\varphi\) is false then the truth value of \(X\) depends on the truth value of \(\chi\). We call \(\varphi\) the **flag** and \(\chi\) the **backup**. Often \(\chi\) is \(X\) itself meaning that the truth value of \(X\) does not change if \(\varphi\) is false.
A **one-hot counter** is defined as a sequence of schema variables \(T_{0},T_{1},\ldots,T_{n}\) with the terminal clauses \(T_{0}(0)\!:\!-\top\) and \(T_{i}(0)\!:\!-\bot\) for all \(i\geq 1\), and iteration clauses \(T_{0}\!:\!-T_{n}\) and \(T_{i}\!:\!-T_{i-1}\) for all \(i\geq 1\). Exactly one of these schema variables is true in any one time step, and they turn on in a looping sequence from left to right. \(T_{t}\) is true in round \(t\) for all \(t\leq n\). In round \(n+1\), \(T_{0}\) is true again and the cycle continues. This is ideal for flagging: \(T_{n}\) can be used as a flag for attention predicates to trigger an output round once every \(n+1\) time steps.
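A self-contained sketch of the counter's state evolution, with the parallel rule application written as a rotation:

```python
# One-hot counter T_0,...,T_n: T_0 :- T_n and T_i :- T_{i-1}; the flag
# T_n becomes true exactly once every n+1 rounds.
n = 3
state = [i == 0 for i in range(n + 1)]   # terminal clauses: only T_0 true
for t in range(2 * (n + 1)):
    print(t, [int(b) for b in state])
    state = [state[-1]] + state[:-1]     # all rules applied in parallel
```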
We are ready to prove that BNL is equivalent to SC; for the full proof, see the appendix.
**Theorem 2.1**.: _Each SC-program has an asynchronously equivalent BNL-program of linear size and transient time, and each BNL-program has a globally equivalent SC-program of linear size._
Proof.: (Sketch) From SC to BNL, we create a BNL-program that uses one time step to compute the terminal clauses of the SC-program; the terminal clauses of the SC-program are embedded into the iteration clauses of the BNL-program using a flag. From BNL to SC, we amend the BNL-program with the missing terminal clauses using proposition symbols.
## 3 Arithmetic with BNL
In this section we first show how to carry out integer addition and multiplication in Boolean network logic in parallel. We then extend this demonstration to floating-point arithmetic, including floating-point polynomials and piecewise polynomial functions.
The algorithms we use for integers are mostly well known and thus some of the formal details are in the appendix. Informally, the idea is to split both addition and multiplication into simple steps that are executed in parallel. We will show that we can simulate integer arithmetic (respectively, floating-point arithmetic) with programs whose size is polynomial in the size of the integers (respectively, in the size of the floating-point system). We also analyze the time delays of the constructed programs. The time delay is polylogarithmic in the size of the integers (respectively, in the size of the floating-point system) and sometimes even constant. Ultimately, the same applies to floating-point polynomials and piecewise polynomial functions.
### Integer arithmetic
We next define how a _halting_ BNL-program simulates integer functions in an arbitrary base \(\beta\in\mathbb{Z}\), \(\beta\geq 2\). Informally, we represent integers with bit strings that are split into substrings of length \(\beta\), where exactly one bit in each substring is \(1\) and the others are \(0\). Formally, let \(\mathbf{s}_{1},\ldots,\mathbf{s}_{k}\in\{0,1\}^{\beta}\) be **one-hots**, i.e. bit strings with exactly one \(1\). We say that \(\mathbf{s}=\mathbf{s}_{1}\cdots\mathbf{s}_{k}\)**corresponds** to \(b_{1}\cdots b_{k}\in[0;\beta-1]^{k}\) if for every \(b_{i}\), we have \(\mathbf{s}_{i}(b_{i})=1\) (and other values in \(\mathbf{s}_{i}\) are zero). For example, if \(\beta=5\), then \(00100\cdot 01000\cdot 00001\in\{0,1\}^{\beta.3}\) corresponds to \(2\cdot 1\cdot 4\in[0;4]^{3}\). We say that \(\mathbf{s}\) is a **one-hot representation** of \(b_{1}\cdots b_{k}\).
Using the binary one-hot representations, we can present integers in BNL by assigning each bit with a head predicate that is true if and only if the bit is \(1\). The sign (\(+\) or \(-\)) of a number can likewise be handled with a single bit that is true iff the sign is positive.
**Definition 3.1**.: Let \(\beta\in\mathbb{Z}\), \(\beta\geq 2\) be a base. We say that a halting BNL-program \(\Lambda\)**simulates** (or computes) a function \(f\colon\left[0;\beta-1\right]^{\ell}\to\left[0;\beta-1\right]^{k}\) if for each input string \(\mathbf{i}\in\{0,1\}^{\ell\beta}\) that corresponds to \(\mathbf{b}\in\left[0;\beta-1\right]^{\ell}\), the output \(\Lambda(\mathbf{i})\) also corresponds to \(f(\mathbf{b})\).
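Concretely, the correspondence of Definition 3.1 can be illustrated by the following encoder/decoder pair (our own helper code, for illustration only):

```python
# A minimal sketch of the one-hot correspondence: each digit in
# [0, beta - 1] becomes a length-beta bit string with exactly one 1.

def encode(digits, beta):
    # e.g. encode([2, 1, 4], 5) -> '00100' + '01000' + '00001'
    bits = []
    for d in digits:
        s = ['0'] * beta
        s[d] = '1'
        bits.append(''.join(s))
    return ''.join(bits)

def decode(bits, beta):
    chunks = [bits[i:i + beta] for i in range(0, len(bits), beta)]
    return [c.index('1') for c in chunks]

assert encode([2, 1, 4], 5) == '001000100000001'
assert decode('001000100000001', 5) == [2, 1, 4]
```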
#### Parallel addition
In this section we construct a parallel integer addition algorithm via BNL-programs. The algorithm is mostly well known and is based on how integer addition is computed in Nick's class \(\mathsf{NC}^{1}\) (sometimes called the carry-lookahead method), i.e., we parallelize the textbook method (sometimes called the long addition algorithm). Here the main difference to integer addition in Nick's class is that we generalize the algorithm for arbitrary bases.
**Lemma 3.2**.: _Given a base \(\beta\in\mathbb{Z}\), \(\beta\geq 2\) and \(p\in\mathbb{Z}_{+}\), adding two numbers in \([0;\beta-1]^{p}\) can be simulated with a (halting) \(\mathrm{BNL}\)-program of size \(\mathcal{O}(p^{3}+p\beta^{2})\) and computation time \(\mathcal{O}(1)\)._
Proof.: (Sketch) The full formal program and more rigorous explanations are in the appendix. Informally, the textbook method of adding integers (in arbitrary base \(\beta\)) is done by aligning the digits vertically and adding the single digits one by one from right to left. If the sum of digits exceeds \(\beta-1\), then \(1\) is _carried_ to the next addition on the left, i.e., \(1\) is a _carry_. The main difference in the parallel algorithm is that all digit additions are performed simultaneously, and the hard part is to compute the carries in parallel. In order to compute a carry, we have to check if the other additions to the right lead to a carry. It is straightforward to write a BNL-program that computes carries in \(\mathcal{O}(p^{3}+p\beta^{2})\) space and \(\mathcal{O}(1)\) time steps.
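The carry computation can be made explicit with the standard generate/propagate formulation, sketched below in Python (our own rendering; each disjunct of the carry formula corresponds to a single BNL conjunction, hence constant parallel time):

```python
# A minimal sketch of carry-lookahead addition in an arbitrary base beta.

def parallel_add(a, b, beta):
    # a, b: digit lists of equal length, least significant digit first.
    p = len(a)
    gen = [a[j] + b[j] >= beta for j in range(p)]        # position creates a carry
    prop = [a[j] + b[j] == beta - 1 for j in range(p)]   # position passes a carry on
    # Carry INTO position j: some k < j generates and every position
    # strictly between k and j propagates -- one big OR of ANDs.
    carry = [any(gen[k] and all(prop[m] for m in range(k + 1, j))
                 for k in range(j)) for j in range(p + 1)]
    out = [(a[j] + b[j] + carry[j]) % beta for j in range(p)]
    out.append(int(carry[p]))
    return out

# 47 + 85 = 132 in base 10 (digits least significant first):
assert parallel_add([7, 4], [5, 8], 10) == [2, 3, 1]
```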
#### Parallel multiplication
In this section we introduce a parallel multiplication algorithm. The parallelization method is mostly well known and is based on "cutting" the multiplication into simple addition tasks.
**Lemma 3.3**.: _Given a base \(\beta\in\mathbb{Z}\), \(\beta\geq 2\), multiplication of any two numbers in \([0;\beta-1]^{p}\) can be simulated with a (halting) \(\mathrm{BNL}\)-program of size \(\mathcal{O}(p^{4}+p^{3}\beta^{2}+p\beta^{4})\) and computation time \(\mathcal{O}(\log(p)+\log(\beta))\)._
Proof.: (Sketch) The formal explanations and examples are in the appendix. Assume that we have two \(p\)-digit integers (we allow leading zeros), a multiplicand \(\mathbf{x}\) and a multiplier \(\mathbf{y}=y_{p}\cdots y_{1}\), in an arbitrary base \(\beta\in\mathbb{Z}\), \(\beta\geq 2\). The parallel multiplication algorithm computes in the following two steps. **(1)** We run \(p\) different multiplications in parallel where the multiplicand \(\mathbf{x}\) is multiplied by \(y_{i}0\cdots 0\) with \(i-1\) zeros on the right (for each \(i\in[p]\), in base \(\beta\)). Each multiplication is itself computed in parallel by using the parallel addition algorithm to obtain relatively small space and time complexities. As a result we obtain \(p\) different numbers of length \(2p\). **(2)** We add the numbers obtained in the first step together in parallel using the parallel addition algorithm.
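The two steps can be sketched as follows (our Python rendering, reusing `parallel_add` from the addition sketch above; in BNL both the partial products and the balanced summation tree would be evaluated in parallel):

```python
# A minimal sketch of the two-step parallel multiplication.

def parallel_multiply(x, y, beta):
    # x, y: digit lists, least significant digit first, both of length p.
    p = len(x)
    width = 2 * p
    # Step (1): p shifted single-digit products x * (y_i * beta^(i-1)),
    # each carry-normalized into base-beta digits.
    partials = []
    for i, yi in enumerate(y):
        digits = [0] * i + [d * yi for d in x]
        num = sum(d * beta ** j for j, d in enumerate(digits))
        partials.append([(num // beta ** j) % beta for j in range(width)])
    # Step (2): sum the p partial products pairwise in a balanced tree,
    # i.e. O(log p) rounds of parallel additions.
    while len(partials) > 1:
        nxt = [parallel_add(partials[k], partials[k + 1], beta)[:width]
               for k in range(0, len(partials) - 1, 2)]
        if len(partials) % 2:
            nxt.append(partials[-1])
        partials = nxt
    return partials[0]

# 12 * 34 = 408 in base 10:
assert parallel_multiply([2, 1], [4, 3], 10) == [8, 0, 4, 0]
```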
### Floating-point arithmetic
In this section we consider floating-point arithmetic, including polynomials and piecewise polynomial functions. We show that BNL-programs can simulate these in polynomial space and in polylogarithmic time, and some simple arithmetic operations even in constant time.
#### Floating-point system
A **floating-point number** in a system \(S=(p,q,\beta)\) (where \(p,q,\beta\in\mathbb{Z}_{+}\), \(\beta\geq 2\)) is a number that can be represented in the form
\[\pm\underbrace{0.d_{1}d_{2}\cdots d_{p}}_{=f}\times\beta^{\pm e_{1}\cdots e_{ q}},\]
where \(d_{i},e_{i}\in[0;\beta-1]\). For such a number in system \(S\), we call \(f\) the **fraction**, the dot between \(0\) and \(d_{1}\) the **radix point**, \(p\) the **precision**, \(e=\pm e_{1}\cdots e_{q}\) the **exponent**, \(q\) the **exponent precision** and \(\beta\) the **base** (or **radix**).
A floating-point number in a system \(S\) may have many different representations, such as \(0.10\times 10^{1}\) and \(0.01\times 10^{2}\), which are both representations of the number \(1\). To ensure that our calculations are well defined, we desire a single form for all non-zero numbers. We say that a floating-point number (or more specifically, a floating-point representation) is **normalized** if \(d_{1}\neq 0\), or if \(f=0\), \(e\) is the smallest possible value and the sign of the fraction is \(+\).
For a floating-point system \(S=(p,q,\beta)\), we define an extended system of **raw floating-point numbers**\(S^{+}(p^{\prime},q^{\prime})\) (where \(p^{\prime}\geq p\) and \(q^{\prime}\geq q\)) that possess a representation of the form \(\pm d_{0}.d_{1}d_{2}\cdots d_{p^{\prime}}\times\beta^{\pm e_{1}\ldots e_{q^{ \prime}}}.\) When performing floating-point arithmetic, the precise outcomes of the calculations may be raw numbers, i.e., no longer in the same system as the operands strictly speaking. Therefore, in practical scenarios, we have \(p^{\prime}=\mathcal{O}(p)\) and \(q^{\prime}=\mathcal{O}(q)\). Consider, e.g., the numbers \(99\) and \(2\) which are both in the system \(S=(2,1,10)\), but their sum \(101\) is not, because \(3\) digits are required to represent the fraction precisely. For this purpose, we must round numbers. The easiest way to round a number is **truncation**, where the least significant digits of the number are simply omitted, rounding the number toward zero. The most common method is to round to the nearest floating-point number, with ties rounding to the number with an even least significant digit. This is called **round-to-nearest ties-to-even**. Truncation can be performed in BNL with constant space and time, but the latter rounding method requires \(\mathcal{O}(p^{2}\beta^{2})\) space and \(\mathcal{O}(1)\) time steps. It is trivial to construct a BNL-program that computes rounding, so we do not go into detail.
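The two rounding modes can be contrasted on fractions given as digit lists, as in the following sketch (our own helper, not from the paper; digits are most significant first, and the overflow case is handled only crudely):

```python
# Truncation vs. round-to-nearest ties-to-even on fraction digit lists.

def truncate(digits, p):
    return digits[:p]  # drop least significant digits: rounds toward zero

def round_nearest_even(digits, p, beta):
    kept, rest = digits[:p], digits[p:]
    half = beta ** len(rest) / 2
    tail = sum(d * beta ** (len(rest) - 1 - j) for j, d in enumerate(rest))
    if tail > half or (tail == half and kept and kept[-1] % 2 == 1):
        i = p - 1                       # round up: propagate a carry leftwards
        while i >= 0 and kept[i] == beta - 1:
            kept[i] = 0
            i -= 1
        if i >= 0:
            kept[i] += 1
        else:
            kept = [1] + kept[:-1]      # overflow; caller renormalizes the exponent
    return kept

print(truncate([3, 1, 4, 5], 3))                 # [3, 1, 4]
print(round_nearest_even([3, 1, 4, 5], 3, 10))   # tie, last digit even -> [3, 1, 4]
print(round_nearest_even([3, 1, 5, 5], 3, 10))   # tie, last digit odd  -> [3, 1, 6]
```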
#### Representing floating-point numbers in binary
Our way of representing floating-points of arbitrary base in binary is based on international standards (e.g. IEEE 754). Informally, if \(\mathbf{b}\) represents a floating-point number in a system \(S=(p,q,\beta)\), then the first two bits encode the signs of the exponent and fraction. The next \(q\beta\) bits encode the exponent in base \(\beta\), and the last \(p\beta\) bits encode the fraction in base \(\beta\).
Before we go into the details, we have to define simulation of functions that compute with floating-point numbers in a system \(S=(p,q,\beta)\). Let \(F=\pm f\times\beta^{\pm e}\) be a floating-point number in system \(S\). Let \(\mathbf{p}_{1},\mathbf{p}_{2}\in\{0,1\}\) and \(\mathbf{s}_{1},\ldots,\mathbf{s}_{q},\mathbf{s}_{1}^{\prime},\ldots,\mathbf{s}_{p}^{\prime}\in\{0,1\}^{\beta}\). We say that \(\mathbf{s}=\mathbf{p}_{1}\mathbf{p}_{2}\mathbf{s}_{1}\cdots\mathbf{s}_{q}\mathbf{s}_{1}^{\prime}\cdots\mathbf{s}_{p}^{\prime}\)**corresponds** to \(F\) (or \(\mathbf{s}\) is a **one-hot representation** of \(F\)) if **(1)**\(\mathbf{p}_{1}=1\) iff the sign of the exponent is \(+\), **(2)**\(\mathbf{p}_{2}=1\) iff the sign of the fraction is \(+\), **(3)**\(\mathbf{s}_{1}\cdots\mathbf{s}_{q}\) corresponds to \(e=e_{1}\cdots e_{q}\), and **(4)**\(\mathbf{s}_{1}^{\prime}\cdots\mathbf{s}_{p}^{\prime}\) corresponds to \(f=0.d_{1}d_{2}\cdots d_{p}\) (or, more precisely, to \(d_{1}\cdots d_{p}\)). Likewise, we say that a bit string \(\mathbf{s}\)**corresponds** to a sequence \((F_{1},\ldots,F_{k})\) of floating-point numbers if \(\mathbf{s}\) is the concatenation of the bit strings that correspond to \(F_{1},\ldots,F_{k}\) from left to right. For example, in the system \(S=(4,3,3)\) the number \(-0.2001\times 3^{+120}\) has the corresponding string \(\underbrace{1}_{\mathbf{p}_{1}}\cdot\underbrace{0}_{\mathbf{p}_{2}}\cdot\underbrace{010\cdot 001\cdot 100}_{\mathbf{s}_{1}\mathbf{s}_{2}\mathbf{s}_{3}}\cdot\underbrace{001\cdot 100\cdot 100\cdot 010}_{\mathbf{s}_{1}^{\prime}\mathbf{s}_{2}^{\prime}\mathbf{s}_{3}^{\prime}\mathbf{s}_{4}^{\prime}}\).
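This encoding is easy to mechanize; the following sketch (ours, reusing `encode` from the integer sketch above) reproduces the paper's example:

```python
# Sign bits p1, p2 followed by one-hot exponent and fraction digits.

def float_to_bits(exp_sign, frac_sign, exp_digits, frac_digits, beta):
    p1 = '1' if exp_sign == '+' else '0'
    p2 = '1' if frac_sign == '+' else '0'
    return p1 + p2 + encode(exp_digits, beta) + encode(frac_digits, beta)

# The example above: -0.2001 x 3^(+120) in S = (4, 3, 3).
bits = float_to_bits('+', '-', [1, 2, 0], [2, 0, 0, 1], beta=3)
assert bits == '10' + '010001100' + '001100100010'
```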
**Definition 3.4**.: Let \(S=(p,q,\beta)\) be a floating-point system. We say that a halting BNL-program \(\Lambda\)**simulates** a function \(f\colon S^{\ell}\to S^{k}\), if the output \(\Lambda(\mathbf{i}_{1}\cdots\mathbf{i}_{\ell})\) corresponds to \(f(F_{1},\ldots,F_{\ell})\) for any \(F_{1},\ldots,F_{\ell}\in S\) and the corresponding inputs \(\mathbf{i}_{1},\ldots,\mathbf{i}_{\ell}\in\{0,1\}^{2+\beta(p+q)}\).
Later, when we construct programs for the floating-point operations, e.g. normalization, we will use a tool called **shifting**, which means moving each digit of a fraction one position to the left or right (e.g. shifting the fraction \(0.012\) once to the left leads to \(0.120\)).
#### Normalizing a floating-point number
We informally describe how raw floating-point numbers are normalized, i.e., converted into the normalized form described above.
**Lemma 3.5**.: _Let \(S=(p,q,\beta)\) be a floating-point system. Normalization of a raw floating-point number in \(S^{+}(p^{\prime},q^{\prime})\) to the floating-point system \(S\), where \(p^{\prime}=\mathcal{O}(p)\) and \(q^{\prime}=\mathcal{O}(q)\), can be simulated with a (halting) \(\mathrm{BNL}\)-program of size \(\mathcal{O}(r^{3}+r^{2}\beta^{2})\) and computation time \(\mathcal{O}(1)\), where \(r=\max\{p,q\}\)._
Proof.: The full formal explanations and details are in the appendix. Let \(S=(p,q,\beta)\) be a floating-point system. The normalization of a raw floating-point number \(f\times\beta^{e}\) (we do not write down the signs here) in system \(S^{+}(p^{\prime},q^{\prime})\) to the system \(S\), where \(p^{\prime}=\mathcal{O}(p)\) and \(q^{\prime}=\mathcal{O}(q)\) can be split into the following cases.
1. If \(f=0\), we only set \(e\) to the smallest possible value and the sign of the fraction to \(+\).
2. If \(|f|<1\), then we can calculate in a few steps how much we have to shift the fraction to the left (and decrease the exponent accordingly).
3. If \(|f|\geq 1\), we shift the fraction to the right by one (and increase the exponent by one) and, after that, round the number to match precision \(p\). The rounding might lead to a non-normalized floating-point number, but we only have to shift the number to the right again at most once (because after rounding, \(|f|\leq\beta-1\)).
The hard part is to keep the time complexity as low as possible. We do not go into the details here (full proofs are in the appendix), but the main idea is to apply parallel integer addition specified in Section 3.1.
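The cases above can be sketched on a toy representation as follows (ours; truncation stands in for proper rounding, `MIN_EXP` is an assumed system bound, and case 3 is omitted since this toy format keeps no digits left of the radix point):

```python
# A minimal sketch of normalization: fraction digits d_1...d_p' with value
# 0.d_1...d_p', exponent as a plain int.

MIN_EXP = -99  # assumed smallest exponent of the system, not from the paper

def normalize(frac_digits, exp, p):
    d = list(frac_digits)
    if all(x == 0 for x in d):          # case 1: zero fraction
        return [0] * p, MIN_EXP
    while d[0] == 0:                    # case 2: shift left, decrease exponent
        d = d[1:] + [0]
        exp -= 1
    return d[:p], exp                   # truncate to precision p

print(normalize([0, 0, 2, 5, 7], exp=3, p=4))  # ([2, 5, 7, 0], 1): 0.00257e3 = 0.2570e1
```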
#### Addition of floating-point numbers
In this section we show that we can simulate floating-point addition via BNL-programs, which can be done even in constant time.
**Lemma 3.6**.: _Addition of two (normalized) floating-point numbers in \(S=(p,q,\beta)\) can be simulated with a (halting) \(\mathrm{BNL}\)-program of size \(\mathcal{O}(r^{3}+r^{2}\beta^{2})\) and computation time \(\mathcal{O}(1)\), where \(r=\max\{p,q\}\)._
Proof.: (Sketch) We very roughly sketch how the addition of two normalized floating-point numbers is done; the full explanations are in the appendix. In the parts where we add or normalize numbers, we apply the results obtained in earlier sections. The addition is done in the following steps. **(1)** We compare which of the exponents is greater and store it. **(2)** We determine the difference \(d\) between the exponents. If \(d\) is greater than the length of the fractions, we are done and output the number with the greater exponent. If \(d\) is smaller than the length of the fractions, then we shift the fraction of the number with the smaller exponent to the right \(d\) times. We then perform integer addition on the fractions and store the result. **(3)** We obtain a number whose exponent was obtained in the first step and whose fraction was obtained in the second step. We normalize this number.
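The three steps translate directly into the following toy sketch (ours; signs are ignored for brevity and the final rounding is truncation):

```python
# A minimal sketch of floating-point addition on digit-list fractions
# (value 0.d_1...d_p) with integer exponents.

def fp_add(f1, e1, f2, e2, p, beta):
    if e1 < e2:
        (f1, e1), (f2, e2) = (f2, e2), (f1, e1)
    d = e1 - e2                        # step (1): e1 is the greater exponent
    if d >= p:                         # smaller operand vanishes entirely
        return f1, e1
    shifted = [0] * d + f2[:p - d]     # step (2): shift right d times
    total = [a + b for a, b in zip(f1, shifted)]
    num = sum(x * beta ** (p - 1 - i) for i, x in enumerate(total))
    digits = [(num // beta ** (p - i)) % beta for i in range(p + 1)]
    # digits[0] is the overflow digit left of the radix point
    if digits[0]:
        return digits[:p], e1 + 1      # step (3): shift right, renormalize
    return digits[1:], e1

# 0.500 x 10^1 + 0.500 x 10^0 = 0.550 x 10^1
print(fp_add([5, 0, 0], 1, [5, 0, 0], 0, p=3, beta=10))
```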
#### Multiplication of floating-point numbers
In this section we show how to simulate floating-point multiplication via BNL-programs. The multiplication takes logarithmic time, since the proof applies the integer multiplication results obtained in Lemma 3.3.
**Lemma 3.7**.: _Multiplication of two (normalized) floating-point numbers in \(S=(p,q,\beta)\) can be simulated with a (halting) BNL-program of size \(\mathcal{O}(r^{4}+r^{3}\beta^{2}+r\beta^{4})\) and computation time \(\mathcal{O}(\log(r)+\log(\beta))\), where \(r=\max\{p,q\}\)._
Proof.: (Sketch) We roughly sketch how the multiplication of two (normalized) floating-point numbers is done; the full explanations are in the appendix. Informally, we do the following.
1. We add the exponents together by using the parallel (integer) addition algorithm and store the result. If the result is less than the maximum exponent, we move to the next step. Otherwise, we are done and output the largest possible number, i.e. the number with the highest possible fraction and exponent in the system.
2. We multiply the fractions using the integer multiplication algorithm and store the product.
3. We obtain a number whose exponent was obtained in the first step and whose fraction was obtained in the second step. We normalize this number.
Applying the results of parallel (integer) addition, parallel (integer) multiplication, and normalization described in the previous sections, we obtain the wanted results.
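The same toy format also admits a sketch of the three multiplication steps (ours; signs are ignored, and saturation at the largest number is handled crudely):

```python
# A minimal sketch of floating-point multiplication on digit-list fractions.

def fp_mul(f1, e1, f2, e2, p, beta, max_exp=99):
    e = e1 + e2                                   # step (1): add exponents
    if e > max_exp:                               # saturate at the largest number
        return [beta - 1] * p, max_exp
    v1 = sum(d * beta ** (p - 1 - i) for i, d in enumerate(f1))
    v2 = sum(d * beta ** (p - 1 - i) for i, d in enumerate(f2))
    prod = v1 * v2                                # step (2): multiply fractions
    digits = [(prod // beta ** (2 * p - 1 - i)) % beta for i in range(2 * p)]
    while digits[0] == 0 and any(digits):         # step (3): normalize
        digits = digits[1:] + [0]
        e -= 1
    return digits[:p], e

# 0.20 x 10^1 * 0.30 x 10^1 = 0.60 x 10^1  (i.e. 2 * 3 = 6)
print(fp_mul([2, 0], 1, [3, 0], 1, p=2, beta=10))
```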
#### Floating-point polynomials and piecewise polynomial functions
Next we consider floating-point polynomials and activation functions that are piecewise polynomial. A **piecewise polynomial** function (with a single variable) is defined as separate polynomials over certain intervals of real numbers. For instance, the function "\(f(x)=x^{2}\) when \(x\geq 0\) and \(f(x)=-x\) when \(x<0\)" is piecewise polynomial; the intervals are the sets of non-negative and negative numbers and the attached polynomials are \(x^{2}\) and \(-x\). In a floating-point system, a piecewise polynomial function is an approximation, much like addition and multiplication. We perform approximations after each addition and multiplication; as a result, the calculations must be performed in some canonical order because the order of approximations will influence the result. By the number of pieces, we refer to the number of intervals that the piecewise polynomial function is defined over; our example above has 2 pieces. We obtain the following theorem.
**Theorem 3.8**.: _Assume we have a piecewise polynomial function \(\alpha\colon S\to S\), where each polynomial is of the form \(a_{n}x^{n}+\cdots+a_{1}x+a_{0}\) where \(n\in\mathbb{N}\), \(a_{i}\in S=(p,q,\beta)\) for each \(0\leq i\leq n\) and \(r=\max\{p,q\}\) (addition and multiplication approximated in \(S\)). Let \(\Omega\) be the highest order of the polynomials (or \(1\) if the highest order is \(0\)) and let \(P\in\mathbb{Z}_{+}\) be the number of pieces. We can construct a BNL-program \(\Lambda\) that simulates \(\alpha(x)\) such that_
1. _the size of_ \(\Lambda\) _is_ \(\mathcal{O}(P\Omega^{2}(r^{4}+r^{3}\beta^{2}+r\beta^{4}))\)_, and_
2. _the computation time of_ \(\Lambda\) _is_ \(\mathcal{O}((\log(\Omega)+1)(\log(r)+\log(\beta)))\)_._
Proof.: (Sketch) We only roughly sketch the proof. The full proof can be found in the appendix. We obtain BNL-programs that simulate these functions in polynomial space and polylogarithmic time. When calculating a floating-point polynomial \(a_{n}x^{n}+\cdots+a_{1}x+a_{0}\), the order of calculations is as follows: Multiplications are handled first. When carrying out the multiplication \(x_{1}\cdot x_{2}\cdot\ldots\cdot x_{k}\), we simultaneously calculate the products \(y_{1}=x_{1}\cdot x_{2}\), \(y_{2}=x_{3}\cdot x_{4}\), etc. (If \(k\) is an odd number, the multiplicand \(x_{k}\) has no pair. In this case we define \(y_{(k+1)/2}=x_{k}\).) Then, in similar fashion we calculate the products \(z_{1}=y_{1}\cdot y_{2}\), \(z_{2}=y_{3}\cdot y_{4}\), etc. We continue this until we have calculated the whole product. After multiplications, we handle the sums in identical fashion. We obtain the wanted results by simulating the additions and multiplications of each polynomial as described in Lemmas 3.6 and 3.7.
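The balanced pairing order used in this proof is a standard logarithmic-depth fold; a minimal sketch (ours) follows. With exact arithmetic the result equals the sequential fold; with floating-point approximation this fixed order is precisely what makes the result canonical.

```python
# Folding a list with a binary operation in ceil(log2(k)) parallel rounds.

def tree_fold(values, op):
    values = list(values)
    while len(values) > 1:
        paired = [op(values[i], values[i + 1])
                  for i in range(0, len(values) - 1, 2)]
        if len(values) % 2:              # odd element passes through unpaired
            paired.append(values[-1])
        values = paired
    return values[0]

xs = [2.0, 3.0, 4.0, 5.0, 6.0]
print(tree_fold(xs, lambda a, b: a * b))  # 720.0, computed in 3 rounds
print(tree_fold(xs, lambda a, b: a + b))  # 20.0
```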
## 4 Descriptive complexity for general neural networks
In this section, we establish connections between Boolean network logic and neural networks. Informally, we define a general neural network as a weighted directed graph (with any topology) operating on floating-point numbers in some system \(S\). Each node receives either a fixed initial value or an input as its first activation value. In each communication round a node sends its activation value to its neighbours and calculates a new activation value as follows. Each node multiplies the activation values of its neighbours with associated weights, adds them together with a node-specific bias and feeds the result into a node-specific activation function. Note that floating-point systems are bounded, and the input-space of a neural network is thus finite.
Next we define neural networks formally. A **(directed) graph** is a tuple \((V,E)\), where \(V\) is a finite set of **nodes** and \(E\subseteq V\times V\) is a set of **edges**. Note that we allow self-loops on graphs, i.e. edges \((v,v)\in E\). A **general neural network**\(\mathcal{N}\) (for floating-point system \(S\)) is defined as a tuple \((G,\mathfrak{a},\mathfrak{b},\mathfrak{w},\pi)\), where \(G=(V,E,<^{V})\) is a directed graph associated with a linear order \(<^{V}\) for nodes in \(V\). The network \(\mathcal{N}\) contains sets \(I,O\subseteq V\) of **input** and **output** nodes respectively, and a set \(H=V\setminus(I\cup O)\) of **hidden nodes**. The tuples \(\mathfrak{a}=(\alpha_{v})_{v\in V}\) and \(\mathfrak{b}=(b_{v})_{v\in V}\) are assignments of a piecewise polynomial **activation function**\(\alpha_{v}\colon S\to S\) and a **bias**\(b_{v}\in S\) for each node. Likewise, \(\mathfrak{w}=(w_{e})_{e\in E}\) is an assignment of a **weight**\(w_{e}\in S\) for each edge. The function \(\pi\colon(V\setminus I)\to S\) assigns an initial value to each non-input node.
The computation of a general neural network is defined with a given input function \(i\colon I\to S\). Similarly to BNL-programs, an input function \(i\) also induces a floating-point string \(\mathbf{i}\in S^{|I|}\), and respectively a floating-point string induces an input function. **The state of the network at time \(t\)** is a function \(g_{t}\colon V\to S\), which is defined recursively as follows. For \(t=0\), we have \(g_{0}(v)=i(v)\) for input nodes and \(g_{0}(v)=\pi(v)\) for non-input nodes. Now assume we have defined the state at time \(t\). The state at time \(t+1\) is defined as follows:
\[g_{t+1}(v)=\alpha_{v}\Big(b_{v}+\sum_{(u,v)\in E}\big(g_{t}(u)\cdot w_{(u,v)}\big)\Big).\]
More specifically, the sum is unfolded from left to right according to the order \(<^{V}\) of the nodes \(u\in V\). For each piece of an activation function, we assume a normal form \(a_{n}x^{n}+\cdots+a_{1}x+a_{0}\), which designates the order of operations. If we designate that \(u_{1},\ldots,u_{k}\) enumerate the set \(O\) of output nodes in the order \(<^{V}\), then the state of the system induces an output tuple \(o_{t}=(g_{t}(u_{1}),\ldots,g_{t}(u_{k}))\) at time \(t\) for all \(t\geq 0\).
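The update rule can be illustrated with a few lines of plain Python (ours; floating-point systems are ignored, and the tiny example network with a self-loop is made up):

```python
# A minimal sketch of the state update
# g_{t+1}(v) = alpha_v(b_v + sum_u g_t(u) * w_(u,v)).

def step(g, edges_in, weight, bias, act):
    # edges_in[v] lists the neighbours u with an edge (u, v), in the
    # linear order <^V that fixes the summation order.
    return {v: act[v](bias[v] + sum(g[u] * weight[(u, v)] for u in edges_in[v]))
            for v in g}

relu = lambda x: max(0.0, x)
g = {0: 1.0, 1: 0.0}                     # initial/input values
edges_in = {0: [], 1: [0, 1]}            # node 1 has a self-loop
weight = {(0, 1): 2.0, (1, 1): 0.5}
bias = {0: 0.0, 1: -1.0}
act = {0: relu, 1: relu}
for t in range(3):
    g = step(g, edges_in, weight, bias, act)
    print(t + 1, g)
```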
We once again define two frameworks for picking out important output rounds, one machine-internal and one machine-external framework. In the first framework, the set \(V\) contains a set \(A\) of attention nodes \(u\), each of which is associated with a _threshold_\(s\in S\). If the activation value of one of the attention nodes \(u\) exceeds its threshold in some round \(t\), i.e., \(g_{t}(u)\geq s\), then \(t\) is an output round. In the second framework, we instead have an attention function of type \(a\colon S^{|I|}\to\wp(\mathbb{N})\) that assigns a set of output rounds to each input of the network; in this framework, there are no attention nodes. Analogously to BNL-programs, a neural network with \(n\) nodes, input \(\mathbf{i}\in S^{|I|}\) and either attention bits (or respectively an attention function), induces an **output sequence** and **output rounds** w.r.t. \((n,A,O)\) (or, respectively, w.r.t. \((n,a(\mathbf{i}),O)\)).
We then define some parameters that will be important when describing how neural networks and BNL-programs are related in terms of space and time complexity. The in-degree of a node \(v\) is the number of nodes \(u\) such that there is an edge \((u,v)\in E\); we say that \(u\) is a **neighbour** of \(v\). Note that we allow reflexive loops so a node might be its own "neighbour". The **degree** of a general neural network \(\mathcal{N}\) is the maximum in-degree of the underlying graph. The **piece-size** of \(\mathcal{N}\) is the maximum number of "pieces" across all its piecewise polynomial activation functions. The **order** of \(\mathcal{N}\) is the highest order of a "piece" of its piecewise polynomial activation functions.
A general neural network can easily emulate typical _feedforward neural networks_. This requires that the graph of the general neural network is connected and acyclic, the sets \(I\), \(O\) and \(H\) are chosen correctly and the graph topology is as required, with all paths from an input node to an output node being of the same length. Unlike in a classical feedforward neural network, the hidden and output nodes of a general neural network have an initial value, but these values are erased as the calculations flow through the network; the inputs are erased in the same way. Both are inconsequential, essentially syntactic phenomena. Finally, there is a round \(t\) where the general neural network outputs the same values as a corresponding feedforward network would.
In general, our neural network models are _recurrent_ in the sense that they allow loops. They are _one-to-many_ networks, in other words, they can map each input to a sequence of outputs unlike feedforward neural networks which always map each input to a single output.
In order to translate neural networks to BNL-programs and vice versa, we define time series problems for both floating-point numbers and binary numbers, and two types of corresponding equivalence relations. The reason for this is obvious, as BNL-programs operate with binary numbers and neural networks with floating-point numbers. Informally, in the definitions below, asynchronous equivalence means that the modeled time series can be repeated but with a delay between output rounds. The time delays in our results are not arbitrary but rather modest. Moreover, we do not fix the attention mechanism for the programs or neural networks, and our definitions work in both cases.
First we define notions for floating-points. Let \(k,\ell\in\mathbb{N}\), \(P\subseteq[k]\) and let \(S=(p,q,\beta)\) be a floating-point system. We let \(\mathcal{F}(k,P,S)\) denote the family of sequences \(F=(\mathbf{f}_{n})_{n\in\mathbb{N}}\) of \(k\)-strings \(\mathbf{f}_{n}\in S^{k}\) of numbers in \(S\) with print position set \(P\). A **(floating-point) time series problem \(\mathfrak{P}\) for \((\ell,k,P)\) in \(S\)** is a function \(\mathfrak{P}\colon S^{\ell}\to\mathcal{F}(k,P,S)\times\wp(\mathbb{N})\). With a given input \((F_{1},\ldots,F_{\ell})\in S^{\ell}\), \(\mathfrak{P}\) gives a sequence \((\mathbf{f}_{n})_{n\in\mathbb{N}}\in\mathcal{F}(k,P,S)\) and a subset \(O\subseteq\mathbb{N}\) and therefore \(\mathfrak{P}\) induces the output sequences of \((\mathbf{f}_{n})_{n\in\mathbb{N}}\) w.r.t. \((k,O,P)\). Let \(\Lambda\) be a BNL-program with \((\beta(p+q)+2)|P|\) print predicates and \((\beta(p+q)+2)\ell\) input predicates. We say that \(\Lambda\)**simulates a solution** for time series problem \(\mathfrak{P}\) if for every input \(\mathbf{i}\in\{0,1\}^{(\beta(p+q)+2)\ell}\) corresponding to \((F_{1},\ldots,F_{\ell})\in S^{\ell}\), the output sequence of \(\Lambda\) with input \(\mathbf{i}\) corresponds to the output sequence induced by \(\mathfrak{P}(F_{1},\ldots,F_{\ell})\), i.e., the output
strings of \(\Lambda\) correspond to the output strings of \(\mathfrak{P}\). A neural network \(\mathcal{N}\) with \(\ell\) input nodes and \(|P|\) print nodes **solves**\(\mathfrak{P}\) if the output sequence of \(\mathcal{N}\) with input \((F_{1},\dots,F_{\ell})\) is the output sequence of \(\mathfrak{P}(F_{1},\dots,F_{\ell})\). We say that a BNL-program \(\Lambda\) and a neural network \(\mathcal{N}\) (for \(S\)) are **asynchronously equivalent in \(S\)** if the time series problems in \(S\) simulated by \(\Lambda\) are exactly the ones solved by \(\mathcal{N}\).
We define notions for binaries in similar fashion. Recall that \(k,\ell\in\mathbb{N}\), \(P\subseteq[k]\). Similarly, let \(\mathcal{S}(k,P)\) denote the family of sequences \(B=(\mathbf{b}_{n})_{n\in\mathbb{N}}\) of \(k\)-bit strings with print bit set \(P\). A **(binary) time series problem \(\mathfrak{P}\) for \((\ell,k,P)\)** is a function \(\mathfrak{P}\colon\{0,1\}^{\ell}\to\mathcal{S}(k,P)\times\wp(\mathbb{N})\) that assigns a \(k\)-bit string sequence and a set \(O\in\wp(\mathbb{N})\) of output rounds to every input \(\mathbf{i}\in\{0,1\}^{\ell}\); together they induce an output sequence w.r.t. \((k,O,P)\). We say that a BNL-program (or neural network) \(x\) with \(\ell\) input predicates (resp., input nodes) and \(|P|\) print predicates (resp., print nodes) **solves**\(\mathfrak{P}\) if the output sequence of \(x\) with any input \(\mathbf{i}\in\{0,1\}^{\ell}\) is the output sequence of \(\mathfrak{P}(\mathbf{i})\). Note that a neural network actually handles \(0\) and \(1\) in floating-point representation. We say that a BNL-program and a general neural network are **asynchronously equivalent in binary** if they solve the same binary time series problems.
We define the delay between two asynchronously equivalent objects \(x\) and \(y\). Let \(x_{1},x_{2},\dots\) and \(y_{1},y_{2},\dots\) enumerate their (possibly infinite) sets of output rounds in ascending order. Assume that the cardinality of the sets of output rounds is the same and \(x_{n}\geq y_{n}\) for every \(n\in\mathbb{N}\). If \(T\) is the smallest number of time steps such that \(T\cdot y_{n}\geq x_{n}\) for every \(n\in\mathbb{N}\), then we say that the **computation delay** of \(x\) is \(T\). The case for \(y_{n}\geq x_{n}\) is analogous.
### From NN to BNL
We provide a translation from general neural networks to Boolean network logic. The proof is based on the results obtained for floating-point arithmetic in the previous section.
**Theorem 4.1**.: _Given a general neural network \(\mathcal{N}\) for \(S=(p,q,\beta)\) with \(N\) nodes, degree \(\Delta\), piece-size \(P\) and order \(\Omega\) (or \(1\) if the highest order is \(0\)), we can construct a BNL-program \(\Lambda\) such that \(\mathcal{N}\) and \(\Lambda\) are asynchronously equivalent in \(S\) where for \(r=\max\{p,q\}\),_
1. _the size of_ \(\Lambda\) _is_ \(\mathcal{O}(N(\Delta+P\Omega^{2})(r^{4}+r^{3}\beta^{2}+r\beta^{4}))\)_, and_
2. _the computation delay of_ \(\Lambda\) _is_ \(\mathcal{O}((\log(\Omega)+1)(\log(r)+\log(\beta))+\log(\Delta))\)_._
Proof.: First we consider the framework where output rounds are defined by attention nodes and attention predicates. We consider the setting where output rounds are fixed as a corollary.
We use separate head predicates \(S_{u,e}\), \(S_{u,f}\), \(E_{u,i,b}\), and \(F_{u,j,b}\) (\(i\in[q]\), \(j\in[p]\), \(b\in[0;\beta-1]\)) for each node \(u\) of \(\mathcal{N}\). Together, they encode the **1)** exponent sign, **2)** fraction sign, **3)** exponent and **4)** fraction of the activation values of \(u\) in one-hot representation as described in Section 3.2. These calculations are done using the arithmetic algorithms from the same section. The program cannot calculate a new activation value in one step like a neural network does, as each arithmetic operation takes some time to compute. The input of a single node is a floating-point number with \(q\) digits for the exponent, \(p\) digits for the fraction, and a sign for both. Its one-hot representation therefore has \((p+q)\beta+2\) bits; exactly the number of head predicates assigned for each node. Each of these head predicates receives a corresponding bit as input. For instance, if the input floating-point number of \(u\) is \(-0.314\times 10^{+01}\), then the head predicates \(S_{u,e}\), \(E_{u,1,0}\), \(E_{u,2,1}\), \(F_{u,1,3}\), \(F_{u,2,1}\) and \(F_{u,3,4}\) get the input \(1\) while all the other head predicates for \(u\) get the input \(0\).
After receiving these inputs, the rest of the program is built by applying the programs for floating-point addition and multiplication constructed in Section 3.2 to the aggregations and activation functions of each node in the established canonical order of operations. The calculations are timed with a one-hot counter, i.e., predicates \(T_{0},\ldots,T_{n}\) as described in Section 2.2. Here \(n\) is the worst-case number of rounds required for the algorithms to calculate an activation value for a node in the network (based on the number of neighbours, as well as the order and number of pieces of the activation function). The predicates in this counter are used to stall the head predicates for each node such that they receive the bits corresponding to the new activation values at the same time (this includes the output predicates, which are all the predicates corresponding to output nodes). The attention nodes have additional predicates that correspond to the threshold values; during rounds where the activation values have been calculated, an attention predicate turns true if this value is exceeded.
We compute additions and multiplications for each node in the network; this can be done simultaneously for each node. Each node requires at most \(\Delta\) multiplications and additions in the aggregation before the use of the activation function. Multiplications can be done simultaneously and sums in parallel as described in Section 3. These steps require size \(\mathcal{O}(N\Delta(r^{4}+r^{3}\beta^{2}+r\beta^{4}))\) (each of the \(N\) nodes performs \(\mathcal{O}(\Delta)\) multiplications/additions; the size of the multiplication is \(\mathcal{O}(r^{4}+r^{3}\beta^{2}+r\beta^{4})\), which dwarfs the addition size \(\mathcal{O}(r^{3}+r^{2}\beta^{2})\)) and the overall time required is \(\mathcal{O}(\log(r)+\log(\beta))+\mathcal{O}(\log(\Delta))\) (multiplication + addition).
After the aggregation come the activation functions. Since they are piecewise polynomial, we may apply Theorem 3.8, using the piece-size and order of the network. If \(\Omega=0\) we are done, so assume that \(\Omega\in\mathbb{Z}_{+}\). Each of the \(N\) nodes calculates at most \(P\) polynomial pieces of order at most \(\Omega\), which gives us a size of \(\mathcal{O}(NP\Omega^{2}(r^{4}+r^{3}\beta^{2}+r\beta^{4}))\). This requires only \(\mathcal{O}((\log(\Omega)+1)(\log(r)+\log(\beta)))\) time. The same predicates are used for the calculation of each subsequent global configuration of the network. Timing the calculations does not increase the size and time complexity. Adding the sizes and times together, the size of the program is \(\mathcal{O}(N(\Delta+P\Omega^{2})(r^{4}+r^{3}\beta^{2}+r\beta^{4}))\) and computing each global configuration of \(\mathcal{N}\) requires time \(\mathcal{O}(\log(r)+\log(\beta))+\mathcal{O}(\log(\Delta))+\mathcal{O}((\log( \Omega)+1)(\log(r)+\log(\beta)))=\mathcal{O}((\log(\Omega)+1)(\log(r)+\log( \beta))+\log(\Delta))\); the first \(\mathcal{O}(\log(r)+\log(\beta))\) is not dwarfed if \(\Omega=1\).
The case for the second framework, where output rounds are given from the outside, is obtained as a corollary. We simply take the worst case time complexity for calculating a new activation value with the aggregations and piecewise polynomial functions in BNL; let's say the worst case is \(T\) rounds. If \(R\) is the set of output rounds of \(\mathcal{N}\), the output rounds of \(\Lambda\) are simply \(TR\). In other words, we spend \(T\) rounds in \(\Lambda\) to simulate a single round of \(\mathcal{N}\).
### From BNL to NN
Before the formal translation from BNL-programs to general neural networks, we introduce two typical piecewise polynomial activation functions with just two pieces and order at most \(1\). These are the well-known rectified linear unit and the Heaviside step function. Recall that an activation function is a function \(S\to S\), where \(S\) is a floating-point system. The **rectified linear unit** ReLU is defined by \(\operatorname{ReLU}(x)=\max\{0,x\}\) and the **Heaviside step function**\(H\) is defined by \(H(x)=1\) if \(x>0\), and \(H(x)=0\), otherwise. We obtain the following Theorem, where for a given BNL-program we construct a general neural network model that uses ReLU (or Heaviside) activation functions. It is easy to generalize our results for other activation functions, which we plan to do in the full version of this paper.
**Theorem 4.2**.: _Given a BNL-program \(\Lambda\) of size \(s\) and depth \(d\), we can construct a general neural network \(\mathcal{N}\) for any floating-point system \(S\) with at most \(s\) nodes, degree at most \(2\), ReLU (or Heaviside) activation functions and computation delay \(\mathcal{O}(d)\) (or \(\mathcal{O}(s)\) since \(s>d\)) such that \(\Lambda\) and \(\mathcal{N}\) are asynchronously equivalent in binary._
Proof.: (Sketch) The full proof can be found in the appendix. The aggregation each node performs on the activation values of its neighbours weakens neural networks in the sense that much of the information related to specific neighbours is lost. Due to this, a single node of a neural network cannot imitate an arbitrary iteration clause where each predicate has a precise role. Instead, the program \(\Lambda\) is first turned into an asynchronously equivalent "fully-open" program \(\Lambda^{\prime}\) that is described in the appendix. Informally, this means that each body of the iteration clauses of \(\Lambda^{\prime}\) includes at most one logical connective. This is turned into a neural network by creating a node for each predicate of \(\Lambda^{\prime}\). The network only uses the floating-point numbers \(-1,0,1,2\), and the iteration clauses can all be calculated with ReLU or Heaviside by choosing the weights and biases appropriately.
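To see how single ReLU nodes over activation values in \(\{0,1\}\) can realize the connectives of a "fully-open" program, consider the following sketch (our own construction for illustration; the appendix construction may differ in detail):

```python
# Boolean connectives from ReLU nodes with suitable weights and biases.

relu = lambda x: max(0.0, x)

def NOT(x):    return relu(-1.0 * x + 1.0)     # weight -1, bias 1
def AND(x, y): return relu(x + y - 1.0)        # weights 1, bias -1
def OR(x, y):  return 1.0 - relu(1.0 - x - y)  # the outer affine wrap can be
                                               # folded into the consuming node

for a in (0.0, 1.0):
    for b in (0.0, 1.0):
        print(int(a), int(b), NOT(a), AND(a, b), OR(a, b))
```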
## 5 Conclusion
We have shown a strong equivalence between a general class of one-to-many neural networks and Boolean network logic in terms of discrete time series. The translations are simple in both directions, with reasonable time and size blow-ups. We receive similar results for the logic SC due to Theorem 2.1. Interesting future directions involve investigating extensions with randomization as well as studying the effects of using alternatives to floating-point numbers, such as, for example, fixed-point arithmetic.
|
2303.12164 | Viscoelastic Constitutive Artificial Neural Networks (vCANNs) $-$ a
framework for data-driven anisotropic nonlinear finite viscoelasticity | The constitutive behavior of polymeric materials is often modeled by finite
linear viscoelastic (FLV) or quasi-linear viscoelastic (QLV) models. These
popular models are simplifications that typically cannot accurately capture the
nonlinear viscoelastic behavior of materials. For example, the success of
attempts to capture strain rate-dependent behavior has been limited so far. To
overcome this problem, we introduce viscoelastic Constitutive Artificial Neural
Networks (vCANNs), a novel physics-informed machine learning framework for
anisotropic nonlinear viscoelasticity at finite strains. vCANNs rely on the
concept of generalized Maxwell models enhanced with nonlinear strain
(rate)-dependent properties represented by neural networks. The flexibility of
vCANNs enables them to automatically identify accurate and sparse constitutive
models of a broad range of materials. To test vCANNs, we trained them on
stress-strain data from Polyvinyl Butyral, the electro-active polymers VHB 4910
and 4905, and a biological tissue, the rectus abdominis muscle. Different
loading conditions were considered, including relaxation tests, cyclic
tension-compression tests, and blast loads. We demonstrate that vCANNs can
learn to capture the behavior of all these materials accurately and
computationally efficiently without human guidance. | Kian P. Abdolazizi, Kevin Linka, Christian J. Cyron | 2023-03-21T19:45:59Z | http://arxiv.org/abs/2303.12164v1 | Viscoelastic Constitutive Artificial Neural Networks (vCANNs) - a framework for data-driven anisotropic nonlinear finite viscoelasticity
###### Abstract
The constitutive behavior of polymeric materials is often modeled by finite linear viscoelastic (FLV) or quasilinear viscoelastic (QLV) models. These popular models are simplifications that typically cannot accurately capture the nonlinear viscoelastic behavior of materials. For example, the success of attempts to capture strain rate-dependent behavior has been limited so far. To overcome this problem, we introduce viscoelastic Constitutive Artificial Neural Networks (vCANNs), a novel physics-informed machine learning framework for anisotropic nonlinear viscoelasticity at finite strains. vCANNs rely on the concept of generalized Maxwell models enhanced with nonlinear strain (rate)-dependent properties represented by neural networks. The flexibility of vCANNs enables them to automatically identify accurate and sparse constitutive models of a broad range of materials. To test vCANNs, we trained them on stress-strain data from Polyvinyl Butyral, the electro-active polymers VHB 4910 and 4905, and a biological tissue, the rectus abdominis muscle. Different loading conditions were considered, including relaxation tests, cyclic tension-compression tests, and blast loads. We demonstrate that vCANNs can learn to capture the behavior of all these materials accurately and computationally efficiently without human guidance.
keywords: Nonlinear viscoelasticity, Deep learning, Data-driven mechanics, Physics-informed machine learning, Constitutive modeling, Soft materials
## 1 Introduction
Many important materials, such as elastomers or soft biological tissues, undergo large deformations and exhibit nonlinear viscoelastic behavior. Biological tissues additionally typically exhibit a pronounced anisotropy. Numerous experiments have confirmed that elastomers [1; 2; 3] and similarly ligaments and tendons exhibit nonlinear viscoelasticity [4; 5; 6; 7; 8; 9; 10; 11]. In the past, many constitutive models have been proposed to characterize these materials. However, selecting an appropriate model and identifying its material parameters requires expert knowledge. Further, when selecting a model, one usually has to make a compromise between its computational efficiency and its ability to capture nonlinear viscoelasticity adequately. Therefore, this contribution aims to develop a data-driven framework that automatically discovers constitutive models for anisotropic nonlinear viscoelasticity at finite strains. Ideally, the framework is simple and numerically efficient but, at the same time, highly versatile, describing a wide range of materials. As a starting point for our development, we review existing modeling approaches and identify their advantages and limitations.
Broadly, existing approaches to describe viscoelasticity can be categorized into hereditary integral and internal variables models. In hereditary integral models, the viscoelastic stress response is calculated by the convolution of the deformation history and an appropriate kernel function [12; 13; 14]. A potential difficulty of these models is that, in particular for multiple integral models, the experimental determination of the kernel functions can be cumbersome [15; 16] and very sensitive to noise [17]. Also, the numerical implementation is often challenging since one must account - in general - for the whole deformation history. Therefore, hereditary models have mainly been applied to simple one-dimensional problems and have had only a limited impact on finite element (FE) analysis. An important exception is the theory of quasi-linear viscoelasticity (QLV) [18]. In QLV, the integrand of the hereditary integral is the product of a time-dependent reduced relaxation function and
the rate of the instantaneous elastic stress, usually derived from a hyperelastic strain energy function. Often, the reduced relaxation function is represented by a Prony series [19]. Due to its computational efficiency and a large number of candidate functions for the reduced relaxation function and the instantaneous elastic stress, this approach has been used frequently [4, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30]. Due to the linear relationship between the reduced relaxation function and the instantaneous elastic stress within the hereditary integral, QLV falls short of representing fully general nonlinear viscoelastic behavior. In fact, normalized relaxation curves predicted by QLV have the same shape, independent of the strain, which contradicts experimental observations.
Within the group of internal variable models, two model families have been particularly successful. The first family follows [31] and assumes an additive split of the stress into an equilibrium part and \(n\) non-equilibrium overstresses, in analogy to the generalized Maxwell model. The number of overstresses is arbitrary and can be independently selected for isotropic and anisotropic contributions to the overall material behavior. Linear ordinary differential equations (ODEs) with constant coefficients govern the evolution of the overstresses, serving as internal variables. Closed-form solutions of the evolution equations by convolution integrals result in efficient time integration algorithms [32]. Therefore, these models appear in many commercial FE codes [33]. Due to the linear evolution equations, models of this family are theoretically restricted to finite linear viscoelasticity (FLV), i.e., finite strains but small perturbations away from the thermodynamic equilibrium. Although originating from different theories, FLV and QLV are similar [34, 35]. Thus, FLV suffers from the same limitations as QLV, failing to represent general fully nonlinear viscoelastic behavior such as strain (rate)-dependent viscous properties. [36] and [37] attempted to account for nonlinear viscoelastic effects by choosing strain-dependent coefficients of the evolution equation. These attempts led to improved but still not yet fully satisfactory results. FLV models have, for example, been employed in [37, 38, 39, 40, 41, 42]. The second model family describes finite nonlinear viscoelasticity (FNLV), i.e., finite strains and finite perturbations away from the thermodynamic equilibrium [43]. Motivated by the decomposition of the strain into an elastic and viscous part in the theory of linear viscoelasticity, FNLV models are based on the multiplicative decomposition of the deformation gradient into an elastic and viscous part proposed by [44]. In general, nonlinear ODEs govern the evolution of the viscous part of the deformation gradient, serving as an internal variable. In analogy to the generalized Maxwell model, multiple decompositions of the deformation gradient are possible, each associated with the non-equilibrium stress of a Maxwell element [43] and an associated internal variable (representing a viscous part of the deformation gradient). Applications are documented for rubber [1, 2, 33, 45, 46, 47] and soft biological tissues [48, 49, 50, 51, 52, 53, 54]. The drawbacks of FNLV models are the computational cost, especially for large-scale simulations, and their limited availability in widely used commercial FE software.
The above models have been developed by specialists, and also the selection and calibration of these models for a specific material typically require some expert knowledge. Data-driven modeling approaches such as machine learning circumvent these problems by providing a flexible computational framework that directly infers constitutive relations from data rather than specifying them a priori [55, 56, 57, 58, 59, 60]. In deep learning, modeling the time-history effects of viscoelasticity requires a neural network for temporal signal processing. Therefore, recurrent neural networks (RNNs) and similar architectures, usually employed for speech recognition or time series prediction, have been used intensively. [61] used an Elman network to model materials with fading memory based on fractional differential equations under cyclic loading conditions. Instead of a material model, [62] applied RNNs to fuzzy data to describe time-dependent material behavior within the finite element method. The inelastic material behavior of rubber-like materials was modeled with RNNs and used in FE simulations by [63]. [64] modeled small-strain viscoelasticity using long short-term memory (LSTM). RNNs compute a material's stress state based on a time window comprising the strain states of \(n\) previous time steps. For FE simulations, this entails a significant increase in computational cost, as the strain states of the previous \(n\) time steps have to be stored for each quadrature point. In [64], LSTMs reacted sensitively to time step sizes and loading cases, deviating from those used during training. Moreover, the width of the time window directly affects the fading memory property [65] of the material and is difficult to determine. The thermo-viscoelastic constitutive behavior of polypropylene was modeled by [66] using a mechanistic/data-driven hybrid approach in which a neural network represented the viscous part of their rheological model. [67] employed full-field strain measurements to calibrate the material parameters of a generalized Maxwell model for isotropic linear viscoelasticity in the small-strain regime. [68] modeled the time-dependent behavior of human brain tissue by QLV. Therein, neural networks learned the constant relaxation coefficients and times of the reduced relaxation function. A purely data-driven approach to constitutive modeling, which does not require any explicit constitutive models but builds on material data only, was introduced by [69] and recently extended to inelastic materials [70, 71]. However, this approach typically requires a large database to describe the material's mechanical behavior [72].
To overcome the limitations of the above-delineated approaches, at least in part, herein we propose viscoelastic Constitutive Artificial Neural Networks (vCANNs), a novel physics-informed machine learning framework for anisotropic nonlinear viscoelasticity at large strains. vCANNs are based on a generalized Maxwell model, enhanced with nonlinear strain (rate)-dependent relaxation coefficients and relaxation times represented by neural networks. We show that the data-driven nature of vCANNs enables them to identify accurate anisotropic nonlinear viscoelastic constitutive models automatically. The number of Maxwell elements adapts automatically during the training, promoting a sparse model through \(L_{1}\) regularization. Adopting the computationally very efficient framework of QLV and FLV, we leverage these well-established theories to model anisotropic nonlinear viscoelasticity. The achieved degree of accuracy is unmatched by similar traditional approaches that have been proposed. We trained vCANNs on stress-strain data from Polyvinyl Butyral, the electro-active polymers VHB 4910 and 4905, and the rectus abdominis muscle. Different loading conditions were considered, including relaxation tests, cyclic tension-compression tests, and blast loads. In all these cases, vCANNs were found to be able to learn the behavior of the materials within minutes, without human guidance, and with high accuracy.
## 2 Theory
In this section, we derive the theoretical framework of vCANNs. They can be considered an extension of CANNs, which were introduced in [72], to anisotropic nonlinear viscoelasticity. Therefore, we initially provide a brief review of CANNs. The section's central part will be devoted to the viscoelastic enhancement of CANNs.
### Describing anisotropic hyperelasticity with generalized structural tensors
Many materials of interest exhibit direction-dependent, i.e., anisotropic, mechanical properties. CANNs provide a physics-informed machine learning framework for anisotropic hyperelasticity to describe these materials in a very general way. To this end, CANNs employ invariant theory and the concept of generalized structural tensors [73]. A material is called hyperelastic if a strain energy function \(\Psi=\Psi(\mathbf{F})\), depending only on the deformation gradient \(\mathbf{F}\), can describe the material's mechanical behavior [74]. To fulfill the principle of objectivity [75], one usually represents \(\Psi\) in terms of the right Cauchy-Green tensor \(\mathbf{C}=\mathbf{F}^{\mathrm{T}}\mathbf{F}\), i.e., \(\Psi=\Psi(\mathbf{C})\). For an incompressible hyperelastic material (\(\det\mathbf{C}=1\)), the instantaneous elastic second Piola-Kirchhoff stress tensor is given by
\[\mathbf{S}^{e}=-p\mathbf{C}^{-1}+2\frac{\partial\Psi}{\partial\mathbf{C}}, \tag{1}\]
where \(p\) is a Lagrange multiplier ensuring incompressibility. The superscript \((\cdot)^{e}\) explicitly distinguishes the instantaneous elastic stress from the viscoelastic stress derived in the next paragraph. In Eq. (1), the first and second terms represent the volumetric and isochoric stress contributions, respectively. The treatment of compressible and nearly incompressible materials is equally possible with our proposed framework of anisotropic nonlinear viscoelasticity. For brevity, and because finite-strain viscoelasticity plays a particularly prominent role in materials often modeled as (nearly) incompressible (such as rubber materials or biological tissues), we limit the discussion in the following to incompressible materials.
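As a simple numerical illustration of Eq. (1), the following numpy sketch (ours; a neo-Hookean strain energy \(\Psi=c_{1}(\operatorname{tr}\mathbf{C}-3)\) stands in for the CANN, and \(c_{1}\) is a made-up parameter) evaluates the stress for incompressible uniaxial tension, with \(p\) fixed by the zero-lateral-stress condition:

```python
import numpy as np

def second_pk_stress_uniaxial(lam, c1=1.0):
    F = np.diag([lam, lam ** -0.5, lam ** -0.5])  # det F = 1 (isochoric)
    C = F.T @ F
    dPsi_dC = c1 * np.eye(3)                      # Psi = c1 (tr C - 3)
    Cinv = np.linalg.inv(C)
    p = 2.0 * c1 * C[1, 1]                        # enforces S_22 = S_33 = 0
    return -p * Cinv + 2.0 * dPsi_dC              # Eq. (1)

# S_11 = 2 c1 (1 - lam^-3), the classical neo-Hookean result:
print(second_pk_stress_uniaxial(1.5).round(4))
```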
To describe the mechanical behavior of an anisotropic material, one can define several so-called preferred directions represented by unit direction vectors \(\mathbf{l}_{j}\in\mathbb{R}^{3}\), \(j=1,2,\ldots,J\), and define the following \(J+1\) structural tensors
\[\mathbf{L}_{0}=\frac{1}{3}\mathbf{I},\quad\mathbf{L}_{j}=\boldsymbol{l}_{j}\otimes\boldsymbol{l}_{j},\quad\|\boldsymbol{l}_{j}\|=1,\quad j=1,2,\ldots,J. \tag{2}\]
Here, \(\mathbf{I}\) denotes the second-order identity tensor, and \(\mathbf{L}_{0}\) is used to describe the isotropic part of the material's constitutive behavior. The preferred directions \(\boldsymbol{l}_{j}\) can often be interpreted as directions of fiber families embedded in the material. It can be shown that to preserve the material symmetry, the strain energy function \(\Psi\) has to be an isotropic function of the quantities \(\mathbf{C}\) and \(\mathbf{L}_{j}\), \(j=1,2,\ldots,J\)[76]. Moreover, this is the case [77; 78] if the strain energy function depends only on the following invariants:
\[\mathrm{tr}\,\mathbf{C},\quad\mathrm{tr}\,\mathbf{C}^{2},\quad\mathrm{tr}\, \mathbf{C}^{3},\quad\mathrm{tr}\left(\mathbf{CL}_{j}\right),\quad\mathrm{tr} \left(\mathbf{C}^{2}\mathbf{L}_{j}\right),\quad j=1,2,\ldots,J, \tag{3}\]
\[\mathrm{tr}\left(\mathbf{CL}_{i}\mathbf{L}_{j}\right),\quad\mathrm{tr}\left( \mathbf{L}_{i}\mathbf{L}_{j}\right),\quad\mathrm{tr}\left(\mathbf{L}_{i} \mathbf{L}_{j}\mathbf{L}_{k}\right),\quad 1\leq i<j<k\leq J. \tag{4}\]
The latter two types of invariants in Eq. (4) are constant and can therefore be omitted from the arguments of \(\Psi\). For practical applications, the influence of the first invariant type in Eq. (4) is usually negligible. Therefore,
\(\Psi\) can commonly be expressed as
\[\Psi=\Psi\left(\operatorname{tr}\mathbf{C},\operatorname{tr}\mathbf{C}^{2},\operatorname{tr}\mathbf{C}^{3},\operatorname{tr}\left(\mathbf{CL}_{1}\right),\operatorname{tr}\left(\mathbf{C}^{2}\mathbf{L}_{1}\right),\ldots,\operatorname{tr}\left(\mathbf{CL}_{J}\right),\operatorname{tr}\left(\mathbf{C}^{2}\mathbf{L}_{J}\right)\right). \tag{5}\]
With the \(2R+1\) generalized invariants
\[\tilde{I}_{r}=\operatorname{tr}\left(\mathbf{C}\tilde{\mathbf{L}}_{r}\right), \hskip 14.226378pt\tilde{J}_{r}=\operatorname{tr}\left((\det\mathbf{C}) \mathbf{C}^{-\mathrm{T}}\tilde{\mathbf{L}}_{r}\right)=\operatorname{tr}\left( (\operatorname{cof}\mathbf{C})\tilde{\mathbf{L}}_{r}\right),\hskip 14.226378pt \text{III}_{\mathbf{C}}=\det\mathbf{C},\hskip 14.226378ptr=1,2,\ldots,R, \tag{6}\]
relying on the \(R\) generalized structural tensors
\[\tilde{\mathbf{L}}_{r}=\sum_{j=0}^{J_{r}}w_{rj}\mathbf{L}_{rj},\quad r=1,2,\ldots,R, \tag{7}\]
where
\[\mathbf{L}_{r0}=\mathbf{L}_{0},\quad\sum_{j=0}^{J_{r}}w_{rj}=1,\quad w_{rj}\geq 0,\quad r=1,2,\ldots,R, \tag{8}\]
and employing the short-hand notation
\[\tilde{\mathcal{I}}=\left\{\tilde{I}_{1},\tilde{J}_{1},\ldots,\tilde{I}_{R}, \tilde{J}_{R},\text{III}_{\mathbf{C}}\right\}, \tag{9}\]
we can alternatively express Eq. (5), according to [73], in the form
\[\Psi=\Psi\left(\tilde{\mathcal{I}}\right). \tag{10}\]
The generalized structural tensors represent linear combinations of the standard structural tensors \(\mathbf{L}_{rj}=\boldsymbol{l}_{rj}\otimes\boldsymbol{l}_{rj}\) introduced in Eq. (2). We use a double index \(rj\) to emphasize that, in principle, each generalized structural tensor \(\tilde{\mathbf{L}}_{r}\) can rely on a different subset of \(J_{r}\) preferred material directions \(\boldsymbol{l}_{rj},j=1,\ldots,J_{r}\).
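The construction of the generalized structural tensors (Eq. (7)) and the generalized invariants (Eq. (6)) is illustrated by the following numpy sketch (ours; the direction, weights, and stretch are made-up values chosen so that \(\det\mathbf{C}=1\)):

```python
import numpy as np

def generalized_structural_tensor(directions, weights):
    # weights[0] belongs to L_0 = I/3; the rest to l_j (x) l_j.
    # The weights must be non-negative and sum to one (Eq. (8)).
    L = weights[0] * np.eye(3) / 3.0
    for w, l in zip(weights[1:], directions):
        l = l / np.linalg.norm(l)
        L += w * np.outer(l, l)
    return L

def generalized_invariants(C, L):
    I_r = np.trace(C @ L)                                      # tr(C L_r)
    J_r = np.trace(np.linalg.det(C) * np.linalg.inv(C).T @ L)  # tr(cof(C) L_r)
    return I_r, J_r

F = np.diag([1.2, 1.2 ** -0.5, 1.2 ** -0.5])  # isochoric uniaxial stretch
C = F.T @ F
L1 = generalized_structural_tensor([np.array([1.0, 0.0, 0.0])], [0.4, 0.6])
print(generalized_invariants(C, L1), np.linalg.det(C))         # det C = 1
```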
In order to describe not only the stress-strain behavior of a material but also the dependence of this behavior on certain in general non-mechanical parameters, it is convenient to augment the arguments of \(\Psi\) with a feature vector \(\mathbf{f}=\left[f_{1},f_{2},\cdots,f_{N_{f}}\right]^{\mathrm{T}}\), where \(N_{f}\) denotes the number of features. For example, \(\mathbf{f}\) could carry information on the material's microstructure or production process. Thus,
\[\Psi=\Psi\left(\tilde{\mathcal{I}},\mathbf{f}\right). \tag{11}\]
Apart from material symmetry and the principle of objectivity, the strain energy function has to fulfill several other conditions. The strain energy must always be positive, \(\Psi\geq 0\). Also, the strain energy is required to approach infinity if the material is shrunk to zero or expanded to infinite volume, i.e., \(\Psi\to\infty\) for \(\det\mathbf{C}\to\infty\) or \(\det\mathbf{C}\to 0^{+}\), which is called the growth condition. If a stress-free reference configuration is assumed, the strain energy function and the stress have to fulfill the normalization condition: \(\Psi(\mathbf{C}=\mathbf{I})=0\) and \(\mathbf{S}^{e}(\mathbf{C}=\mathbf{I})=-p\mathbf{I}+2\frac{\partial\Psi}{ \partial\mathbf{C}}\big{|}_{\mathbf{C}=\mathbf{I}}=\mathbf{0}\). Inserting Eq. (11) in Eq. (1) yields
\[\mathbf{S}^{e}=-p\mathbf{C}^{-1}+\sum_{r=1}^{R}\underbrace{2\left(\frac{ \partial\Psi}{\partial\tilde{I}_{r}}\tilde{\mathbf{L}}_{r}-\frac{\partial \Psi}{\partial\tilde{J}_{r}}\mathbf{C}^{-1}\tilde{\mathbf{L}}_{r}\mathbf{C}^ {-1}\right)}_{=\mathbf{S}^{e}_{r}}=-p\mathbf{C}^{-1}+\sum_{r=1}^{R}\,\mathbf{S} ^{e}_{r}. \tag{12}\]
Nowadays, engineers can choose from a vast catalog of strain energy functions to model materials. Choosing a suitable strain energy function, however, typically requires expert knowledge. To overcome this problem, CANNs introduced a particularly efficient machine learning architecture to learn the relation between the argument in Eq. (11) and the resulting strain energy \(\Psi\). Basing CANNs on Eq. (11) endows them with substantial prior knowledge from materials theory, namely, the theory of generalized invariants. This prior knowledge significantly reduces the amount of training data CANNs need to learn the constitutive behavior of a specific material of interest. At the same time, given the generality of the theory of generalized invariants, using Eq. (11) as a basis does not limit the generality of CANNs in any practically relevant way. Rather the underlying neural network equips the constitutive model with the flexibility to adjust to experimental data from various materials without
human guidance. In particular, the preferred material directions \(\mathbf{l}_{j}\) and the scalar weight factors \(w_{rj}\) in Eq. (2) and Eq. (7) are learned by the CANN from the available material data. In the following, we extend this concept to anisotropic nonlinear viscoelasticity.
### Viscoelasticity
According to [18], in QLV, the viscoelastic second Piola-Kirchhoff stress tensor is given by the hereditary integral
\[\mathbf{S}(t)=\int_{-\infty}^{t}\mathbb{G}(t-s):\dot{\mathbf{S}}^{e}\mathrm{d}s, \tag{13}\]
where \(\mathbb{G}(t)\) is the time-dependent fourth-order reduced relaxation function tensor. \(\dot{\mathbf{S}}^{e}\) is the material time derivative of the instantaneous elastic second Piola-Kirchhoff stress tensor, i.e., \(\dot{\mathbf{S}}^{e}=\frac{\mathrm{d}\mathbf{S}^{e}}{\mathrm{d}t}\), where \(\mathbf{S}^{e}\) is computed according to Eq. (12).
The fundamental assumption of QLV is that \(\mathbb{G}(t)\) depends only on time but not on the deformation (time-deformation separability), such that the relaxation behavior is the same for any applied deformation. To overcome this limitation, we allow \(\mathbb{G}\) to depend on the deformation \(\mathbf{C}\). At this point, we go beyond the framework of classical QLV because \(\mathbb{G}\) no longer depends on time only. We add the deformation rate \(\dot{\mathbf{C}}\) to the arguments of \(\mathbb{G}\) since many materials show not only strain-dependent but also strain rate-dependent viscoelastic behavior [2; 79]:
\[\mathbf{S}(t)=\int_{-\infty}^{t}\mathbb{G}(t-s;\mathbf{C},\dot{\mathbf{C}}): \dot{\mathbf{S}}^{e}\mathrm{d}s. \tag{14}\]
In the simplest case, \(\mathbb{G}(t)=G(t)\mathbb{I}\), where \(G(t)\) denotes a scalar reduced relaxation function and \(\mathbb{I}\) the fourth-order identity tensor. However, anisotropic materials may exhibit different viscous properties in different directions. Therefore, a single scalar reduced relaxation function would, in general, be insufficient to capture the complex nature of anisotropic viscoelastic materials. On the other hand, the experimental identification of a fourth-order reduced relaxation function tensor is highly challenging, even for simple classes of anisotropy, and is practically often unfeasible for complex classes. A reasonable compromise between practicability and generality of the constitutive model is to use a scalar-valued reduced relaxation function \(G_{r}\) for each stress contribution \(\mathbf{S}_{r}^{e}\) in Eq. (12). Additionally, we augment the arguments of the reduced relaxation functions with the structural tensors to account for anisotropy:
\[\mathbf{S}(t)=-p\mathbf{C}^{-1}+\sum_{r=1}^{R}\int_{-\infty}^{t}G_{r}(t-s; \mathbf{C},\dot{\mathbf{C}},\mathbf{L}_{1},\mathbf{L}_{2},\ldots,\mathbf{L}_{ J})\;\dot{\mathbf{S}}_{r}^{e}\;\mathrm{d}s. \tag{15}\]
Experiments suggest that in many rubber materials and soft biological tissues, the viscous effects mostly attribute to the isochoric part of the stress [80]. In the incompressible limit, this holds exactly [81]. Therefore, in Eq. (15), the reduced relaxation functions \(G_{r}\) affect only the isochoric part of the stress.
The reduced relaxation functions \(G_{r}\) are scalar-valued functions of tensors. To fulfill the principle of material objectivity and to reflect the material symmetry correctly, the reduced relaxation functions have to be isotropic functions of the tensor system \(\{\mathbf{C},\dot{\mathbf{C}},\mathbf{L}_{1},\mathbf{L}_{2},\ldots,\mathbf{L}_ {J}\}\)[76]. Compared to Eqs. (3) and (4), the set of isotropic invariants, in terms of which all other isotropic functions can be expressed, is completed by [77; 78],
\[\mathrm{tr}\,\dot{\mathbf{C}},\quad\mathrm{tr}\,\dot{\mathbf{C}}^{2},\quad \mathrm{tr}\,\dot{\mathbf{C}}^{3},\quad\mathrm{tr}\left(\dot{\mathbf{C}} \mathbf{L}_{j}\right),\quad\mathrm{tr}\left(\dot{\mathbf{C}}^{2}\mathbf{L}_{ j}\right),\quad j=1,2,\ldots,J, \tag{16}\]
\[\mathrm{tr}\left(\dot{\mathbf{C}}\mathbf{L}_{i}\mathbf{L}_{j}\right),\quad \mathrm{tr}\left(\mathbf{L}_{i}\mathbf{L}_{j}\right),\quad\mathrm{tr}\left( \mathbf{L}_{i}\mathbf{L}_{j}\mathbf{L}_{k}\right),\quad 1\leq i<j<k\leq J, \tag{17}\]
\[\mathrm{tr}\left(\mathbf{C}\dot{\mathbf{C}}\right),\quad\mathrm{tr}\left( \mathbf{C}^{2}\dot{\mathbf{C}}\right),\quad\mathrm{tr}\left(\mathbf{C}\dot{ \mathbf{C}}^{2}\right),\quad\mathrm{tr}\left(\mathbf{C}^{2}\dot{\mathbf{C}}^ {2}\right),\quad\mathrm{tr}\left(\mathbf{C}\dot{\mathbf{C}}\mathbf{L}_{j} \right),\quad j=1,2,\ldots,J. \tag{18}\]
Following the same arguments as before, we omit the invariants in Eqs. (17) and (18) yielding
\[G_{r}=G_{r}\Big{(}t;\mathrm{tr}\,\mathbf{C},\mathrm{tr}\,\mathbf{C}^{2},\mathrm{tr}\,\mathbf{C}^{3},\mathrm{tr}\left(\mathbf{C}\mathbf{L}_{1}\right),\mathrm{tr}\left(\mathbf{C}^{2}\mathbf{L}_{1}\right),\ldots,\mathrm{tr}\left(\mathbf{C}\mathbf{L}_{J}\right),\mathrm{tr}\left(\mathbf{C}^{2}\mathbf{L}_{J}\right),\\ \mathrm{tr}\,\dot{\mathbf{C}},\mathrm{tr}\,\dot{\mathbf{C}}^{2},\mathrm{tr}\,\dot{\mathbf{C}}^{3},\mathrm{tr}\left(\dot{\mathbf{C}}\mathbf{L}_{1}\right),\mathrm{tr}\left(\dot{\mathbf{C}}^{2}\mathbf{L}_{1}\right),\ldots,\mathrm{tr}\left(\dot{\mathbf{C}}\mathbf{L}_{J}\right),\mathrm{tr}\left(\dot{\mathbf{C}}^{2}\mathbf{L}_{J}\right)\Big{)}. \tag{19}\]
By introducing the \(2R+1\) generalized invariants
\[\dot{\tilde{I}}_{r}=\mathrm{tr}\left(\dot{\mathbf{C}}\tilde{\mathbf{L}}_{r}\right),\qquad\dot{\tilde{J}}_{r}=\mathrm{tr}\left((\det\dot{\mathbf{C}})\dot{\mathbf{C}}^{-\mathrm{T}}\tilde{\mathbf{L}}_{r}\right)=\mathrm{tr}\left((\mathrm{cof}\,\dot{\mathbf{C}})\tilde{\mathbf{L}}_{r}\right),\qquad\mathrm{III}_{\dot{\mathbf{C}}}=\det\dot{\mathbf{C}},\qquad r=1,2,\ldots,R \tag{20}\]
and the short-hand notations
\[\tilde{\dot{\mathcal{I}}}=\left\{\dot{\tilde{I}}_{1},\dot{\tilde{J}}_{1},\ldots,\dot{\tilde{I}}_{R},\dot{\tilde{J}}_{R},\mathrm{III}_{\dot{\mathbf{C}}}\right\},\qquad\qquad\mathcal{I}=\tilde{\mathcal{I}}\cup\tilde{\dot{\mathcal{I}}}. \tag{21}\]
we can express Eq. (19) alternatively by
\[G_{r}=G_{r}\left(t;\mathcal{I}\right). \tag{22}\]
Finally, we augment the arguments of the reduced relaxation function with the above-introduced feature vector \(\mathbf{f}\):
\[G_{r}=G_{r}\left(t;\mathcal{I},\mathbf{f}\right). \tag{23}\]
From Eq. (15), we obtain the second Piola-Kirchhoff stress tensor
\[\mathbf{S}(t)=-p\mathbf{C}^{-1}+\sum_{r=1}^{R}\int_{-\infty}^{t}G_{r}(t-s; \mathcal{I},\mathbf{f})\;\dot{\mathbf{S}}_{r}^{e}\,\mathrm{d}s. \tag{24}\]
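For concreteness, the generalized invariants entering \(G_{r}\) can be evaluated directly from the tensors. The following is a minimal NumPy sketch of Eqs. (6) and (20) for a single generalized structural tensor (the function name and the invertibility assumption are ours):

```python
import numpy as np

def generalized_invariants(A, L_tilde):
    """Generalized invariants of Eqs. (6)/(20): I = tr(A L), J = tr(cof(A) L),
    III = det(A); A is either C or its rate C_dot (3x3 arrays)."""
    # cof(A) = det(A) A^{-T}; assumes A is invertible (for a possibly
    # singular rate tensor, the cofactor matrix must be computed directly)
    cof_A = np.linalg.det(A) * np.linalg.inv(A).T
    I = np.trace(A @ L_tilde)
    J = np.trace(cof_A @ L_tilde)
    III = np.linalg.det(A)
    return I, J, III
```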
_Prony Series._ Motivated by linear viscoelasticity and the generalized Maxwell model (Fig. 1), the most popular choice for the reduced relaxation function \(G\) in QLV is the discrete Prony series
\[G(t)=g_{\infty}+\sum_{\alpha=1}^{N}g_{\alpha}\exp\left(-\frac{t}{\tau_{\alpha}}\right) \tag{25}\]
with
\[g_{\infty}+\sum_{\alpha=1}^{N}g_{\alpha} =1, 0\leq g_{\infty},g_{\alpha}\leq 1, \tau_{\alpha}>0. \tag{26}\]
Here, \(g_{\infty}\) is a material parameter related to the equilibrium elasticity of the generalized Maxwell model, and \(g_{\alpha}\) and \(\tau_{\alpha}\) are parameters characterizing elasticity and viscous relaxation time of the \(\alpha\)-th Maxwell element. The \(g_{\infty}\) and \(g_{\alpha}\) are referred to as relaxation coefficients. In principle, the number of Maxwell elements \(N\) is arbitrary, which enables the model to describe complex viscoelastic materials. Since the material parameters \(g_{\infty}\), \(g_{\alpha}\), and \(\tau_{\alpha}\) are constants, the classical Prony series is limited to linear viscoelasticity.
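As a simple illustration, the following sketch evaluates Eq. (25) and checks the constraint Eq. (26)\({}_{1}\); the parameter values are purely illustrative:

```python
import numpy as np

def prony_series(t, g_inf, g, tau):
    """Discrete Prony series of Eq. (25): G(t) = g_inf + sum_a g_a exp(-t/tau_a)."""
    g, tau = np.asarray(g), np.asarray(tau)
    assert np.isclose(g_inf + g.sum(), 1.0)  # Eq. (26)_1
    return g_inf + np.sum(g[:, None] * np.exp(-t[None, :] / tau[:, None]), axis=0)

# Two Maxwell elements with relaxation times 0.1 s and 10 s (illustrative)
t = np.linspace(0.0, 50.0, 200)
G = prony_series(t, g_inf=0.4, g=[0.35, 0.25], tau=[0.1, 10.0])
```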
Figure 1: The generalized Maxwell model: the elastic spring on the left represents the equilibrium stress response \(\mathbf{S}^{\infty}\); each Maxwell element produces a viscous overstress \(\mathbf{Q}_{\alpha}\) and represents a relaxation process with a different relaxation time. \(g_{\infty}\), \(g_{i}\), and \(\tau_{i}\) are constant material parameters or deformation (rate)-dependent functions.
_Generalized Prony Series._ The classical Prony series does not depend on the deformation or the deformation rate but on time only. Therefore, to account for nonlinear viscoelasticity, we propose the following generalized Prony series
\[G_{r}=G_{r}(t;\mathcal{I},\mathbf{f})=g_{r\infty}(\mathcal{I},\mathbf{f})+\sum_{ \alpha=1}^{N_{r}}g_{r\alpha}(\mathcal{I},\mathbf{f})\exp\left(-\frac{t}{\tau_{ r\alpha}(\mathcal{I},\mathbf{f})}\right). \tag{27}\]
The conditions Eq. (26) individually apply to the relaxation coefficients \(g_{r\infty}(\mathcal{I},\mathbf{f})\), \(g_{r\alpha}(\mathcal{I},\mathbf{f})\) and the relaxation times \(\tau_{r\alpha}(\mathcal{I},\mathbf{f})\) associated with the instantaneous elastic stress component \(\mathbf{S}_{r}^{e}\). \(N_{r}\) denotes the number of Maxwell branches of the generalized Maxwell model associated with the instantaneous elastic stress component \(\mathbf{S}_{r}^{e}\). In contrast to the classical Prony series, the relaxation coefficients and times in Eq. (27) are functions of the invariants \(\mathcal{I}\) and the feature vector \(\mathbf{f}\) to also capture anisotropic nonlinear viscoelasticity.
Inserting Eq. (27) into Eq. (24) yields
\[\mathbf{S}(t) =-p\mathbf{C}^{-1}+\sum_{r=1}^{R}\left[\mathbf{S}_{r}^{\infty}+\sum_{\alpha=1}^{N_{r}}\underbrace{\int_{-\infty}^{t}g_{r\alpha}(\mathcal{I},\mathbf{f})\exp\left(-\frac{t-s}{\tau_{r\alpha}(\mathcal{I},\mathbf{f})}\right)\dot{\mathbf{S}}_{r}^{e}\,\mathrm{d}s}_{=\mathbf{Q}_{r\alpha}}\right] \tag{28}\] \[=-p\mathbf{C}^{-1}+\sum_{r=1}^{R}\left[\mathbf{S}_{r}^{\infty}+\sum_{\alpha=1}^{N_{r}}\mathbf{Q}_{r\alpha}\right] \tag{29}\]
where \(\mathbf{S}_{r}^{\infty}=g_{r\infty}(\mathcal{I},\mathbf{f})\,\mathbf{S}_{r}^{e}\) denotes the equilibrium stress associated with the \(r\)-th generalized Maxwell model. \(\mathbf{Q}_{r\alpha}\) is the viscous overstress in the \(\alpha\)-th Maxwell branch of the \(r\)-th generalized Maxwell model. To illustrate the proposed constitutive model, we particularize a vCANN for the important case of transverse isotropy in Appendix A. In general, closed-form solutions do not exist for the integrals in Eq. (28), so that a numerical time integration scheme has to be applied. Details are provided in Appendix B.
## 3 Machine learning architecture
### General
In the previous section, we outlined the theoretical foundations of the model of nonlinear viscoelasticity on which we rely in this paper. The main idea of vCANNs is to implement this theory via a machine learning architecture. This architecture is illustrated in Fig. 2. In our approach, we use feedforward neural networks (FFNNs). The networks consist of \(H+1\) layers of neurons, that is, \(H\) hidden layers and one output layer. The input passed to the first layer is a vector \(\boldsymbol{x}_{0}\in\mathbb{R}^{n_{0}}\). The output of the \(l\)-th layer is denoted by \(\boldsymbol{x}_{l}\in\mathbb{R}^{n_{l}}\) and computed as
\[\boldsymbol{x}_{l}=\sigma_{l}\left(\boldsymbol{W}^{(l)}\boldsymbol{x}_{l-1}+ \boldsymbol{b}^{(l)}\right),\quad l=1,\ldots,H+1,\quad\boldsymbol{x}_{l}\in \mathbb{R}^{n_{l}}, \tag{30}\]
with the activation function \(\sigma_{l}(\cdot)\) of layer \(l\), weights \(\boldsymbol{W}^{(l)}\in\mathbb{R}^{n_{l}\times n_{l-1}}\) of layer \(l\), and biases \(\boldsymbol{b}^{(l)}\in\mathbb{R}^{n_{l}}\) of layer \(l\). The activation function is applied element-wise to its argument. The output of the last layer (and thus the output of the network altogether) is \(\boldsymbol{x}_{H+1}\in\mathbb{R}^{n_{H+1}}\). Mathematically, an FFNN with \(H\) hidden layers establishes a mapping \(\mathcal{N}:\mathbb{R}^{n_{0}}\rightarrow\mathbb{R}^{n_{H+1}}\), \(\boldsymbol{x}_{H+1}=\mathcal{N}(\boldsymbol{x}_{0})\).
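To make Eq. (30) concrete, a minimal NumPy sketch of the forward pass could look as follows, with softplus on all layers except a linear last layer, as used below (the function name is ours):

```python
import numpy as np

def ffnn(x0, weights, biases):
    """Forward pass of Eq. (30): x_l = sigma_l(W^(l) x_{l-1} + b^(l));
    softplus activation everywhere except the linear last layer."""
    softplus = lambda z: np.log1p(np.exp(z))
    x = x0
    for l, (W, b) in enumerate(zip(weights, biases), start=1):
        z = W @ x + b
        x = z if l == len(weights) else softplus(z)
    return x
```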
Applying the model of nonlinear viscoelasticity outlined in Sec. 2 to compute the stress at each point in time (depending on the strain history) requires implementing Eq. (15). To evaluate this equation for a given strain history, we need to define the following functions: the strain energy \(\Psi\) and the reduced relaxation functions \(G_{r}\).
### Strain energy
To define the strain energy, we use a CANN [72] relying on the generalized invariants of the type \(\tilde{\mathcal{I}}\). Stresses can be computed by automatic differentiation. The CANN automatically ensures material objectivity, material symmetry, and an energy- and stress-free reference configuration, i.e., \(\Psi(\mathbf{C}=\mathbf{I})=0\) and \(\mathbf{S}(\mathbf{C}=\mathbf{I})=\mathbf{0}\). The latter is ensured by a term in the strain energy that is continuously adapted during machine learning such that these two conditions remain satisfied. Non-negativeness of the strain energy function, i.e., \(\Psi\geq 0\), is ensured by choosing appropriate activation functions and weight constraints in the last two layers of the CANN. In the second-to-last layer, we apply non-negative activation functions \(\sigma_{H}:\mathbb{R}\rightarrow\mathbb{R}^{+0}\). In the last layer, we apply a linear activation function \(\sigma_{H+1}(\boldsymbol{x})=\boldsymbol{x}\) and enforce non-negative weights and biases \((\boldsymbol{W}^{(H+1)},\boldsymbol{b}^{(H+1)}\geq 0)\)
yielding a non-negative strain energy function. Except for the last layer, where we apply a linear activation function, our default activation function is the softplus function \(\sigma_{l}(x)=\ln(1+\exp(x))\). Apart from the useful property of being positive, the softplus function is a \(C^{\infty}\)-continuous function. Hence, the strain energy function, stress tensor, and elasticity tensor are \(C^{\infty}\)-continuous functions which is numerically favorable, particularly for implementing the vCANN in FE software.
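In Keras, these constraints can be realized along the following lines (a minimal sketch; the layer widths are illustrative, and `tf.keras.constraints.NonNeg` enforces the non-negative weights and biases of the last layer):

```python
import tensorflow as tf
from tensorflow.keras import layers, constraints

# Second-to-last layer: softplus keeps its output non-negative and smooth.
penultimate = layers.Dense(16, activation="softplus")
# Last layer: linear activation with non-negative weights and biases,
# so the predicted strain energy Psi is non-negative.
psi_layer = layers.Dense(1, activation="linear",
                         kernel_constraint=constraints.NonNeg(),
                         bias_constraint=constraints.NonNeg())
```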
Note that, if necessary, we can easily guarantee polyconvexity [82] of the strain energy function when using CANNs. To this end, we enforce non-negative weights in all layers and a non-negative bias in the last layer (but not necessarily in the previous layers [83]). Then, applying convex, non-decreasing activation functions on all layers renders the network convex [84]. We meet this constraint on the activation functions by default since we use linear activation functions in the last layer and softplus activation functions in all other layers. Both activation functions are convex and non-decreasing. Using a neural network with the above features, we only have to ensure that polyconvex invariants are used in the CANN. In particular, the generalized invariants of the type \(\tilde{\mathcal{I}}\) are polyconvex [73]. For an overview of other polyconvex invariants, the reader is referred to [85; 86; 87].
Figure 2: Schematic illustration of the vCANN architecture: The strain (rate) tensors \(\mathbf{C}\), \(\dot{\mathbf{C}}\), and the feature vector \(\mathbf{f}\) serve as input to the structure learning block (Fig. C.12), which learns the generalized invariants \(\tilde{\mathcal{I}}\) and \(\tilde{\dot{\mathcal{I}}}\). The generalized invariants and the feature vector are fed to the relaxation time ANNs \(\mathcal{N}_{\tau_{r\alpha}}\) and relaxation coefficient ANNs \(\mathcal{N}_{g_{r\alpha}}\). The outputs of \(\mathcal{N}_{\tau_{r\alpha}}\) are the relaxation times \(\tau_{r\alpha}\). The neural networks \(\mathcal{N}_{g_{r\alpha}}\) calculate the relaxation coefficients \(g_{r\alpha}\), which are regularized to promote a sparse model (grey box ‘\(L_{1}\) regularization’). The reduced relaxation function \(G_{r}\) is obtained by inserting \(\tau_{r\alpha}\) and \(g_{r\alpha}\) in Eq. (27). The generalized invariants \(\tilde{\mathcal{I}}\) and \(\mathbf{f}\) are fed to the CANN which calculates the instantaneous elastic stress contributions \(\mathbf{S}_{r}^{e}\). The internal structure of the CANN is depicted in Fig. 1(a) in [72]. With Eq. (28), we finally calculate the viscoelastic stress \(\mathbf{S}\). Note that Fig. C.12 is slightly modified compared to Fig. 1(b) in [72] since in vCANNs one has to account for \(\dot{\mathbf{C}}\) as an input, too.
### Reduced relaxation functions
To define the reduced relaxation functions \(G_{r}\) in Eq. (27), we must define strain (rate)-dependent relaxation times and relaxation coefficients. To this end, we represent the unknown relation between the strain (rate) and these parameters by FFNNs. We employ separate FFNNs for each relaxation time and coefficient such that each neural network can focus on a particularly simple task. The constraints on the reduced relaxation functions, Eq. (26), are less severe than those on the strain energy function. The positivity of the relaxation times and coefficients in Eq. (26)\({}_{2,3}\) is guaranteed by the same methods described above for the strain energy function. The unity constraint on the relaxation coefficients, Eq. (26)\({}_{1}\), is enforced by a custom normalization layer. Apart from that, the functional relations providing the sought parameters are not restricted. Note that using the invariant basis \(\mathcal{I}\) as input, the relaxation times and coefficients automatically ensure material objectivity and material symmetry.
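The unity constraint could, for instance, be enforced by a normalization layer along the following lines (one possible realization; the class name is ours, and the small epsilon guards against division by zero):

```python
import tensorflow as tf

class UnityNormalization(tf.keras.layers.Layer):
    """Rescales non-negative inputs [g_inf, g_1, ..., g_N] so that
    they sum to one, enforcing Eq. (26)_1."""
    def call(self, g):
        return g / (tf.reduce_sum(g, axis=-1, keepdims=True) + 1e-12)
```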
The number of Maxwell elements in the generalized Prony series is an important parameter. With sufficiently many Maxwell elements, a Prony series can describe arbitrarily complex viscoelastic materials. Often, materials exhibit numerous different relaxation times [88]. A generalized Maxwell model represents each relaxation time by a different Maxwell element. Following these arguments, choosing a large number of Maxwell elements is preferable to represent the relaxation behavior as accurately as possible. However, the model complexity, and thus the computational cost, increases with the number of Maxwell elements. Moreover, complex models with many parameters tend to overfit the experimental data, thereby losing the ability to generalize beyond specific given training data. From that perspective, keeping the number of relaxation times and coefficients small is favorable. To balance between an accurate representation of data and a low model complexity, we proceed as follows.
Identifying the relaxation coefficients and times of a generalized Maxwell model is known to be an ill-posed problem [89]. Therefore, in classical approaches, the number of Maxwell elements \(N_{r}\) is determined beforehand and fixed during the parameter identification process [90]. In some approaches, the number of Maxwell elements and the relaxation times are determined a priori and fixed during parameter identification to remove ill-posedness [89]. In our approach, we predefine a maximum number of Maxwell elements \(N_{r}^{max}\). During the training, the actual number \(N_{r}\) of Maxwell elements is determined by the vCANN as a part of the learning process within the allowed range \([1;N_{r}^{max}]\). To avoid unnecessary constraints on the learning process, one may choose relatively large values for \(N_{r}^{max}\), which typically results after training in \(N_{r}\ll N_{r}^{max}\).
The literature shows that the relaxation times of viscoelastic materials are typically uniformly distributed on a logarithmic scale [91]. To endow our machine learning architecture with this heuristic prior knowledge, we normalized the output of the \(N_{r}^{max}\) FFNNs that learned the relaxation times by time constants \(T_{r\alpha}\), \(\alpha=1,2,\ldots,N_{r}^{max}\). These time constants were uniformly distributed on a logarithmic scale in some range \([T_{min},T_{max}]\). \(T_{min}\) and \(T_{max}\) are parameters the user can initially define based on prior knowledge or heuristic expectations. It is important to underline that the normalizing constants \(T_{r\alpha}\) are not the relaxation times of our model. The vCANN can, and will in general, learn relaxation times \(\tau_{r\alpha}\) (possibly even considerably) differing from the \(T_{r\alpha}\). Yet, the \(T_{r\alpha}\) provide via the normalization of the output of the FFNNs some bias regarding the expected time scales for the different relaxation times, which can significantly accelerate the training if \([T_{min},T_{max}]\) is properly chosen.
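Such logarithmically uniformly spaced normalization constants are a one-liner, e.g. with NumPy (the range shown matches the one used in the examples of Appendix E):

```python
import numpy as np

# N_r^max = 10 normalization constants, uniformly spaced on a log scale
T = np.logspace(-2, 3, num=10)  # 0.01 s ... 1000 s
```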
Initially, we prescribe the maximum number of Maxwell elements \(N_{r}^{max}\), typically much larger than the actual number \(N_{r}\) required to accurately describe the viscoelastic material. This allows us to gradually eliminate Maxwell elements during training to obtain a sparse model. This approach is similar to the one of [92; 93], where the number of Maxwell elements was adjusted by merging or removing them during parameter identification to avoid ill-posedness and improve the fit. Likewise, [94] proposed to apply Tikhonov-regularization [95] to the material parameters and subsequently cluster Maxwell elements with similar relaxation times. We decided to promote sparsity of our model by applying \(L_{1}\) regularization to the relaxation coefficients \(g_{r\alpha}\), \(\alpha=1,2,\ldots,N_{r}^{max}\). Using \(L_{1}\) regularization, the optimal value for some relaxation coefficients will be zero, eliminating the corresponding Maxwell elements. In this approach, one uses a penalty parameter \(\Lambda\) controlling the sparsity of the model. Choosing \(\Lambda=0\) disables regularization, whereas with increasing \(\Lambda\), the sparsity of the model increases, too. The penalty parameter \(\Lambda\) is a hyperparameter that has to be predefined (and possibly iteratively optimized).
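In Keras, such a penalty can be attached directly to the outputs of the coefficient networks, e.g. via an activity regularizer (a sketch under the assumption that each coefficient is the output of its own small network; the value of \(\Lambda\) is illustrative):

```python
import tensorflow as tf

Lambda = 0.001  # sparsity penalty parameter
# L1 penalty on the network output g_ra drives some coefficients to zero,
# eliminating the corresponding Maxwell elements during training.
g_head = tf.keras.layers.Dense(
    1, activation="softplus",
    activity_regularizer=tf.keras.regularizers.L1(Lambda))
```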
In summary, relaxation times and coefficients, as functions of the invariants \(\mathcal{I}\) and the feature vector \(\mathbf{f}\), are learned by individual FFNNs. Scaling of the FFNNs determining the relaxation times by predefined logarithmically uniformly spaced constants introduces a bias in agreement with the literature findings that can help accelerate the training process. \(L_{1}\) regularization on the relaxation coefficients promotes sparse reduced
relaxation functions. We implemented the complete vCANN framework using the open-source software library Keras with TensorFlow backend [96; 97].
## 4 Results
In this section, we apply vCANNs to various data sets. We use synthetic as well as experimental data. In [72], we already demonstrated that CANNs could successfully learn the preferred material directions \(\mathbf{l}_{rj}\) and the scalar weight factors \(w_{rj}\) in Eq. (2). Therefore, for simplicity, we herein assume them to be known to focus on this paper's main problem, viscoelastic relaxation. We list the corresponding vCANNs and their hyperparameters in Appendix E for each of the following examples. There, we also provide additional information on the training procedure.
### Anisotropic viscoelasticity with synthetic data
We created synthetic training data to mimic stress-strain data of viscoelastic soft biological tissues. To this end, we used two hyperelastic constitutive models popular in biomechanics, the Ogden model [98], and the Holzapfel-Gasser-Ogden (HGO) model [99]. The Ogden model is a phenomenological model for isotropic rubber-like materials and soft biological tissues and is usually formulated in terms of the principal stretches \(\lambda_{i}\), \(i=1,2,3\), which are the square roots of the eigenvalues of \(\mathbf{C}\),
\[\Psi_{\text{OG}}(\mathbf{C})=\sum_{p=1}^{n}\frac{\mu_{p}}{\alpha_{p}}\left( \lambda_{1}^{\alpha_{p}}+\lambda_{2}^{\alpha_{p}}+\lambda_{3}^{\alpha_{p}}-3 \right). \tag{31}\]
In Eq. (31), \(n\) is a positive integer, \(\mu_{p}\) and \(\alpha_{p}\) are (constant) material parameters. Many soft biological tissues exhibit stiffening fibers that induce anisotropy. Therefore, the Ogden model is often combined with the HGO model, which adds an anisotropic contribution to the total strain energy function. The strain energy function of the HGO model, with one preferred material direction \(\mathbf{l}\) and structural tensor \(\mathbf{L}=\mathbf{l}\otimes\mathbf{l}\), is
\[\Psi_{\text{HGO}}(\mathbf{C},\mathbf{L})=\begin{cases}\frac{k_{1}}{2k_{2}} \left\{\exp\left[k_{2}(I_{4}-1)^{2}\right]-1\right\}&\text{for }I_{4}\geq 1,\\ 0&\text{for }I_{4}<1.\end{cases} \tag{32}\]
\(k_{1}\geq 0\) and \(k_{2}>0\) are material parameters, and \(I_{4}=\mathbf{C}:\mathbf{L}\). Since \(I_{4}\) represents the squared fiber stretch, \(I_{4}<1\) means compression of the fibers, which are assumed to bear load under tension only. Thus, for \(I_{4}<1\), the anisotropic strain energy and stress contributions are assumed to be zero.
To produce synthetic data, we used a material model whose strain energy was the sum of the Ogden (OG) and HGO strain energy functions, that is,
\[\Psi=\Psi_{\text{OG}}+\Psi_{\text{HGO}}. \tag{33}\]
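A minimal sketch of this combined strain energy, Eqs. (31)-(33), could read as follows (function names are ours; the default parameters correspond to Tab. D.1):

```python
import numpy as np

def psi_ogden(lams, mu, alpha):
    """Ogden strain energy, Eq. (31); lams are the principal stretches."""
    return sum(m / a * (lams[0]**a + lams[1]**a + lams[2]**a - 3.0)
               for m, a in zip(mu, alpha))

def psi_hgo(I4, k1, k2):
    """HGO fiber contribution, Eq. (32); active under fiber tension only."""
    return k1 / (2.0 * k2) * (np.exp(k2 * (I4 - 1.0)**2) - 1.0) if I4 >= 1.0 else 0.0

def psi_total(lams, I4, mu=(0.3,), alpha=(3.7,), k1=0.3, k2=0.4):
    """Combined strain energy of Eq. (33) with the parameters of Tab. D.1."""
    return psi_ogden(lams, mu, alpha) + psi_hgo(I4, k1, k2)
```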
For the viscous part of the constitutive model used for generating synthetic material data, we assumed strain-dependent relaxation times and coefficients:
\[\tau_{i}^{\text{iso}}(I_{1})=\hat{\tau}_{a,i}^{\text{iso}}\exp\left(\hat{\tau}_{b,i}^{\text{iso}}(I_{1}-3)^{2}\right),\qquad g_{i}^{\text{iso}}(I_{1})=\hat{g}_{a,i}^{\text{iso}}\exp\left(\hat{g}_{b,i}^{\text{iso}}(I_{1}-3)^{2}\right),\qquad i=1,2 \tag{34}\] \[\tau_{1}^{\text{ani}}(I_{4})=\hat{\tau}_{a}^{\text{ani}}\exp\left(\hat{\tau}_{b}^{\text{ani}}(I_{4}-1)^{2}\right),\qquad g_{1}^{\text{ani}}(I_{4})=\hat{g}_{a}^{\text{ani}}\exp\left(\hat{g}_{b}^{\text{ani}}(I_{4}-1)^{2}\right). \tag{35}\]
Using the material parameters in Tab. D.1 and Tab. D.2 for Eqs. (34) and (35), we simulated uniaxial cyclic tension-compression experiments with relaxation periods between each tension and compression period. After each complete cycle, the stretch rate \(\dot{\lambda}\) was changed according to the sequence \(\dot{\lambda}=\{0.02,0.03,0.04,0.05\}\) s\({}^{-1}\). Loading and unloading periods took \(t_{move}=10\) s each. The relaxation periods took \(t_{relax}=60\) s. Thus, a single cycle took \(t_{cyc}=160\) s and the total experiment \(t_{total}=640\) s. Synthetic training data were generated for different preferred directions, characterized by the acute angle \(\varphi\) between the loading and preferred material directions. \(\varphi=0^{\circ}\) means that the loading direction and the preferred direction \(\mathbf{l}\) are parallel, \(\varphi=90^{\circ}\) means that both are orthogonal. The synthetic training data comprised stress data from fictitious materials with four different preferred directions corresponding to \(\varphi=\{0,15,20,25\}^{\circ}\).
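A stretch history of this kind can be generated with a few lines of NumPy; the sketch below is one plausible realization of the described protocol (the exact ordering of the tension and compression segments within a cycle is our assumption):

```python
import numpy as np

def cycle(rate, t_move=10.0, t_relax=60.0, dt=0.1, lam0=1.0):
    """One 160 s cycle: tension loading, hold, unloading through lam0 into
    compression, hold, and unloading back to lam0, at the given stretch rate."""
    n_m, n_r = int(t_move / dt), int(t_relax / dt)
    rates = np.concatenate([np.full(n_m, rate), np.zeros(n_r),
                            np.full(2 * n_m, -rate), np.zeros(n_r),
                            np.full(n_m, rate)])
    return lam0 + dt * np.cumsum(rates)

# Four training cycles with increasing stretch rates, 640 s in total
lam = np.concatenate([cycle(r) for r in (0.02, 0.03, 0.04, 0.05)])
t = 0.1 * np.arange(lam.size)
```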
To validate the model, we generated additional synthetic data for a material with the preferred direction \(\varphi=10^{\circ}\), which is not in the training data set. For this material, we simulated two more loading cycles in addition
to the above-described loading history, such that the vCANN had to extrapolate the stress response temporally. The stretch rate of the cycles changed according to the sequence \(\dot{\lambda}=\{0.01,0.02,0.03,0.04,0.02,0.05\}\) s\({}^{-1}\).
We trained a vCANN with the transversely isotropic structure and hyperparameters given in Appendix E.1. Figures 3 and 4 show that the vCANN learns to replicate the training data almost exactly. In compression, the stress response is similar for all preferred directions since only the isotropic matrix of the composite (Ogden model) bears the load. The vCANN replicates this feature accurately. Similarly, the prediction for the validation data set with the unknown preferred direction captures and extrapolates the material response almost perfectly. In particular, the irregular stress response in the time interval \([640,710]\) s, caused by halving the stretch rate, is predicted precisely.
Figure 3: Training (top) and validation (bottom) results. Scatter points represent the synthetic training and validation data, respectively; solid lines represent the vCANN predictions.
### Passive viscoelastic response of the abdominal muscle
We reproduced relaxation responses of the leporine rectus abdominis muscle reported in [37]. The shape of the reduced relaxation function depends on the stretch level, i.e., the muscle exhibits nonlinear viscoelastic behavior. Classical QLV cannot account for this stretch dependency and would predict the same curve for each stretch level. To represent the stretch-dependent relaxation behavior, [37] incorporated stretch-dependent relaxation coefficients and times into a Prony series with one Maxwell element. The authors empirically determined the phenomenological strain dependency of the relaxation coefficients and times. However, their model did not accurately
Figure 4: Training (top) and validation (bottom) results. Scatter points represent the synthetic training and validation data; solid lines represent the vCANN predictions.
capture the reduced relaxation curves despite utilizing optimization algorithms to fit the material parameters to experimental data (cf. Fig. 10 in [37]). This illustrates the limits of human-designed and human-calibrated constitutive models for viscoelastic materials.
In contrast, vCANNs capture the relaxation curves with high accuracy (Fig. 5), otherwise only matched by much more complex FNLV models based on the multiplicative split of the deformation gradient; see Fig. 5 in [53] for a comparison on the same experimental data set. We started the training with \(N_{max}=10\) Maxwell elements. Six of them were discarded during training, leaving only the reduced set of parameters plotted in Fig. 6. We provide the trained vCANN structure and the corresponding hyperparameters in Appendix E.2.
### Viscoelastic modeling of VHB 4910
Very-High-Bond (VHB) 4910 is a soft electro-active polymer (EAP) that exhibits nonlinear viscoelastic behavior and can undergo substantial deformations. VHB 4910 was experimentally studied by [100], using
Figure 5: vCANNs can learn to replicate (left) and predict (right) the viscoelastic behavior of abdominal muscle with high accuracy: experimental data from Fig. 4(b) of [37] is reproduced by solid circles; solid lines represent the fit of the vCANN.
Figure 6: Relaxation times (left) and coefficients (right) learned by the vCANN from the data set of leporine rectus abdominis muscle by [37]. Only four of the initial ten Maxwell elements remained after training.
uniaxial loading-unloading tests to characterize the rate-dependent behavior. The tests were conducted for three different stretch rates \(\dot{\lambda}=\{0.01,0.03,0.05\}\) s\({}^{-1}\) and four different stretch levels \(\lambda=\{1.5,2.0,2.5,3.0\}\) (Fig. F.13). Moreover, the authors conducted a multi-step relaxation test to determine the equilibrium response of the material. The constitutive model proposed in [100] is based on a multiplicative split of the deformation gradient into an elastic and viscous part. The hyperelastic eight-chain model of [101] was chosen to model the elastic part. The material parameters of the elastic part were identified using the data from a multi-step relaxation test. The strain energy function and evolution equation proposed by [102] were chosen for the viscous part. The viscous material parameters were identified using the loading-unloading data of \(\dot{\lambda}=0.01\) s\({}^{-1}\) and \(\dot{\lambda}=0.05\) s\({}^{-1}\) at a stretch level of \(\lambda=3\). The rheological analog model of the constitutive model was a generalized Maxwell model where the number of parallel branches was chosen to be four.
A few years later, VHB 4910 was again studied to demonstrate the abilities of a novel advanced microstructurally-informed constitutive model developed in [103]. The model relies on advanced knowledge of continuum and statistical mechanics and uses a multiplicative decomposition of the deformation gradient to represent a generalized Maxwell behavior. The elastic material parameters of the model were identified using time-consuming quasi-static tensile tests, and the viscous material parameters using the loading-unloading data (excluding, however, data with \(\lambda=2.5\)). The number of Maxwell elements was determined by hand and set to three.
By contrast, we only used loading-unloading data with \(\dot{\lambda}=0.01\) s\({}^{-1}\) and \(\dot{\lambda}=0.05\) s\({}^{-1}\) at the stretch levels \(\lambda=1.5\) and \(\lambda=3\) to train the vCANN. Figure 7 shows the training and validation results. The fit of both the training and validation data is very accurate and at least on par with that of the FNLV models used in [100] (Figs. 9-12) and [103] (Fig. 5(b)-(d)). However, we note that the vCANN automatically learned the number of Maxwell elements required to represent the material behavior well. We initialized the vCANN with 10 Maxwell elements. After training, only two remained, the viscous properties of which we provide in Fig. 8. Moreover, the application of the vCANN did not require advanced expert knowledge and did not require data from particularly sophisticated experiments. These advantages make vCANNs attractive from a practical point of view, in particular in the context of industrial applications. We list details on the trained vCANN structure and the corresponding hyperparameters in Appendix E.3.
Remark: The relatively large differences between the experimental data and the vCANN model for \(\lambda=2.5\) in Fig. 7(d), which can also be observed for the FNLV model in [100], are likely a result of experimental scatter. The loading paths should be almost identical for a fixed strain rate up to the respective maximum stretches. However, this is not the case, as is highlighted in Fig. F.13, which suggests considerable measurement errors in a part of the data, which naturally limited the ability of the vCANN to derive a consistent data-driven model.
Figure 7: Results of the trained vCANN for the polymer VHB 4910: achieved fitting of training data (a)–(b) and predictive performance on the validation data (c)–(d). Each subfigure shows the loading-unloading stress response for a fixed maximum stretch but different stretch rates. The scatter points represent experimental data on VHB 4910 reproduced from [100]; solid lines represent the trained vCANN.
### Blast load analysis of Polyvinyl Butyral
Polyvinyl Butyral (PVB) is a polymer whose primary application is laminated safety glass. Under heat and pressure, two glass panes are bonded with an interlayer of PVB into a single unit. Under blast loads, the interlayer binds shards of glass, absorbs energy, and mitigates its transfer to the surrounding frame. It is essential to understand the mechanical behavior of PVB to improve the design of laminated glass structures. Large-scale simulations of these structures require simple material models that capture the mechanical behavior over a wide range of strain rates. [104] conducted high-stretch rate experiments on PVB, with stretch rates between 0.01 s\({}^{-1}\) and 400 s\({}^{-1}\). The viscous properties of PVB likely vary within such a wide range of strain rates. The significant change of the stress-stretch curve's shape above 0.2 s\({}^{-1}\) visible in Fig. 9 suggests this, too. Ideally, the constitutive model should be able to represent this transition accurately. [104] proposed an FNLV model and used the strain-dependent viscosity function by [47]. The model describes the experimental data well at high stretch rates, although it cannot accurately resolve the peak stress and subsequent softening at \(\lambda\approx 1.1-1.2\). At low strain rates, the fit quality is quantitatively unsatisfactory. In [104], the authors also fitted a standard generalized Maxwell model with constant relaxation coefficients and times for comparison. The model comprised six Maxwell elements whose relaxation times were chosen to be uniformly distributed on the logarithmic scale and kept fixed during the parameter identification of the relaxation coefficients. Notably, to account for the broad stretch rate range, two different models had to be used, one for the low stretch rate regime (up to 8 s\({}^{-1}\)) and the other for the high stretch rate regime (20 s\({}^{-1}\) and above). However, both models could not accurately describe the material behavior in their respective stretch rate regimes.
To account for the rate-dependent viscoelastic properties, we trained the vCANN detailed in Appendix E.4. Figure 9 shows that the vCANN successfully learned the constitutive behavior over a wide range of stretch rates. Comparing Figs. 12 to 17 in [104] with Fig. 9 reveals that the trained vCANN outperforms the traditional models. In particular, it captures the peak stress and softening in the initial loading phase up to \(\lambda\approx 1.2\). Importantly, the data-driven nature of our approach apparently provided the flexibility to model the transition between the low and high stretch rate regimes, whereas two different classical models were required to capture the two regimes. The vCANN did not only learn the constitutive behavior of the training data but also made precise predictions in the low and high stretch rate regimes for the unknown validation data. Remarkably, no advanced expert knowledge was necessary to apply the vCANN, and training the vCANN from scratch took less than 10 minutes on a standard desktop computer. Since the traditional models in [104] were fitted using the entire data set, we also trained the vCANN on the entire data set to ensure a fair comparison.
Figure 8: Viscous properties of the vCANN learned from experimental data on VHB 4910 from [100]. Only two of the initial 10 Maxwell elements were kept and are necessary to describe the material accurately.
### Thermo-viscoelastic modeling of VHB 4905
Another commercially available EAP is VHB 4905. Like most polymers, VHB 4905 is strongly temperature sensitive. Hence, [105] conducted an extensive experimental study covering a wide range of temperatures at different stretch rates and stretch levels. To demonstrate the utility of the feature vector \(\mathbf{f}\) in the vCANN architecture (which is optional and was not yet used in the previous examples), we included the temperature \(\Theta\) in the vCANN input as \(\mathbf{f}\). For training, we used data of loading-unloading tests with different temperatures \(\Theta=\{0,10,20,40,60,80\}\) °C and stretch rates \(\dot{\lambda}=\{0.03,0.1\}\) s\({}^{-1}\) at the stretch level \(\lambda=4\). Additionally, we included in the training set data of tests with \(\Theta=\{0,40,60,80\}\) °C and \(\dot{\lambda}=0.1\) s\({}^{-1}\) at \(\lambda=2\), Fig. 10. The strongly nonlinear temperature dependence of the stress response is clearly visible by comparing Fig. 10(a) and Fig. 10(c). In particular, the shape of the stress-stretch curve as well as the stiffness change significantly between 0 °C and 20 °C.
To validate the trained vCANN, we took data from loading-unloading tests with \(\Theta=\{0,10,20,40,60,80\}\) °C and \(\dot{\lambda}=0.03\) s\({}^{-1}\) at \(\lambda=3\). Moreover, we used data from tests with \(\Theta=\{0,10,20,40,60,80\}\) °C and \(\dot{\lambda}=0.05\) s\({}^{-1}\) at \(\lambda=4\) for validation, Fig. 11. Of note, the vCANN had not received any training data with a stretch level \(\lambda=3\) nor with a stretch rate \(\dot{\lambda}=0.05\) s\({}^{-1}\). Yet, the trained vCANN was able to predict the material behavior very well for the unknown stretch level and also for the unknown stretch rate. Both are challenging for classical constitutive models, which demonstrates the potential of vCANNs. Details on the trained vCANN and its hyperparameters are documented in Appendix E.5. As seen in [105], different sophisticated load protocols are necessary for classical models to calibrate individual parts of the model separately. Although this procedure is possible with vCANNs due to their modularity, they can be trained on a large data set directly, which is much simpler and faster and requires no advanced expert knowledge.
Figure 9: High-stretch rate experiments on PVB. The vCANN accurately describes the constitutive behavior over a wide range of stretch rates for the training (left) and unknown validation data (right). The scatter points represent experimental data reproduced from [104]; solid lines represent the stress response of the vCANN trained on the complete data set.
Figure 10: Performance of the vCANN for VHB 4905: achieved fitting of training data. The scatter points represent experimental data reproduced from [105]; solid lines represent the vCANN performance
Figure 11: Performance of the vCANN for VHB 4905: predictive performance on validation data. The scatter points represent experimental data reproduced from [105]; solid lines represent the vCANN performance
## 5 Conclusion
In this paper, we introduced vCANNs, a physics-informed data-driven framework for anisotropic nonlinear viscoelasticity at finite strains. The viscous part is based on a generalized Maxwell model enhanced with nonlinear strain (rate)-dependent relaxation coefficients and times represented by neural networks. The number of Maxwell elements is not determined a priori but adapts automatically during training. Thereby, vCANNs employ \(L_{1}\) regularization on the Maxwell branches to promote a sparse model. In contrast, traditional models usually specify and fix the number of Maxwell branches before calibrating the material parameters, which requires additional, often labor-intensive tests. vCANNs adopt the computationally very efficient framework of QLV and FLV but generalize these well-established theories to model anisotropic nonlinear viscoelasticity. We demonstrated the ability of vCANNs to learn even challenging viscoelastic behavior of advanced materials by several examples. We also briefly illustrated the ability of vCANNs to process non-mechanical information such as temperature data (or, in other cases, also microstructural or processing data) to predict the behavior of materials under conditions not covered by the training data. We demonstrated that vCANNs can learn the viscoelastic behavior of advanced materials from a database as small as the one human experts typically need to calibrate their models. However, vCANNs can learn the material behavior in a fast and fully automated manner, and their application does not require any expert knowledge. These advantages make vCANNs a favorable tool to support the development of new advanced materials in academia and industry.
Of note, vCANNs are not only helpful from a practical perspective but can also promote our theoretical understanding. For example, it is often believed that the generalized Maxwell model with strain-dependent material parameters cannot describe strain-dependent relaxation curves accurately [53]. Interestingly, the application example on the rectus abdominis muscle presented above demonstrates that vCANNs are very well able to accomplish this. These findings raise the question of whether doubts about the capabilities of generalized Maxwell models are mainly a result of difficulties humans face in their proper calibration instead of fundamental shortcomings of this class of models. In such a way, vCANNs can help us with their automated and highly efficient calibration process to understand the actual capabilities and limits of generalized Maxwell models. Exploring this further may be an exciting avenue for future research.
## Acknowledgements
K. P. Abdolazizi and C. J. Cyron gratefully acknowledge financial support from TUHH within the I\({}^{3}\)-Lab 'Modellgestutztes maschinelles Lernen fur die Weichgewebsmodellierung in der Medizin'. We thank Guang Chen (Department of Mechanical Engineering, University of Connecticut) for sharing parts of his code with us, which is not used in the current version of vCANNs but which was helpful for us to develop ideas.
## Appendix A Transverse Isotropy
To illustrate the proposed constitutive model, we consider a transversely isotropic material. Transversely isotropic materials exhibit one preferred material direction. Material properties remain invariant with respect to rotations about and reflections from the planes orthogonal or parallel to this preferred direction. The preferred direction \(\mathbf{l}_{1}\) may be interpreted as the direction of a unidirectional family of fibers embedded into some isotropic matrix. We obtain from Eq. (7) the structural tensors
\[\mathbf{L}_{0}=\frac{1}{3}\mathbf{I},\qquad\mathbf{L}_{1}=\mathbf{l}_{1}\otimes\mathbf{l}_{1}. \tag{A.1}\]
Setting \(\mathbf{L}_{r1}=\mathbf{L}_{1}\), Eq. (7) yields the generalized structural tensors
\[\tilde{\mathbf{L}}_{r}=\frac{1}{3}\left(1-w_{r1}\right)\mathbf{I}+w_{r1}\mathbf{L}_{1},\quad r=1,2,\ldots,R. \tag{A.2}\]
The generalized structural tensor Eq. (A.2) describes a transversely isotropic fiber dispersion with rotational symmetry around a mean fiber direction aligned with \(\mathbf{l}_{1}\)[106]. Unidirectional alignment requires the uncoupling of the two contributions \(\mathbf{I}\) and \(\mathbf{L}_{1}\). Hence, setting \(R=2\), \(w_{11}=w_{20}=0\), and \(w_{10}=w_{21}=1\) results in
\[\tilde{\mathbf{L}}_{1}=\frac{1}{3}\mathbf{I},\qquad\tilde{\mathbf{L}}_{2}=\mathbf{L}_{1}. \tag{A.3}\]
With Eqs. (6) and (20), the generalized invariants are
\[\tilde{I}_{1}=\frac{1}{3}\operatorname{tr}\left(\mathbf{C}\right),\qquad\tilde{J}_{1}=\frac{1}{3}\operatorname{tr}\left(\operatorname{cof}\mathbf{C}\right),\qquad\tilde{I}_{2}=\operatorname{tr}\left(\mathbf{C}\mathbf{L}_{1}\right),\qquad\tilde{J}_{2}=\operatorname{tr}\left(\left(\operatorname{cof}\mathbf{C}\right)\mathbf{L}_{1}\right),\qquad\mathrm{III}_{\mathbf{C}}=\det\mathbf{C}=1, \tag{A.4}\]
and
\[\dot{\tilde{I}}_{1}=\frac{1}{3}\operatorname{tr}\left(\dot{\mathbf{C}}\right),\quad\dot{\tilde{J}}_{1}=\frac{1}{3}\operatorname{tr}\left(\operatorname{cof}\dot{\mathbf{C}}\right),\quad\dot{\tilde{I}}_{2}=\operatorname{tr}\left(\dot{\mathbf{C}}\mathbf{L}_{1}\right),\quad\dot{\tilde{J}}_{2}=\operatorname{tr}\left(\left(\operatorname{cof}\dot{\mathbf{C}}\right)\mathbf{L}_{1}\right),\quad\mathrm{III}_{\dot{\mathbf{C}}}=\det\dot{\mathbf{C}}, \tag{A.5}\]
such that
\[\tilde{\mathcal{I}}=\left\{\tilde{I}_{1},\tilde{J}_{1},\tilde{I}_{2},\tilde{J}_{2}\right\},\qquad\tilde{\dot{\mathcal{I}}}=\left\{\dot{\tilde{I}}_{1},\dot{\tilde{J}}_{1},\dot{\tilde{I}}_{2},\dot{\tilde{J}}_{2},\mathrm{III}_{\dot{\mathbf{C}}}\right\},\qquad\mathcal{I}=\tilde{\mathcal{I}}\cup\tilde{\dot{\mathcal{I}}}. \tag{A.6}\]
According to Eq. (12), the instantaneous elastic second Piola-Kirchhoff stress of a transversely isotropic material with unidirectional fiber alignment can be computed by differentiating the strain energy function
\[\Psi=\Psi\left(\tilde{\mathcal{I}},\mathbf{f}\right) \tag{A.7}\]
with respect to \(\mathbf{C}\), giving
\[\mathbf{S}^{e}=-p\mathbf{C}^{-1}+2\left(\frac{\partial\Psi}{\partial\tilde{I}_{1}}\mathbf{I}-\frac{\partial\Psi}{\partial\tilde{J}_{1}}\mathbf{C}^{-2}\right)+2\left(\frac{\partial\Psi}{\partial\tilde{I}_{2}}\mathbf{L}_{1}-\frac{\partial\Psi}{\partial\tilde{J}_{2}}\mathbf{C}^{-1}\mathbf{L}_{1}\mathbf{C}^{-1}\right). \tag{A.8}\]
The reduced relaxation functions Eq. (22) simplify to
\[G_{r}=G_{r}\left(t;\mathcal{I},\mathbf{f}\right),\quad r=1,2. \tag{A.9}\]
Within the proposed framework of anisotropic nonlinear viscoelasticity, Eqs. (A.8) and (A.9) constitute the most general expressions for the stress and reduced relaxation functions of a transversely isotropic material with unidirectional fiber alignment. For practical applications, it is often useful to uncouple \(\Psi\) and \(G_{r}\) with respect to the generalized structural tensors:
\[\Psi=\Psi_{1}(\tilde{I}_{1},\tilde{J}_{1},\mathbf{f})+\Psi_{2}(\tilde{I}_{2},\tilde{J}_{2},\mathbf{f}), \tag{A.10}\]
\[G_{1}=G_{1}\left(t;\tilde{I}_{1},\tilde{J}_{1},\dot{\tilde{I}}_{1},\dot{\tilde{J}}_{1},\mathrm{III}_{\dot{\mathbf{C}}},\mathbf{f}\right),\qquad\qquad G_{2}=G_{2}\left(t;\tilde{I}_{2},\tilde{J}_{2},\dot{\tilde{I}}_{2},\dot{\tilde{J}}_{2},\mathrm{III}_{\dot{\mathbf{C}}},\mathbf{f}\right). \tag{A.11}\]
Uncoupling can significantly accelerate the training process of vCANNs. We can identify \(\Psi_{1}\) and \(G_{1}\) with the isotropic strain energy function and reduced relaxation function, respectively. By contrast, \(\Psi_{2}\) and \(G_{2}\) represent the anisotropic strain energy function and reduced relaxation function. This example illustrates our proposed framework's versatility and that it includes important classes of anisotropy as special cases.
## Appendix B Numerical time integration
In this section, we provide the derivation of the numerical time-stepping scheme used within the vCANN framework. We are interested in computing the viscous overstresses (Eq. (28))
\[\mathbf{Q}_{r\alpha}=\int_{-\infty}^{t}g_{r\alpha}(\mathcal{I},\mathbf{f})\exp\left(-\frac{t-s}{\tau_{r\alpha}(\mathcal{I},\mathbf{f})}\right)\dot{\mathbf{S}}_{r}^{e}\operatorname{d}\!s. \tag{B.1}\]
To this end, we recall the evolution equation of a single Maxwell branch with a strain (rate)-dependent relaxation coefficient and time. The evolution of the viscous overstress \(\mathbf{Q}_{r\alpha}\) is governed by the linear ODE of first order with variable coefficients and with some known but otherwise arbitrary initial value \(\mathbf{Q}_{r\alpha}^{n}\) at an arbitrary time point \(t^{n}\),
\[\dot{\mathbf{Q}}_{r\alpha}+\frac{\mathbf{Q}_{r\alpha}}{\tau_{r\alpha}(\mathcal{I},\mathbf{f})}=g_{r\alpha}(\mathcal{I},\mathbf{f})\dot{\mathbf{S}}_{r}^{e},\quad\mathbf{Q}_{r\alpha}^{n}=\mathbf{Q}_{r\alpha}(t^{n}). \tag{B.2}\]
This equation can be solved by a time-stepping scheme after discretizing time into a number of time points \(t^{i}\). Consider the small time interval \([t^{n},t^{n+1}]\) between time points \(t^{n}\) and \(t^{n+1}\) with time step size \(\Delta t=t^{n+1}-t^{n}\). For sufficiently small \(\Delta t\) (and assuming a sufficiently smooth problem), \(\tau_{r\alpha}\) and \(g_{r\alpha}\) can be approximated throughout the whole time interval by the averages of their values at its beginning and end point:
\[\bar{\tau}_{r\alpha} =\frac{\left(\tau_{r\alpha}\right)^{n+1}+\left(\tau_{r\alpha} \right)^{n}}{2}, \bar{g}_{r\alpha} =\frac{\left(g_{r\alpha}\right)^{n+1}+\left(g_{r\alpha}\right)^{n }}{2}.\] (B.3)
In a displacement-driven setting, \(\bar{\tau}_{r\alpha}\) and \(\bar{g}_{r\alpha}\) are known since they depend on the prescribed deformation (rate) at the considered times. With the approximation Eq. (B.3), Eq. (B.2) becomes a linear ODE of first order with constant coefficients and with some known initial value:
\[\dot{\mathbf{Q}}_{r\alpha}+\frac{\mathbf{Q}_{r\alpha}}{\bar{\tau}_{r\alpha}}= \bar{g}_{r\alpha}\dot{\mathbf{S}}_{r}^{e},\quad\mathbf{Q}_{r\alpha}^{n}= \mathbf{Q}_{r\alpha}(t^{n}).\] (B.4)
Multiplying both sides of Eq. (B.4) by the integrating factor \(\exp(t/\bar{\tau}_{r\alpha})\) and applying the product rule gives
\[\frac{\mathrm{d}}{\mathrm{d}t}\left[\mathbf{Q}_{r\alpha}\exp\left(\frac{t}{ \bar{\tau}_{r\alpha}}\right)\right]=\bar{g}_{r\alpha}\dot{\mathbf{S}}_{r}^{e} \exp\left(\frac{t}{\bar{\tau}_{r\alpha}}\right).\] (B.5)
Integrating Eq. (B.5) from \(t^{n}\) to \(t^{n+1}\) yields
\[\exp\left(\frac{t^{n+1}}{\bar{\tau}_{r\alpha}}\right)\mathbf{Q}_{r\alpha}^{n +1}-\exp\left(\frac{t^{n}}{\bar{\tau}_{r\alpha}}\right)\mathbf{Q}_{r\alpha}^{ n}=\int_{t^{n}}^{t^{n+1}}\exp\left(\frac{t}{\bar{\tau}_{r\alpha}}\right) \bar{g}_{r\alpha}\dot{\mathbf{S}}_{r}^{e}\,\mathrm{d}t\] (B.6)
which can subsequently be solved for
\[\mathbf{Q}_{r\alpha}^{n+1} =\exp\left(\frac{t^{n}}{\bar{\tau}_{r\alpha}}\right)\exp\left(- \frac{t^{n+1}}{\bar{\tau}_{r\alpha}}\right)\mathbf{Q}_{r\alpha}^{n}+\int_{t^{ n}}^{t^{n+1}}\exp\left(\frac{t}{\bar{\tau}_{r\alpha}}\right)\exp\left(- \frac{t^{n+1}}{\bar{\tau}_{r\alpha}}\right)\bar{g}_{r\alpha}\dot{\mathbf{S}}_{ r}^{e}\,\mathrm{d}t\] (B.7) \[=\exp\left(-\frac{\Delta t}{\bar{\tau}_{r\alpha}}\right)\mathbf{Q }_{r\alpha}^{n}+\int_{t^{n}}^{t^{n+1}}\exp\left(-\frac{t^{n+1}-t}{\bar{\tau}_{ r\alpha}}\right)\bar{g}_{r\alpha}\dot{\mathbf{S}}_{r}^{e}\,\mathrm{d}t\] (B.8) \[\approx\exp\left(-\frac{\Delta t}{\bar{\tau}_{r\alpha}}\right) \mathbf{Q}_{r\alpha}^{n}+\exp\left(-\frac{\Delta t}{2\bar{\tau}_{r\alpha}} \right)\bar{g}_{r\alpha}\int_{t^{n}}^{t^{n+1}}\dot{\mathbf{S}}_{r}^{e}\, \mathrm{d}t\] (B.9) \[=\exp\left(-\frac{\Delta t}{\bar{\tau}_{r\alpha}}\right)\mathbf{Q }_{r\alpha}^{n}+\exp\left(-\frac{\Delta t}{2\bar{\tau}_{r\alpha}}\right)\bar{ g}_{r\alpha}\left[\left(\mathbf{S}_{r}^{e}\right)^{n+1}-\left(\mathbf{S}_{r}^{e} \right)^{n}\right]\] (B.10)
which yields a recurrence update formula for the viscous overstress at time \(t^{n+1}\), given we know the state at time \(t^{n}\). We used the mid-point rule on the integral in Eq. (B.8), approximating the time variable \(t\) by \((t^{n}+t^{n+1})/2\).
An alternative update formula for the overstress \(\mathbf{Q}_{r\alpha}^{n+1}\) can be obtained by approximating \(\dot{\mathbf{S}}_{r}^{e}\approx\frac{\left(\mathbf{S}_{r}^{e}\right)^{n+1}- \left(\mathbf{S}_{r}^{e}\right)^{n}}{\Delta t}\) directly in Eq. (B.8), which leads to
\[\mathbf{Q}_{r\alpha}^{n+1} =\exp\left(-\frac{\Delta t}{\bar{\tau}_{r\alpha}}\right)\mathbf{Q }_{r\alpha}^{n}+\bar{g}_{r\alpha}\frac{\left(\mathbf{S}_{r}^{e}\right)^{n+1}- \left(\mathbf{S}_{r}^{e}\right)^{n}}{\Delta t}\int_{t^{n}}^{t^{n+1}}\exp\left( -\frac{t^{n+1}-t}{\bar{\tau}_{r\alpha}}\right)\mathrm{d}t\] (B.11) \[=\exp\left(-\frac{\Delta t}{\bar{\tau}_{r\alpha}}\right)\mathbf{Q }_{r\alpha}^{n}+\frac{\bar{g}_{r\alpha}\bar{\tau}_{r\alpha}}{\Delta t}\left[1- \exp\left(-\frac{\Delta t}{\bar{\tau}_{r\alpha}}\right)\right]\left[\left( \mathbf{S}_{r}^{e}\right)^{n+1}-\left(\mathbf{S}_{r}^{e}\right)^{n}\right],\] (B.12)
giving an update formula for the overstress at time \(t^{n+1}\). The two update formulae (B.10) and (B.12) are very similar to common recurrence formulae for the stress update found in the literature [107, 31, 80]. The major difference to most formulae reported in the literature is that \(\bar{\tau}_{r\alpha}\) and \(\bar{g}_{r\alpha}\) are not constant but change each time step. In essence, in a discrete time stepping scheme, one solves a different QLV problem in each time step.
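A direct translation of the recurrence formula Eq. (B.12) into code is straightforward; the following is a minimal NumPy sketch (the function name is ours, and the stress arguments may be scalars or 3x3 arrays):

```python
import numpy as np

def update_overstress(Q_n, S_n, S_np1, tau_bar, g_bar, dt):
    """Recurrence update of Eq. (B.12): exponential decay of the previous
    overstress plus a weighted increment of the instantaneous elastic stress.
    tau_bar and g_bar are the interval averages of Eq. (B.3)."""
    decay = np.exp(-dt / tau_bar)
    return decay * Q_n + g_bar * tau_bar / dt * (1.0 - decay) * (S_np1 - S_n)
```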
The algorithmic linearization of the stress tensor \(\mathbf{S}^{n+1}\) is essential for solving nonlinear boundary value problems. With Eq. (28), we obtain an update rule for the stress tensor \(\mathbf{S}^{n+1}\) at time point \(t^{n+1}\):
\[\mathbf{S}^{n+1}=-\left(p\mathbf{C}^{-1}\right)^{n+1}+\sum_{r=1}^{R}\left( \left(\mathbf{S}_{r}^{\infty}\right)^{n+1}+\sum_{\alpha=1}^{N_{r}}\mathbf{Q}_{r \alpha}^{n+1}\right).\] (B.13)
## Appendix C Structure learning block
## Appendix D Material parameters for the Ogden and HGO model
## Appendix E Model training and hyperparameters
In the following we list the vCANNs trained in Sec. 4 together with their hyperparameters. The selection of activation functions is discussed in Sec. 3. For training the vCANNs, we used the mean squared error (MSE)
\begin{table}
\begin{tabular}{cccccccccccc} \hline \hline \(\hat{\tau}_{a,1}^{\text{iso}}\) & \(\hat{\tau}_{b,1}^{\text{iso}}\) & \(\hat{g}_{a,1}^{\text{iso}}\) & \(\hat{g}_{b,1}^{\text{iso}}\) & \(\hat{\tau}_{a,2}^{\text{iso}}\) & \(\hat{\tau}_{b,2}^{\text{iso}}\) & \(\hat{g}_{a,2}^{\text{iso}}\) & \(\hat{g}_{b,2}^{\text{iso}}\) & \(\hat{\tau}_{a}^{\text{ani}}\) & \(\hat{\tau}_{b}^{\text{ani}}\) & \(\hat{g}_{a}^{\text{ani}}\) & \(\hat{g}_{b}^{\text{ani}}\) \\ \hline
20.0 & -7.0 & 0.4 & -2.8 & 1.0 & 4.0 & 0.1 & -2.8 & 10.0 & 0.7 & 0.8 & -1.1 \\ \hline \hline \end{tabular}
\end{table}
Table D.2: Isotropic and anisotropic viscoelastic material parameters used for synthetic data generation (Eqs. (34) and (35))
\begin{table}
\begin{tabular}{cccc} \hline \hline \(\mu_{1}\) & \(\alpha_{1}\) & \(k_{1}\) & \(k_{2}\) \\ \hline
0.3 & 3.7 & 0.3 & 0.4 \\ \hline \hline \end{tabular}
\end{table}
Table D.1: Isotropic and anisotropic elastic material parameters used for synthetic data generation
Figure 12: Schematic illustration of the structure learning block: The deformation (rate) tensors \(\mathbf{C}\), \(\dot{\mathbf{C}}\), and the feature vector \(\mathbf{f}\) serve as input to the structure learning block. The preferred material directions \(\mathbf{l}_{rj}\) and scalar weights \(w_{rj}\) are learnt from the feature vector \(\mathbf{f}\) within dedicated neural networks \(\mathcal{N}_{\tilde{\mathbf{L}}_{r}}\). Inserting the outputs of \(\mathcal{N}_{\tilde{\mathbf{L}}_{r}}\) in Eq. (7) yields the generalized structural tensors \(\tilde{\mathbf{L}}_{r}\). Together with the deformation (rate) tensors \(\mathbf{C}\) and \(\dot{\mathbf{C}}\), we obtain from Eqs. (6) and (20) the generalized invariants \(\tilde{\mathcal{I}}\) and \(\tilde{\dot{\mathcal{I}}}\), respectively. In turn, the generalized invariants \(\tilde{\mathcal{I}}\) and \(\tilde{\dot{\mathcal{I}}}\) themselves serve as input to the main part of the vCANN in Fig. 2.
between the actual stress response and the one estimated by the vCANN as the loss function. The gradients of the loss with respect to model parameters are calculated by the backpropagation algorithm using automatic differentiation. The training was terminated based on early stopping. All weights and biases were initialized with Glorot/Xavier uniform initializer and zeros, respectively. No regularization (weight decay) was applied to the weights and biases during training. No dropout layers were used. All vCANNs were trained with Adam optimizer (\(\beta_{1}=0.9\), \(\beta_{2}=0.999\), \(\varepsilon=10^{-7}\)).
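In Keras, this training configuration amounts to a few lines; below is a minimal, runnable sketch in which the simple `Sequential` model and the random arrays are placeholders standing in for the assembled vCANN and the experimental data (epochs and patience are illustrative):

```python
import numpy as np
import tensorflow as tf

# Placeholder model: Dense layers default to Glorot uniform kernels and
# zero biases, matching the initialization described above.
model = tf.keras.Sequential([tf.keras.layers.Dense(16, activation="softplus"),
                             tf.keras.layers.Dense(1)])
model.compile(optimizer=tf.keras.optimizers.Adam(beta_1=0.9, beta_2=0.999,
                                                 epsilon=1e-7),
              loss="mse")  # MSE between measured and predicted stresses

x, s = np.random.rand(64, 5), np.random.rand(64, 1)  # placeholder data
model.fit(x, s, validation_split=0.2, epochs=100,
          callbacks=[tf.keras.callbacks.EarlyStopping(patience=10)])
```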
### Anisotropic viscoelasticity with synthetic data, Sec. 4.1
The material is transversely isotropic, exhibits strain-dependent but no strain rate-dependent viscous effects, and has no notable features (\(\mathbf{f}=\mathbf{0}\)). Thus, according to Eqs. (A.10) and (A.11), the vCANN is given by
\[\Psi=\Psi_{1}(\tilde{I}_{1},\tilde{J}_{1})+\Psi_{2}(\tilde{I}_{2},\tilde{J}_{2 }),\qquad\qquad G_{1}=G_{1}\left(t;\tilde{I}_{1},\tilde{J}_{1}\right),\qquad \qquad G_{2}=G_{2}\left(t;\tilde{I}_{2},\tilde{J}_{2}\right).\] (E.1)
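The reduced relaxation functions above are realized as generalized Maxwell (Prony) series with at most \(N_{r}^{max}\) elements, whose coefficients \(g_{\alpha}\) and relaxation times \(\tau_{\alpha}\) are predicted from the generalized invariants by the small networks \(\mathcal{N}_{\tau}\) and \(\mathcal{N}_{g}\) listed in the hyperparameter tables. A minimal sketch, assuming the usual normalization \(G(0)=1\) and treating `net_g` and `net_tau` as given callables (both names are illustrative placeholders):

```python
import numpy as np

def reduced_relaxation(t, invariants, net_g, net_tau):
    """Strain-dependent Prony-series relaxation function G(t; I~, J~) (sketch).

    net_g and net_tau stand in for the small feed-forward networks that map
    the generalized invariants to the relaxation coefficients g_alpha and
    relaxation times tau_alpha of the Maxwell elements.
    """
    g = np.asarray(net_g(invariants))      # shape (N,), nonnegative coefficients
    tau = np.asarray(net_tau(invariants))  # shape (N,), relaxation times
    g_inf = 1.0 - g.sum()                  # equilibrium part so that G(0) = 1
    return g_inf + np.sum(g * np.exp(-t / tau))
```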
### Passive viscoelastic response of the abdominal muscle, Sec. 4.2
The material is isotropic, exhibits strain-dependent but no strain rate-dependent viscous effects, and has no notable features (\(\mathbf{f}=\mathbf{0}\)). Thus, according to Eqs. (A.10) and (A.11), the vCANN is given by
\[\Psi=\Psi_{1}(\tilde{I}_{1},\tilde{J}_{1}),\qquad\qquad\qquad\qquad G_{1}=G_{1 }\left(t;\tilde{I}_{1},\tilde{J}_{1}\right).\] (E.2)
\begin{table}
\begin{tabular}{l l} \hline \hline Hyperparameter & Value \\ \hline _General_ & \\ Learning rate & 0.001 \\ Sparsity penalty parameter \(\Lambda\) & 0.001 \\ \hline _Instantaneous elastic stress (CANN)_ & \\ Convex & Yes \\ Number of neurons per hidden layer (\(\Psi_{1}\)) & \(\{32,32,32\}\) \\ Number of neurons per hidden layer (\(\Psi_{2}\)) & \(\{32,32,32\}\) \\ \hline _Reduced relaxation functions_ & \\ Maximal number of Maxwell elements \(N_{1}^{max}\) & 5 \\ Maximal number of Maxwell elements \(N_{2}^{max}\) & 5 \\ Time normalization \([T_{min},T_{max}]\) & \([10^{-2},10^{3}]\) s \\ Number of neurons per hidden layer of \(\mathcal{N}_{\tau_{1a}}\) & \(\{32,32,16\}\) \\ Number of neurons per hidden layer of \(\mathcal{N}_{g_{1a}}\) & \(\{32,32,16\}\) \\ Number of neurons per hidden layer of \(\mathcal{N}_{\tau_{2a}}\) & \(\{32,32,16\}\) \\ Number of neurons per hidden layer of \(\mathcal{N}_{g_{2a}}\) & \(\{32,32,16\}\) \\ \hline \hline \end{tabular}
\end{table}
Table 5: Hyperparameters of the vCANN from Sec. 4.1
### Viscoelastic modeling of VHB 4910, Sec. 4.3
The material is isotropic, exhibits strain-dependent but no strain rate-dependent viscous effects, and has no notable features (\(\mathbf{f}=\mathbf{0}\)). Thus, according to Eqs. (A.10) and (A.11), the vCANN is given by
\[\Psi=\Psi_{1}(\tilde{I}_{1},\tilde{J}_{1}),\qquad\qquad G_{1}=G_{1}\left(t;\tilde{I}_{1},\tilde{J}_{1}\right).\] (E.3)
### Blast load analysis of Polyvinyl Butyral, Sec. 4.4
The material is isotropic, exhibits strain-dependent and strain rate-dependent viscous effects, and has no notable features (\(\mathbf{f}=\mathbf{0}\)). Thus, according to Eqs. (A.10) and (A.11), the vCANN is given by
\[\Psi=\Psi_{1}(\tilde{I}_{1},\tilde{J}_{1}),\qquad\qquad G_{1}=G_{1}\left(t;\tilde{I}_{1},\tilde{J}_{1},\Pi_{\mathbf{C}}\right).\] (E.4)
\begin{table}
\begin{tabular}{l l} \hline \hline Hyperparameter & Value \\ \hline _General_ & \\ Learning rate & 0.001 \\ Sparsity penalty parameter \(\Lambda\) & 1.0 \\ \hline _Instantaneous elastic stress (CANN)_ & \\ Convex & Yes \\ Number of neurons per hidden layer & \(\{8,8,6\}\) \\ \hline _Reduced relaxation functions_ & \\ Maximal number of Maxwell elements \(N_{1}^{max}\) & 10 \\ Time normalization \([T_{min},T_{max}]\) & \([10^{-2},10^{3}]\) s \\ Number of neurons per hidden layer of \(\mathcal{N}_{\tau_{1\alpha}}\) & \(\{16,16,8\}\) \\ Number of neurons per hidden layer of \(\mathcal{N}_{g_{1\alpha}}\) & \(\{16,16,8\}\) \\ \hline \hline \end{tabular}
\end{table}
Table 6: Hyperparameters of the vCANN from Sec. 4.3
\begin{table}
\begin{tabular}{l l} \hline \hline Hyperparameter & Value \\ \hline _General_ & \\ Learning rate & 0.001 \\ Sparsity penalty parameter \(\Lambda\) & 0.0002 \\ \hline _Instantaneous elastic stress (CANN)_ & \\ Convex & Yes \\ \hline _Reduced relaxation functions_ & \\ Number of neurons per hidden layer of \(\mathcal{N}_{\tau_{1\alpha}}\) & \(\{32,32,16\}\) \\ Number of neurons per hidden layer of \(\mathcal{N}_{g_{1\alpha}}\) & \(\{32,32,16\}\) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Hyperparameters of the vCANN from Sec. 4.2
### Thermo-viscoelastic modeling of VHB 4905 data, Sec. 4.5
The material is isotropic, exhibits strain-dependent but no strain rate-dependent viscous effects, and its mechanical behavior is significantly temperature-dependent (\(\mathbf{f}=[\Theta]^{\mathrm{T}}\)). Thus, according to Eqs. (A.10) and (A.11), the vCANN is given by
\[\Psi=\Psi_{1}(\tilde{I}_{1},\tilde{J}_{1},\Theta),\qquad\qquad G_{1}=G_{1}\left(t;\tilde{I}_{1},\tilde{J}_{1},\Theta\right).\] (E.5)
\begin{table}
\begin{tabular}{l l} \hline \hline Hyperparameter & Value \\ \hline _General_ & \\ Learning rate & 0.0005 \\ Sparsity penalty parameter \(\Lambda\) & 0.0001 \\ \hline _Instantaneous elastic stress (CANN)_ & \\ Convex & Yes \\ Number of neurons per hidden layer & \(\{32,32,16\}\) \\ \hline _Reduced relaxation functions_ & \\ Maximal number of Maxwell elements \(N_{1}^{max}\) & 10 \\ Time normalization \([T_{min},T_{max}]\) & \([10^{-2},10^{3}]\) s \\ Number of neurons per hidden layer of \(\mathcal{N}_{\tau_{1\alpha}}\) & \(\{32,32,16\}\) \\ \hline \hline \end{tabular}
\end{table}
Table 7: Hyperparameters of the vCANN from Sec. 4.5
\begin{table}
\begin{tabular}{l l} \hline \hline Hyperparameter & Value \\ \hline _General_ & \\ Learning rate & 0.0014 \\ Sparsity penalty parameter \(\Lambda\) & 0.025 \\ \hline _Instantaneous elastic stress (CANN)_ & \\ Convex & Yes \\ Number of neurons per hidden layer & \(\{32,32,16\}\) \\ \hline _Reduced relaxation functions_ & \\ Maximal number of Maxwell elements \(N_{1}^{max}\) & 10 \\ Time normalization \([T_{min},T_{max}]\) & \([10^{-2},10^{3}]\) s \\ Number of neurons per hidden layer of \(\mathcal{N}_{\tau_{1\alpha}}\) & \(\{32,32,16\}\) \\ \hline \hline \end{tabular}
\end{table}
Table 8: Hyperparameters of the vCANN from Sec. 4.4
## Appendix F VHB 4910 Data
|
2303.16464 | Lipschitzness Effect of a Loss Function on Generalization Performance of
Deep Neural Networks Trained by Adam and AdamW Optimizers | The generalization performance of deep neural networks with regard to the
optimization algorithm is one of the major concerns in machine learning. This
performance can be affected by various factors. In this paper, we theoretically
prove that the Lipschitz constant of a loss function is an important factor to
diminish the generalization error of the output model obtained by Adam or
AdamW. The results can be used as a guideline for choosing the loss function
when the optimization algorithm is Adam or AdamW. In addition, to evaluate the
theoretical bound in a practical setting, we choose the human age estimation
problem in computer vision. For assessing the generalization better, the
training and test datasets are drawn from different distributions. Our
experimental evaluation shows that the loss function with a lower Lipschitz
constant and maximum value improves the generalization of the model trained by
Adam or AdamW. | Mohammad Lashkari, Amin Gheibi | 2023-03-29T05:33:53Z | http://arxiv.org/abs/2303.16464v3 | Lipschitzness Effect of a Loss Function on Generalization Performance of Deep Neural Networks Trained by Adam and AdamW Optimizers
###### Abstract
The generalization performance of deep neural networks with regard to the optimization algorithm is one of the major concerns in machine learning. This performance can be affected by various factors. In this paper, we theoretically prove that the Lipschitz constant of a loss function is an important factor to diminish the generalization error of the output model obtained by Adam or AdamW. The results can be used as a guideline for choosing the loss function when the optimization algorithm is Adam or AdamW.
In addition, to evaluate the theoretical bound in a practical setting, we choose the human age estimation problem in computer vision. To assess generalization better, the training and test datasets are drawn from different distributions. Our experimental evaluation shows that the loss function with a lower Lipschitz constant and maximum value improves the generalization of the model trained by Adam or AdamW.
+
Footnote †: journal: Amirkabir University of Technology (Tehran Polytechnic)
## 1 Introduction
The adaptive moment estimation (Adam) algorithm is one of the most widely used optimizers for training deep learning models. Adam is an efficient algorithm for stochastic optimization, based on adaptive estimates of first-order and second-order moments of the gradient [1]. The method is computationally efficient and requires little memory. Adam is much more stable than stochastic gradient descent (SGD), and the experiments of [1] show that it is faster than previous stabilized versions of SGD, such as SGD-Nesterov [2], RMSProp [3] and AdaGrad [4], at minimizing the loss function in the training phase. It has recently been used in several machine learning problems and performs well. Thus, any improvement in the generalization performance of a model trained by Adam is valuable.
One of the main concerns in machine learning is the generalization performance of deep neural networks (DNNs). A generalization measurement criterion is the generalization error, which is defined as the difference between the true risk and the empirical risk of the output model [5]. One established way to address the generalization error of machine learning models, in order to derive an upper bound for it, is the notion of uniform stability [5; 6; 7]. Roughly speaking, uniform stability measures the difference in the error of the output model caused by a slight change in the training set. The pioneering work of [6] shows that if a deterministic learning algorithm is more stable, then the generalization error of the ultimate model has a tighter upper bound. In the follow-up work of [7], Hardt _et al._ extend the notion of uniform stability to randomized learning algorithms to derive an upper bound for the expected generalization error of a DNN trained by SGD. They prove that SGD is more stable, provided that the number of iterations is sufficiently small. In the recent work of [5], Ali Akbari _et al._ derive a high-probability generalization error bound instead of an expected generalization error bound. They demonstrate that if SGD is more uniformly stable, then the generalization error bound is tighter. They also prove a direct relationship between the uniform stability of SGD and properties of the loss function, i.e., its Lipschitzness, which connects the generalization error to the Lipschitz constant of the loss function.
In our work, Adam is central instead of SGD. We establish the relationship between the uniform stability of Adam and the Lipschitzness of a loss function. In this way, we connect the generalization error of a DNN trained by Adam to the properties of the loss function, including its Lipschitzness. Subsequently, we assess the generalization performance of a DNN trained by the AdamW optimizer, which decouples weight decay from the estimates of moments to make the regularization technique more effective [8]. We connect the uniform stability of AdamW and the
generalization error of a DNN trained by it to the Lipschitzness of a loss function. In the experiments, we evaluate our theoretical results on the human age estimation problem.
Human age estimation is one of the most significant topics in a wide variety of applications such as age-specific advertising, customer profiling, or recommending new things. However, this problem poses many challenges. Face makeup, insufficient light, skin color, and unique features of each person are factors that can affect the accuracy of the model. For these reasons, collecting more data cannot necessarily reduce the generalization error of the final model. The practical results show that choosing a stable loss function can improve the accuracy of the model trained by Adam or AdamW.
## 2 Related Work
There is a variety of approaches to derive upper bounds for the generalization error, including algorithmic stability [5; 6; 7; 9; 10; 11], robustness [12; 13], and PAC-Bayesian theory [14; 15]. Each of these approaches theoretically analyzes some influential factors and gives researchers information which can enhance the generalization performance of deep learning models.
The notion of uniform stability was first introduced in [6] for deterministic algorithms. It was extended to randomized algorithms in the work of [7] to derive an expected upper bound for the generalization error which is directly related to the number of training epochs of SGD.
Recently, based on the uniform stability definition of SGD, the generalization error of a DNN trained by it has been upper-bounded, with high probability, by a vanishing function which is directly related to the Lipschitz constant and the maximum value of a loss function [5]. In our work, we analyze the uniform stability of Adam and AdamW and its relationship with the Lipschitz constant of a loss function. We show that the loss function proposed in [5] stabilizes the training process and reduces the generalization error when the optimization algorithm is Adam or AdamW.
## 3 Preliminaries
Let \(X\) and \(Y\subseteq\mathbb{R}^{\mathrm{M}}\) be the input and output spaces of a problem, respectively, and \(F\) be the set of all mappings from \(X\) to \(Y\). A learning problem is to find \(f^{\theta}:X\to Y\), parameterized by \(\theta\in H\), where \(H\) is the set of all possible values for the neural network parameters. Assume \(\ell:Y\times Y\rightarrow\mathbb{R}^{+}\) denotes the loss function of the problem. The goal of a learning algorithm is to minimize the true risk \(R_{true}(f^{\theta})\coloneqq\mathbb{E}_{(\mathrm{x},\mathrm{y})}\left[\ell(f^{\theta}(\mathrm{x}),\mathrm{y})\right]\) where \((\mathrm{x},\mathrm{y})\in X\times Y\):
\[f^{\theta}_{true}=\operatorname*{argmin}_{f^{\theta}\in F}R_{true}(f^{\theta}). \tag{1}\]
Since the distribution of \(X\times Y\) is unknown, \(f^{\theta}_{true}\) cannot be found via the equation (1). Hence, we have to estimate the true risk. Let \(S\in(X\times Y)^{N}\) be the training set. The true risk is estimated by the empirical risk \(R_{emp}(f^{\theta})\coloneqq\frac{1}{N}\sum_{i=1}^{N}\ell(f^{\theta}(\mathrm{x_{i}}),\mathrm{y_{i}})\), in which \(N=|S|\) and \((\mathrm{x_{i}},\mathrm{y_{i}})\in S\). In current deep learning algorithms, training the model means minimizing \(R_{emp}(f^{\theta})\). In the rest of this paper, in the theorems and proofs, the loss function is denoted by \(\ell(\hat{\mathrm{y}},\mathrm{y})\) where \(\hat{\mathrm{y}}\) is the prediction vector and \(\mathrm{y}\) is the target vector.
**Definition 3.1** (Partition): _Suppose that \(S\) is a training set of size \(N\). Let \(1<k<N\) be a number such that \(N\) is divisible by \(k\) (if this is not possible, we repeat a sample enough times to make divisibility possible). A partition of \(S\), which we denote by \(B_{S}=\{B_{1},B_{2},\ldots,B_{k}\}\), is a set of \(k\) subsets of \(S\) such that every sample is in exactly one set and the size of each subset is \(\frac{N}{k}\)._
We use Definition 3.1 to formalize the training process of deep learning models mathematically. Assume \(S\) is the training set and \(B_{S}=\{B_{1},B_{2},\ldots,B_{k}\}\) is a partition of it. Each element of \(B_{S}\) represents a mini-batch of \(S\). Without loss of generality, we suppose that in each iteration of the optimization algorithm, a mini-batch \(B_{i}\in B_{S}\) is randomly selected and the parameters are updated on it. This is done by the algorithm using a random sequence \(R=(r_{1},r_{2},\ldots,r_{T})\) of indices of elements in \(B_{S}\), where \(T\) is the number of iterations. We use \(f^{\theta}_{B_{S},R}\) to denote the output model of the optimization algorithm, applied to a partition \(B_{S}\) and a random sequence \(R\).
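For concreteness, the partition \(B_{S}\) and the random sequence \(R\) can be sketched in a few lines of Python (an illustrative sketch; the shuffling before splitting and the uniform sampling of indices are assumptions, not prescribed by the text):

```python
import random

def make_partition(S, k):
    """Split the training set S into k disjoint mini-batches of size N/k."""
    S = list(S)
    random.shuffle(S)
    b = len(S) // k
    return [S[i * b:(i + 1) * b] for i in range(k)]

def random_sequence(k, T):
    """Draw the sequence R of T mini-batch indices used during training."""
    return [random.randrange(k) for _ in range(T)]
```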
**Definition 3.2** (Generalization Error): _Given a partition \(B_{S}\) of a training set \(S\) and a sequence \(R\) of random indices of \(B_{S}\) elements, the generalization error of \(f^{\theta}_{B_{S},R}\) trained by an arbitrary optimization algorithm, is defined as \(E(f^{\theta}_{B_{S},R})=R_{true}(f^{\theta}_{B_{S},R})-R_{emp}(f^{\theta}_{B_ {S},R})\)._
**Definition 3.3** (Lipschitzness): _Let \(Y\subseteq\mathbb{R}^{\mathrm{M}}\) be the output space of a problem. A loss function \(\ell(\hat{\mathrm{y}},\mathrm{y})\) is \(\gamma\)-Lipschitz with regard to its first argument, if \(\forall\,\mathrm{y}_{1},\mathrm{y}_{2}\in Y\), we have:_
\[\left|\ell(\mathrm{y}_{1},\mathrm{y})-\ell(\mathrm{y}_{2},\mathrm{y})\right| \leq\gamma\left\|\mathrm{y}_{1}-\mathrm{y}_{2}\right\|,\]
_where \(\left\|.\right\|\) is the \(L_{2}\) norm._
As mentioned before, the uniform stability of the optimization algorithm affects the generalization performance of the ultimate model \(f^{\theta}_{B_{S},R}\) [5]. We follow the uniform stability definition of the work [7] to link the Lipschitzness of the loss function to the generalization error of \(f^{\theta}_{B_{S},R}\). For simplicity, moving forward, we denote \(f^{\theta}_{B_{S},R}\) by \(f_{B_{S},R}\) and \(E(f^{\theta}_{B_{S},R})\) by \(E(f_{B_{S},R})\).
Along with the notion of uniform stability which we define in Section 5, another concept called bounded difference condition (BDC) affects the generalization error [5]:
**Definition 3.4** (BDC): _Consider two numbers \(k,T\in\mathbb{N}\). If \(G:\{1,2,\ldots,k\}^{T}\rightarrow\mathbb{R}^{+}\) is a measurable function and, for all \(R,R^{\prime}\in\mathrm{Dom}(G)\) which differ in only two elements, a constant \(\rho\) exists such that_
\[\sup_{R,R^{\prime}}\left|G(R^{\prime})-G(R)\right|\leq\rho,\]
_then, \(G(.)\) holds bounded difference condition (BDC) with the constant \(\rho\). We use the \(\rho\)-BDC expression to denote that a function holds this condition with the constant \(\rho\)._
In Definition 3.4, we assumed the slight change in the input to be a difference in two elements; the reason for this will become clear in the proofs of the theorems. Intuitively, if a function satisfies the above condition, its value does not change much under a slight change of the input. Such functions are concentrated around their expectation with respect to the input random sequence \(R\) [16].
## 4 Formulation of Age Estimation Problem
Our problem in the experimental part is human age estimation. Let \((\mathrm{x},y)\) be a training sample where \(\mathrm{x}\) is the input image of a person's face and \(y\in\mathbb{N}\) is the corresponding age label. Due to the correlation of neighboring ages, classification methods based on single-label learning [17] are not efficient because these methods ignore this correlation. Also, regression-based models are not stable for this problem [5].
For the aforementioned reasons, another method based on the label distribution learning (LDL) framework, first introduced in the work of [18], is used for this problem [5]. In this method, \(y\) is replaced by \(\mathrm{y}=[y_{1},y_{2},\ldots,y_{\mathrm{M}}]\in\mathbb{R}^{\mathrm{M}}\) where \(y_{i}\) is the probability that facial image \(\mathrm{x}\) belongs to class \(i\). As usual, \(\mathrm{y}\) is assumed to be a normal distribution, centered at \(y\) with standard deviation \(\sigma\) which controls the spread of the distribution [18]. Therefore, the output space \(Y\) is a subset of \(\mathbb{R}^{\mathrm{M}}\) and our objective is to find \(f^{\theta}\) which maps \(\mathrm{x}\) to \(\mathrm{y}\in Y\).
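A small helper illustrates how the scalar age label is replaced by a discretized normal distribution over the \(\mathrm{M}\) classes (a sketch; the concrete value of \(\sigma\) is an assumption, since the text only states that it controls the spread):

```python
import numpy as np

def age_to_label_distribution(y, num_classes, sigma=2.0):
    """Replace the scalar age label y by a discretized normal distribution
    centered at y with standard deviation sigma (sigma is an assumed value)."""
    ages = np.arange(num_classes)
    dist = np.exp(-0.5 * ((ages - y) / sigma) ** 2)
    return dist / dist.sum()  # normalize so that the entries sum to one
```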
### Loss Functions for Age Estimation Problem
Let \((\mathrm{x},\mathrm{y})\in S\) be a training instance where \(\mathrm{x}\) represents the facial image and \(\mathrm{y}\in\mathbb{R}^{\mathrm{M}}\) is the corresponding label distribution. Consider \(\hat{\mathrm{y}}=f^{\theta}(\mathrm{x})\), representing the label distribution estimated by \(f^{\theta}\). To obtain \(f^{\theta}\), a convex loss function named Kullback-Leibler (KL) divergence has been widely utilized. The KL loss function is defined as below:
\[\ell_{KL}(\hat{\mathrm{y}},\mathrm{y})=\sum_{m=1}^{\mathrm{M}}y_{m}\log(\frac{ y_{m}}{\hat{y}_{m}}).\]
As an alternative to KL, another convex loss function called Generalized Jeffries-Matusita (GJM) distance has been proposed in [5] under the LDL framework, defined as
\[\ell_{GJM}(\hat{\mathrm{y}},\mathrm{y})=\sum_{m=1}^{\mathrm{M}}y_{m}\left|1- \left(\frac{\hat{y}_{m}}{y_{m}}\right)^{\alpha}\right|^{\frac{1}{2}},\]
where \(\alpha\in(0,1]\). According to the experiments of [5], the best value of \(\alpha\) for good generalization is \(0.5\). It has been proved that if \(\alpha=0.5\), then the Lipschitz constant and the maximum value of GJM are less than the Lipschitz constant and the maximum value of KL, respectively [5].
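Both losses can be written down directly from the formulas above. The sketch below follows the displayed definitions; the small \(\varepsilon\) added to numerators and denominators is an implementation detail to avoid division by zero for empty bins, not part of the definitions:

```python
import numpy as np

def kl_loss(y_hat, y, eps=1e-12):
    """Kullback-Leibler divergence between target y and prediction y_hat."""
    return np.sum(y * np.log((y + eps) / (y_hat + eps)))

def gjm_loss(y_hat, y, alpha=0.5, eps=1e-12):
    """Generalized Jeffries-Matusita distance with the exponent 1/2 as
    displayed above; alpha = 0.5 is the value recommended in [5]."""
    return np.sum(y * np.abs(1.0 - ((y_hat + eps) / (y + eps)) ** alpha) ** 0.5)
```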
## 5 Uniform Stability and Generalization Error Analysis
The notion of uniform stability was first introduced in [6] for deterministic learning algorithms. They demonstrate that the smaller the stability measure of the learning algorithm, the tighter the generalization error bound is. However, their stability measure is limited to deterministic algorithms and is not appropriate for randomized learning algorithms such as Adam. Therefore, we follow [5, 7] to define the uniform stability measure for randomized optimization algorithms in general:
**Definition 5.1** (Uniform Stability): _Let \(S\) and \(S^{\prime}\) denote two training sets drawn from a distribution \(\mathbb{P}\). Suppose that \(B_{S}\) and \(B_{S^{\prime}}\) of equal size k, are two partitions of \(S\) and \(S^{\prime}\) respectively, which are different in only one element (mini-batch). Consider a random sequence \(R\) of \(\{1,2,\ldots k\}\) to select a mini-batch at each iteration of an optimization algorithm, \(A_{opt}\). If \(f_{B_{S},R}\) and \(f_{B_{S^{\prime}},R}\) are output models obtained by \(A_{opt}\) with the same initialization, then \(A_{opt}\) is \(\beta\)-uniformly stable with regard to a loss function \(\ell\), if_
\[\forall S,S^{\prime}\ \ \sup_{(\mathrm{x},\mathrm{y})}\mathbb{E}_{R}\left[| \ell(f_{B_{S^{\prime}},R}(\mathrm{x}),\mathrm{y})-\ell(f_{B_{S},R}(\mathrm{x} ),\mathrm{y})|\right]\leq\beta.\]
To evaluate the uniform stability of Adam and AdamW in order to prove its link to loss function properties, a lemma named **Growth recursion**, stated in [7] for SGD, is central to our analysis. In the following, we state this lemma for an arbitrary iterative optimization algorithm, but before stating the lemma, we need some definitions. Gradient-based optimization algorithms are iterative, and in each iteration, the network parameters are updated. Consider \(H\) as the set of all possible values for the network parameters. Let \(A_{opt}\) be an arbitrary iterative optimization algorithm that runs \(T\) iterations. In the \(t\)-th iteration, the parameter update computed in the last step of the loop is a function \(A^{t}:H\to H\) mapping \(\theta_{t-1}\) to \(\theta_{t}\) for each \(1\leq t\leq T\). We call \(A^{t}\) the **update rule** of \(A_{opt}\). Let us define two characteristics of an update rule: the update rule \(A^{t}(.)\) is \(\sigma\)**-bounded** if
\[\sup_{\theta\in H}\left\|\theta-A^{t}(\theta)\right\|\leq\sigma, \tag{2}\]
and it is \(\tau\)**-expansive** if
\[\sup_{\theta,\,\theta^{\prime}\in H}\frac{\left\|A^{t}(\theta)-A^{t}(\theta^{ \prime})\right\|}{\left\|\theta-\theta^{\prime}\right\|}\leq\tau, \tag{3}\]
where \(\left\|.\right\|\) is the \(L_{2}\) norm.
**Lemma 5.2** (Growth recursion): _[_7_]_ _Given two training sets \(S\) and \(S^{\prime}\), suppose that \(\theta_{0},\theta_{1},\ldots,\theta_{T}\) and \(\theta^{\prime}_{0},\theta^{\prime}_{1},\ldots\theta^{\prime}_{T}\) are two updates of network parameters with update rules \(A^{t}_{S}\) and \(A^{t}_{S^{\prime}}\), running on \(S\) and \(S^{\prime}\) respectively such that for each \(1\leq t\leq T\), \(\theta_{t}=A^{t}_{S}(\theta_{t-1})\) and \(\theta^{\prime}_{t}=A^{t}_{S^{\prime}}(\theta^{\prime}_{t-1})\). If \(A^{t}_{S}\) and \(A^{t}_{S^{\prime}}\) are both \(\tau\)-expansive and \(\sigma\)-bounded, then for \(\Delta_{t}=\left\|\theta_{t}-\theta^{\prime}_{t}\right\|\), we have:_
* _If_ \(A^{t}_{S}=A^{t}_{S^{\prime}}\) _then_ \(\Delta_{t}\leq\tau\Delta_{t-1}\)_._
* _If_ \(A^{t}_{S}\neq A^{t}_{S^{\prime}}\) _then_ \(\Delta_{t}\leq\Delta_{t-1}+2\sigma\)._2 Footnote 2: In the work of [7], this inequality is written as \(\Delta_{t}\leq\min(1,\tau)\Delta_{t-1}+2\sigma\), which is at most \(\Delta_{t-1}+2\sigma\); the latter is all we need in the proofs of the theorems.
We state the proof of Lemma 5.2 in Appendix A. In Subsection 5.1, we discuss the uniform stability of Adam to upper-bound the generalization error of a DNN trained by it. Subsequently, in Subsection 5.2, we state separate theorems for the uniform stability of AdamW and the generalization error, because AdamW exploits decoupled weight decay and its parameter update statement differs from Adam's.
### Adam Optimizer
Let \(\ell(f^{\theta};B)\) represent the computation of a loss function on an arbitrary mini-batch, \(B=\{(\mathrm{x}_{i},\mathrm{y}_{i})\}_{i=1}^{b}\), which we use at each iteration to update the parameters in order to minimize \(R_{emp}(f^{\theta})\):
\[\ell(f^{\theta};B)=\frac{1}{b}\sum_{i=1}^{b}\ell(f^{\theta}(\mathrm{x}_{i}), \mathrm{y}_{i}),\]
in which \(\theta\) is the parameters and \(b\) is the batch size. Let \(g(\theta)=\nabla_{\theta}\ell(f^{\theta};B)\) where \(\nabla_{\theta}\) is the gradient. For \(t\geq 1\) suppose that \(m_{t}\), \(v_{t}\) are estimates of the first and second moments respectively:
\[m_{t} =\beta_{1}\cdot m_{t-1}+(1-\beta_{1})\cdot g(\theta_{t-1});\ m_{0 }=0, \tag{4}\] \[v_{t} =\beta_{2}\cdot v_{t-1}+(1-\beta_{2})\cdot g^{2}(\theta_{t-1});\ v _{0}=0, \tag{5}\]
where \(\beta_{1},\beta_{2}\in(0,1)\) are exponential decay rates and the multiply operation is element-wise. Let \(\widehat{m}_{t}=m_{t}/(1-\beta_{1}^{t})\) and \(\widehat{v}_{t}=v_{t}/(1-\beta_{2}^{t})\) be the bias-corrected estimates; Adam computes the parameters update using \(\widehat{m}_{t}\) adapted by \(\widehat{v}_{t}\):
\[\theta_{t}=\theta_{t-1}-\eta\cdot\frac{\widehat{m}_{t}}{(\sqrt{\widehat{v}_{t}} +\epsilon)},\]
where \(\eta\) is the learning rate and \(\epsilon=10^{-8}\). Based on what we discussed so far, to evaluate the uniform stability of Adam, we need to formulate its update rule. Given \(\beta_{1},\beta_{2}\in(0,1)\) for each \(1\leq t\leq T\) let
\[\hat{M}(m_{t-1},\theta) =\frac{\beta_{1}\cdot m_{t-1}+(1-\beta_{1})\cdot g(\theta)}{1- \beta_{1}^{t}}, \tag{6}\] \[\hat{V}(v_{t-1},\theta) =\frac{\beta_{2}\cdot v_{t-1}+(1-\beta_{2})\cdot g^{2}(\theta)}{ 1-\beta_{2}^{t}}, \tag{7}\]
where \(m_{t-1}\) and \(v_{t-1}\) are the biased estimates for the first and second moments of the gradient at the previous step respectively as we explained in the equations (4) and (5). Adam's update rule is obtained as follows:
\[A^{t}(\theta)=\theta-\eta\cdot\left(\frac{\hat{M}(m_{t-1},\theta)}{\sqrt{\hat {V}(v_{t-1},\theta)}+\epsilon}\right), \tag{8}\]
where \(\eta\) is the learning rate and the division operation is element-wise. The sketch below makes this update rule concrete; we then state a lemma used in the proof of Theorem 5.4.
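The following plain-NumPy sketch implements one application of the update rule \(A^{t}\) from the equations (4)-(8), carrying the moment state \((m,v)\) explicitly:

```python
import numpy as np

def adam_step(theta, m, v, grad, t, eta, beta1=0.9, beta2=0.999, eps=1e-8):
    """One application of Adam's update rule A^t, Eqs. (4)-(8)."""
    m = beta1 * m + (1.0 - beta1) * grad       # first-moment estimate, Eq. (4)
    v = beta2 * v + (1.0 - beta2) * grad**2    # second-moment estimate, Eq. (5)
    m_hat = m / (1.0 - beta1**t)               # bias correction, Eq. (6)
    v_hat = v / (1.0 - beta2**t)               # bias correction, Eq. (7)
    theta = theta - eta * m_hat / (np.sqrt(v_hat) + eps)  # update, Eq. (8)
    return theta, m, v
```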
**Lemma 5.3**.: _Let \(m_{t-1}=\beta_{1}\cdot m_{t-2}+(1-\beta_{1})\cdot g(\theta_{t-2})\) such that \(\beta_{1}\in(0,1)\) is constant and \(m_{0}=0\). Let \(\ell(\hat{\mathrm{y}},\mathrm{y})\) be \(\gamma\)-Lipschitz. Then for all \(t\geq 1\) and \(\theta\in H\), we have \(\left\|\hat{M}(m_{t-1},\theta)\right\|\leq\gamma\)._
The proof of Lemma 5.3 is available in Appendix A. Now we can state the theorems which link the generalization error with the loss function properties. In Theorem 5.4 we assess the stability measures including the uniform stability and in Theorem 5.5, we drive an upper bound for the generalization error of a DNN trained by Adam.
**Theorem 5.4**.: _Assume Adam is executed for \(T\) iterations with a learning rate \(\eta\) and batch size \(b\) to minimize the empirical risk in order to obtain \(f_{B_{S},R}\). Let \(\ell(\hat{\mathrm{y}},\mathrm{y})\) be convex and \(\gamma\)-Lipschitz. Then, Adam is \(\beta\)-uniformly stable with regard to the loss function \(\ell\), and for each \((\mathrm{x},\mathrm{y})\), \(\ell(f_{B_{S},R}(\mathrm{x}),\mathrm{y})\) holds the \(\rho\)-BDC with respect to \(R\). Consequently, we have_
\[\beta\leq\frac{2\eta}{c}\cdot\frac{bT\gamma^{2}}{N},\quad\rho\leq\frac{8\eta}{ c}\cdot\left(\frac{b\gamma}{N}\right)^{2},\]
_in which \(c\in(0,1)\) is a constant number and \(N\) is the size of the training set._
**Proof.** Consider Adam's update rule \(A^{t}(.)\) in the equation (8). In order to prove that \(A^{t}(.)\) satisfies the conditions of Lemma 5.2, the \(\sigma\)-boundedness and \(\tau\)-expansiveness of \(A^{t}(.)\) need to be evaluated. From the formula (2), we have:
\[\left\|\theta-A^{t}(\theta)\right\|=\left\|\eta\cdot\left(\frac{\hat{M}(m_{t- 1},\theta)}{\sqrt{\hat{V}(v_{t-1},\theta)}+\epsilon}\right)\right\|\]
where \(m_{t-1}\) and \(v_{t-1}\) are the biased estimates for \(\mathbb{E}\left[g\right]\) and \(\mathbb{E}\left[g^{2}\right]\geq 0\) in the \(t\)-th step respectively. Therefore:
\[\left\|\eta\cdot\left(\frac{\hat{M}(m_{t-1},\theta)}{\sqrt{\hat {V}(v_{t-1},\theta)}+\epsilon}\right)\right\| \leq\eta\cdot\left\|\frac{\hat{M}(m_{t-1},\theta)}{\epsilon}\right\| \tag{9}\] \[\leq\frac{\eta\gamma}{\epsilon}. \tag{10}\]
Because \(\epsilon>0\) and \(\hat{V}(v_{t-1},\theta)\geq 0\), we deduce the inequality (9). In the inequality (10), Lemma 5.3 has been applied, which implies that \(A^{t}(.)\) is \(\sigma\)-bounded with \(\sigma\leq\frac{\eta\gamma}{\epsilon}\). Now, we check the \(\tau\)-expansiveness condition: we know that for all \(\theta\in H\), \(\frac{\hat{M}(m_{t-1},\theta)}{\sqrt{\hat{V}(v_{t-1},\theta)}}\simeq\pm 1\) because \(|\mathbb{E}[g]|/\sqrt{\mathbb{E}[g^{2}]}\leq 1\). On the other hand, \(\ell(\hat{\mathrm{y}},\mathrm{y})\) is convex. Thus, for two updates of network parameters \(\theta_{t-1}\) and \(\theta_{t-1}^{\prime}\) in an arbitrary iteration \(t\) with the same initialization,
by choosing a sufficiently small learning rate, the two vectors \(\frac{\hat{M}(m_{t-1},\theta_{t-1})}{\sqrt{\hat{V}(v_{t-1},\theta_{t-1})}}\) and \(\frac{\hat{M}(m_{t-1},\theta_{t-1}^{\prime})}{\sqrt{\hat{V}(v_{t-1},\theta_{t-1}^{\prime})}}\) are approximately equal. Thus, by substituting \(A^{t}(.)\) into the formula (3), it is concluded that \(A^{t}(.)\) is \(1\)-expansive.
Let \(B_{S}\) and \(B_{S^{\prime}}\), having equal size \(k\), be two partitions of training sets \(S\) and \(S^{\prime}\) respectively, such that \(B_{S}\) and \(B_{S^{\prime}}\) differ in only one mini-batch. Let \(\theta_{0},\theta_{1},\ldots,\theta_{T}\) and \(\theta_{0}^{\prime},\theta_{1}^{\prime},\ldots,\theta_{T}^{\prime}\) be two parameter updates obtained from training the network by Adam with update rules \(A_{S}^{t}\) and \(A_{S^{\prime}}^{t}\) respectively, where \(A_{S}^{t}\) runs on \(B_{S}\) and \(A_{S^{\prime}}^{t}\) runs on \(B_{S^{\prime}}\) with the same random sequence \(R\) and \(\theta_{0}=\theta_{0}^{\prime}\). Suppose two mini-batches \(B\) and \(B^{\prime}\) are selected for updating the parameters in the \(t\)-th iteration. If \(B=B^{\prime}\), then \(A_{S^{\prime}}^{t}=A_{S}^{t}\); otherwise \(A_{S^{\prime}}^{t}\neq A_{S}^{t}\). \(B=B^{\prime}\) occurs with probability \(1-\frac{1}{k}\) and the opposite occurs with probability \(\frac{1}{k}\). At the beginning of the proof, we demonstrated that \(A^{t}(.)\) (for an arbitrary training set) is \(\sigma\)-bounded and \(1\)-expansive. Let \(\Delta_{t}=\|\theta_{t}-\theta_{t}^{\prime}\|\); from Lemma 5.2, we have:
\[\Delta_{t} \leq(1-\frac{1}{k})\Delta_{t-1}+\frac{1}{k}\left(\Delta_{t-1}+ \frac{2\eta\gamma}{\epsilon}\right)\] \[=\Delta_{t-1}+\frac{1}{k}\cdot\frac{2\eta\gamma}{\epsilon}.\]
We know \(k=\frac{N}{b}\). Therefore, solving the recursive relation gives
\[\Delta_{T}\leq\Delta_{0}+2T\eta\cdot\frac{\gamma}{k\epsilon}=2\eta\cdot\frac {bT\gamma}{N\epsilon}.\]
Let \(\theta_{T,i}\) be the effective parameters of \(\theta_{T}\) on the \(i\)-th neuron of the last layer with \(M\) neurons. The notation \(\langle.,.\rangle\) denotes the inner product, and \(\left[f(i)\right]_{i=1}^{M}\), for an arbitrary function \(f\), denotes the vector \(\left[f(1),f(2),\ldots,f(M)\right]\). Now we proceed to prove Adam's uniform stability. According to Definition 5.1, we have:
\[\mathbb{E}_{R}\left(\left|\ell(f_{B_{S^{\prime}},R}(\mathrm{x}),\mathrm{y})-\ell(f_{B_{S},R}(\mathrm{x}),\mathrm{y})\right|\right)\] \[\leq\mathbb{E}_{R}\left(\gamma\left\|f_{B_{S^{\prime}},R}(\mathrm{x})-f_{B_{S},R}(\mathrm{x})\right\|\right)\] \[=\gamma\mathbb{E}_{R}\left(\left\|\left[\langle\theta_{T,i}^{\prime},\mathrm{x}\rangle\right]_{i=1}^{M}-\left[\langle\theta_{T,i},\mathrm{x}\rangle\right]_{i=1}^{M}\right\|\right)\] \[\leq\gamma\mathbb{E}_{R}\left(\|\theta_{T}^{\prime}-\theta_{T}\|\right) \tag{11}\] \[=\gamma\mathbb{E}_{R}\left[\Delta_{T}\right]\] \[\leq 2\eta\cdot\frac{bT\gamma^{2}}{N\epsilon}. \tag{12}\]
In the inequality (11), we assumed \(\|\mathrm{x}\|\leq 1\); this is the re-scaling technique that is common in computer vision. In the last inequality, \(\epsilon\) is a constant between \(0\) and \(1\), which plays the role of the constant \(c\) in the statement of the theorem.
After showing the relation between the uniform stability of Adam and the Lipschitz constant of the loss function, we evaluate the bounded difference condition for the loss function with respect to the random sequence and a fixed training set. Suppose that \(R\) and \(R^{\prime}\) are two random sequences of batch indices to update the parameters in which only the location of two indices has been changed; that is if \(R=(\ldots,i,\ldots,j,\ldots)\) then \(R^{\prime}=(\ldots,j,\ldots,i,\ldots)\). Without loss of generality, assume \(1\leq i\leq\frac{k}{2}\) and \(\frac{k}{2}+1\leq j\leq k\). The probability of selecting two identical batches in the \(t\)-th iteration is \(1-\frac{4}{Tk^{2}}\). Thus, two updates of neural network parameters as \(\theta_{0}^{R},\theta_{1}^{R},\ldots,\theta_{T}^{R}\) and \(\theta_{0}^{R^{\prime}},\theta_{1}^{R^{\prime}},\ldots,\theta_{T}^{R^{\prime}}\) are made with the same initialization, \(\theta_{0}^{R}=\theta_{0}^{R^{\prime}}\). Let \(\Delta_{t}=\left\|\theta_{t}^{R}-\theta_{t}^{R^{\prime}}\right\|\). From Lemma 5.2, we have:
\[\Delta_{T}\leq\frac{8}{Tk^{2}}\cdot\frac{\eta T\gamma}{\epsilon}=\frac{8}{k^{2}} \cdot\frac{\eta\gamma}{\epsilon}.\]
According to Definition 3.4, we have:
\[\left|\ell(f_{B_{S},R^{\prime}}(\mathrm{x}),\mathrm{y})-\ell(f_{B_{S},R}(\mathrm{x}),\mathrm{y})\right|\] \[\leq\gamma\left\|f_{B_{S},R^{\prime}}(\mathrm{x})-f_{B_{S},R}(\mathrm{x})\right\|\] \[=\gamma\left\|\left[\langle\theta_{T,i}^{R^{\prime}},\mathrm{x}\rangle\right]_{i=1}^{M}-\left[\langle\theta_{T,i}^{R},\mathrm{x}\rangle\right]_{i=1}^{M}\right\|\] \[\leq\gamma\left\|\theta_{T}^{R^{\prime}}-\theta_{T}^{R}\right\| \tag{13}\] \[=\gamma\Delta_{T}\] \[\leq\frac{8}{k^{2}}\cdot\frac{\eta\gamma^{2}}{\epsilon}. \tag{14}\]
The inequality (13) has been obtained similarly to (11). Replacing \(k\) by \(\frac{N}{b}\) in the inequality (14) leads to the inequality in the proposition.
**Theorem 5.5**.: _Let \(\ell(\hat{\mathrm{y}},\mathrm{y})\) with the maximum value of \(L\) be convex and \(\gamma\)-Lipschitz. Assume Adam is run for \(T\) iterations with a learning rate \(\eta\) and batch size \(b\) to obtain \(f_{B_{S},R}\). Then we have the following upper bound for \(E(f_{B_{S},R})\) with probability at least \(1-\delta\):_
\[E(f_{B_{S},R})\leq\frac{2\eta}{c}\left(4\left(\frac{b\gamma}{N}\right)^{2}\sqrt{T\log(2/\delta)}+\frac{bT\gamma^{2}}{N}\left(1+\sqrt{2N\log(2/\delta)}\right)\right)+L\sqrt{\frac{\log(2/\delta)}{2N}}, \tag{15}\]
_in which \(c\in(0,1)\) is a constant number and \(N\) is the size of the training set._
**Proof.** In the work of [5], an upper bound for the generalization error of the output model trained by any optimization algorithm \(A_{opt}\) is established with probability at least \(1-\delta\), under the condition that \(A_{opt}\) satisfies the uniform stability measure with bound \(\beta\) and, for each \((\mathrm{x},\mathrm{y})\), \(\ell(f_{B_{S},R}(\mathrm{x}),\mathrm{y})\) holds the \(\rho\)-BDC with regard to \(R\)3:
Footnote 3: In the assumptions of the main theorem in the work of [5], it has been stated that the model trained by stochastic gradient descent, but by studying the proof, we realize that their argument can be extended to any iterative algorithm that is \(\beta\)-uniformly stable because, in their proof, the upper bound has been derived independently of the update rule of stochastic gradient descent. The proof is available at [http://proceedings.mlr.press/v139/akbari21a/akbari21a-supp.pdf](http://proceedings.mlr.press/v139/akbari21a/akbari21a-supp.pdf).
\[E(f_{B_{S},R})\leq\rho\sqrt{T\log(2/\delta)}+\beta(1+\sqrt{2N\log(2/\delta)}) +L\sqrt{\frac{\log(2/\delta)}{2N}}. \tag{16}\]
By combining Theorem 5.4 and the inequality (16), we have the following upper bound with probability \(1-\delta\):
\[E(f_{B_{S},R})\leq\frac{2\eta}{c}\left(4\left(\frac{b\gamma}{N}\right)^{2}\sqrt{T\log(2/\delta)}+\frac{bT\gamma^{2}}{N}\left(1+\sqrt{2N\log(2/\delta)}\right)\right)+L\sqrt{\frac{\log(2/\delta)}{2N}}, \tag{17}\]
where \(c\in(0,1)\) is a constant number.
\(\Box\)
Theorem 5.5 shows how the generalization error bound of deep learning models trained by Adam depends on the Lipschitz constant \(\gamma\) and the maximum value \(L\). Furthermore, the inequality (15) implies the sensitivity of the generalization error to the batch size: when the batch size grows, \(E(f_{B_{S},R})\) increases. On the other hand, from the basics of machine learning, we know that if the batch size is too small, the parameter updates are very noisy. Thus, an appropriate value should be chosen for the batch size according to the training set size.
As we mentioned in Section 4, for the KL and GJM losses we have \(\gamma_{GJM}\leq\gamma_{KL}\) and \(L_{GJM}\leq L_{KL}\) [5]. Hence, following Theorem 5.5 we have the following corollary:
**Corollary 5.6**.: _Let \(f^{KL}_{B_{S},R}\) and \(f^{GJM}_{B_{S},R}\) be the output models trained by Adam optimizer using the KL and GJM loss functions respectively and the partition \(B_{S}\) obtained from the training set \(S\). We have_
\[E(f^{GJM}_{B_{S},R})\leq E(f^{KL}_{B_{S},R}).\]
**Proof.** We know if \(\alpha=0.5\), then the Lipschitz constant and the maximum value of GJM are less than the Lipschitz constant and the maximum value of KL respectively [5]. So, under the same settings for hyper-parameters of Adam and the same initialization, from Theorem 5.5, we have:
\[E(f^{GJM}_{B_{S},R})\leq E(f^{KL}_{B_{S},R}).\]
\(\Box\)
### AdamW Optimizer
The objective of regularization techniques is to control the domain of the network parameters in order to prevent the over-fitting issue. \(L_{2}\)-regularization, which exploits the \(L_{2}\) norm of the parameter vector, is more practical than \(L_{1}\) because it keeps the loss function differentiable and convex. In the following, we study \(L_{2}\)-regularization and note its effect on SGD and Adam. The lack of a significant effect of this technique on Adam led to AdamW [8].
Let \(\ell^{reg}(f^{\theta};B)\) be a regularized loss function computed on a mini-batch, \(B=\{(\mathrm{x}_{i},\mathrm{y}_{i})\}_{i=1}^{b}\):
\[\ell^{reg}(f^{\theta};B)=\frac{1}{b}\left(\sum_{i=1}^{b}\ell(f^{\theta}(\mathrm{ x}_{i}),\mathrm{y}_{i})+\frac{\lambda}{2}\left\|\theta\right\|^{2}\right), \tag{18}\]
where \(\left\|.\right\|\) is the \(L_{2}\) norm, \(\lambda\in\mathbb{R}^{+}\) is the weight decay and \(b\) is the batch size. According to the equation (18), to compute the parameters update in SGD, we have:
\[\theta_{t}=\left(1-\frac{\eta\lambda}{b}\right)\theta_{t-1}-\frac{\eta}{b} \sum_{i=1}^{b}\nabla_{\theta}\ell(f^{\theta}(\mathrm{x}_{i}),\mathrm{y}_{i}).\]
In SGD, minimizing the regularized loss function can improve the generalization of the output model. However, this technique cannot be effective in Adam, because Adam uses adaptive gradients to update the parameters [8]. In AdamW, the weight decay term is decoupled from the gradient-based optimization step. Let \(\widehat{m}_{t}\) and \(\widehat{v}_{t}\) denote the bias-corrected estimates introduced in Subsection 5.1. The parameters update is computed as follows:
\[\theta_{t}=\theta_{t-1}-\alpha_{t}\left(\eta\cdot\frac{\widehat{m}_{t}}{(\sqrt{\widehat{v}_{t}}+\epsilon)}+\lambda\theta_{t-1}\right) \tag{19}\]
where \(\alpha_{t}\) is the schedule multiplier. The equation (19) exhibits that AdamW updates the parameters in a different way than Adam. Hence, we need to state theorems specific to AdamW for the stability and the generalization error. Consider \(\hat{M}(m_{t-1},\theta)\) and \(\hat{V}(v_{t-1},\theta)\) in the equations (6) and (7). According to the parameter update statement of AdamW in the equation (19), AdamW's update rule is defined as
\[A_{W}^{t}(\theta)=\theta-\alpha_{t}\left(\eta\cdot\frac{\hat{M}(m_{t-1}, \theta)}{\sqrt{\hat{V}(v_{t-1},\theta)}+\epsilon}+\lambda\theta\right). \tag{20}\]
where \(0<\alpha_{t}\lambda<1\), because otherwise the update occurs in a wrong direction, i.e., it moves away from the minimum. Consider the set of all possible values for the network parameters, \(H\subset\mathbb{R}^{K}\). Without loss of generality we can assume \(H\) is bounded 5. Let \(\left\|\theta\right\|_{\mathrm{sup}}=\sup_{\theta\in H}\left\|\theta\right\|\).
Footnote 5: We know the number of iterations \(T\) is finite. Therefore, the set of values visited by the parameters during training is finite. So we can assume the set of all possible values is an infinite but bounded superset of the visited values.
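The contrast between the equation (19) and Adam with \(L_{2}\)-regularization can be made explicit in code. The sketch below assumes the bias-corrected moments \(\widehat{m}_{t}\), \(\widehat{v}_{t}\) have already been computed as in the Adam sketch of Subsection 5.1:

```python
import numpy as np

def adamw_step(theta, m_hat, v_hat, alpha_t, eta, lam, eps=1e-8):
    """Decoupled weight decay, Eq. (19): the term lam * theta is applied
    directly to the parameters instead of being added to the gradient,
    so it is not rescaled by the adaptive factor sqrt(v_hat) + eps."""
    return theta - alpha_t * (eta * m_hat / (np.sqrt(v_hat) + eps) + lam * theta)
```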
**Theorem 5.7**.: _Assume AdamW is executed for \(T\) iterations with a learning rate \(\eta\), batch size \(b\), weight decay \(\lambda\), and schedule multiplier \(\alpha_{t}\) to minimize the empirical risk in order to obtain \(f_{B_{S},R}\). Let \(\ell(\hat{\mathrm{y}},\mathrm{y})\) be convex and \(\gamma\)-Lipschitz. Then, AdamW is \(\beta\)-uniformly stable with regard to the loss function \(\ell\), and for each \((\mathrm{x},\mathrm{y})\), \(\ell(f_{B_{S},R}(\mathrm{x}),\mathrm{y})\) holds the \(\rho\)-BDC with respect to \(R\). Consequently, we have_
\[\beta\leq\frac{2bT}{N}\sum_{t=1}^{T}\alpha_{t}\left(\frac{\eta\gamma^{2}}{c}+ \gamma\lambda\left\|\theta\right\|_{\mathrm{sup}}\right),\quad\rho\leq\frac{8 b^{2}}{N^{2}}\sum_{t=1}^{T}\alpha_{t}\left(\frac{\eta\gamma^{2}}{c}+\gamma \lambda\left\|\theta\right\|_{\mathrm{sup}}\right),\]
_in which \(c\in(0,1)\) is a constant number and \(N\) is the size of the training set._
**Proof.** First, we check the \(\sigma\)-boundedness of \(A_{W}^{t}(\theta)\):
\[\left\|\theta-A_{W}^{t}(\theta)\right\| =\left\|\alpha_{t}\left(\eta\cdot\frac{\hat{M}(m_{t-1},\theta)}{\sqrt{\hat{V}(v_{t-1},\theta)}+\epsilon}+\lambda\theta\right)\right\|\] \[\leq\alpha_{t}\eta\cdot\left\|\frac{\hat{M}(m_{t-1},\theta)}{\epsilon}\right\|+\alpha_{t}\lambda\left\|\theta\right\|\] \[=\alpha_{t}\left(\eta\cdot\frac{\left\|\hat{M}(m_{t-1},\theta)\right\|}{\epsilon}+\lambda\left\|\theta\right\|\right)\] \[\leq\alpha_{t}\left(\frac{\eta\gamma}{\epsilon}+\lambda\left\|\theta\right\|_{\mathrm{sup}}\right). \tag{21}\]
By applying Lemma 5.3, we conclude the inequality (21), which shows that \(A_{W}^{t}(\theta)\) is \(\sigma\)-bounded. Now we evaluate the \(\tau\)-expansiveness of AdamW. According to the formula (3), we have
\[\frac{\left\|A_{W}^{t}(\theta)-A_{W}^{t}(\theta^{\prime})\right\|}{ \left\|\theta-\theta^{\prime}\right\|}\] \[=\frac{\left\|-\alpha_{t}\left(\eta\cdot\frac{\bar{M}(m_{t-1}, \theta)}{\sqrt{\bar{V}(v_{t-1},\theta)+\epsilon}}+\lambda\theta\right)+\alpha_ {t}\left(\eta\cdot\frac{\bar{M}(m_{t-1},\theta^{\prime})}{\sqrt{\bar{V}(v_{t-1 },\theta^{\prime})+\epsilon}}+\lambda\theta^{\prime}\right)+\theta-\theta^{ \prime}\right\|}{\left\|\theta-\theta^{\prime}\right\|}. \tag{22}\]
As said in the proof of Theorem 5.4, for every \(\theta\in H\), we have \(\frac{\bar{M}(m_{t-1},\theta)}{\sqrt{\bar{V}(v_{t-1},\theta)}}\simeq\pm 1\) because \(|\mathbb{E}[g]|/\sqrt{\mathbb{E}[g^{2}]}\leq 1\). Therefore, the equation (22) is written as follows:
\[\frac{\left\|-\alpha_{t}\lambda\theta+\alpha_{t}\lambda\theta^{\prime}+\theta-\theta^{\prime}\right\|}{\left\|\theta-\theta^{\prime}\right\|} =\frac{\left\|\alpha_{t}\lambda(\theta^{\prime}-\theta)+\theta-\theta^{\prime}\right\|}{\left\|\theta-\theta^{\prime}\right\|}\] \[=\frac{\left|1-\alpha_{t}\lambda\right|\left\|\theta-\theta^{\prime}\right\|}{\left\|\theta-\theta^{\prime}\right\|}\] \[=\left|1-\alpha_{t}\lambda\right|<1. \tag{23}\]
The AdamW update rule in the equation (20) implies that \(0<\alpha_{t}\lambda<1\), which yields the inequality (23). With an argument analogous to the proof of Theorem 5.4, i.e., considering update sequences and using Lemma 5.2 to evaluate the uniform stability and the bounded difference condition according to their definitions, we conclude the following inequalities:
\[\beta \leq\frac{2bT}{N}\sum_{t=1}^{T}\alpha_{t}\left(\frac{\eta\gamma^{ 2}}{\epsilon}+\gamma\lambda\left\|\theta\right\|_{\sup}\right),\] \[\rho \leq\frac{8b^{2}}{N^{2}}\sum_{t=1}^{T}\alpha_{t}\left(\frac{\eta \gamma^{2}}{\epsilon}+\gamma\lambda\left\|\theta\right\|_{\sup}\right).\]
**Theorem 5.8**.: _Let \(\ell(\hat{\mathrm{y}},\mathrm{y})\) with the maximum value of \(L\) be convex and \(\gamma\)-Lipschitz. Assume AdamW is run for \(T\) iterations with a learning rate \(\eta\), batch size \(b\), weight decay \(\lambda\), and schedule multiplier \(\alpha_{t}\) to obtain \(f_{B_{S},R}\). Then we have the following upper bound for \(E(f_{B_{S},R})\) with probability at least \(1-\delta\):_
\[E(f_{B_{S},R})\leq\frac{2b}{N}\sum_{t=1}^{T}\alpha_{t}\left(\frac{\eta\gamma^{2}}{c}+\gamma\lambda\left\|\theta\right\|_{\sup}\right)\left(\frac{4b}{N}\sqrt{T\log(2/\delta)}+T\sqrt{2N\log(2/\delta)}\right)+L\sqrt{\frac{\log(2/\delta)}{2N}}, \tag{24}\]
_in which \(c\in(0,1)\) is a constant number and \(N\) is the size of the training set._
**Proof.** By combining the equation (16) and Theorem 5.7 we conclude the proposition.
\(\Box\)
The inequality (24) implies that the generalization error growth of a DNN trained by AdamW is directly related to the Lipschitz constant and the maximum value of a loss function. Following Theorem 5.8, we have the following corollary for the KL and GJM loss functions:
**Corollary 5.9**.: _Let \(f_{B_{S},R}^{KL}\) and \(f_{B_{S},R}^{GJM}\) be the output models trained by AdamW optimizer using the KL and GJM loss functions respectively using the partition \(B_{S}\) obtained from the training set \(S\). We have_
\[E(f_{B_{S},R}^{GJM})\leq E(f_{B_{S},R}^{KL}).\]
**Proof.** The proposition is concluded by Theorem 5.8 and an argument analogous to that of Corollary 5.6.
## 6 Experimental Evaluation
### Datasets
We use \(4\) datasets, including UTKFace [19], AgeDB [20], MegaAge-Asian [21], and FG-NET [22], to evaluate age estimation performance. The UTKFace dataset contains \(23,708\) facial images, providing enough samples of all ages, ranging from \(0\) to \(116\) years old. AgeDB contains \(16,488\) in-the-wild images in the age range from \(0\) to \(100\) years old. MegaAge-Asian has already been split into MegaAge-Train and MegaAge-Test datasets, containing \(40,000\) and \(3,945\) images respectively, belonging to Asian people with age labels in the range from \(1\) to \(69\) years old. The FG-NET dataset contains \(1,002\) facial images in the age range of \(0\) to \(69\) years. This dataset covers variations in pose, age, expression, resolution, and lighting conditions. By collecting the samples from the UTKFace, MegaAge-Train, and AgeDB datasets whose ages are in the range from \(0\) to \(100\) years old, we create a new dataset called UAM, which includes \(80,174\) images. We use UTKFace and UAM as the training sets. FG-NET, MegaAge-Test, and a randomly selected \(10\%\) of AgeDB, called AgeDB-Test, are left as the test sets.
### Settings
All images are pre-processed by the following procedures: face detection and alignment are done by ready-made modules in the OpenCV package. All images are reshaped to the size of \(256\times 256\), and standard data augmentation techniques, including random cropping and horizontal flipping, are carried out during the training phase. We use two neural network architectures, VGG16 [23] and ResNet50 [24], pre-trained on the ImageNet [25] and VGGFace2 [26] datasets respectively, to estimate human age. The VGGFace2 dataset was created with the aim of estimating human pose and age. With the same seed, the last layer of these models is replaced with an M-neuron dense layer with random weights. The last layer of VGG16 is trained on UTKFace for \(5\) epochs and the last layer of ResNet50 is trained on UAM for \(15\) epochs. M is set to \(116\) in the VGG16 model and \(101\) in the ResNet50 model. We train the models via Adam and AdamW with learning rate \(2\times 10^{-5}\) for KL and \(10^{-4}\) for GJM 6. The batch size and AdamW's weight decay are set to \(64\) and \(0.9\) respectively. We set \(\beta_{1}=0.9\) and \(\beta_{2}=0.999\) for both Adam and AdamW, as the authors of [1] and [8] suggested.
Footnote 6: In our experiments, when we set the learning rate to \(2\times 10^{-5}\) for the GJM loss, the ultimate model at the last epoch remained under-fit.
### Evaluation Metrics and Results
As the first observation, we measure the generalization error estimate in the training steps of ResNet50 trained by Adam and AdamW which is defined as
\[\hat{E}(f_{B_{S},R})=|R_{train}(f_{B_{S},R})-R_{val}(f_{B_{S},R})|,\]
where \(f_{B_{S},R}\) is the output model, \(R_{train}(f_{B_{S},R})\), \(R_{val}(f_{B_{S},R})\) are the average of loss values on the training and validation sets respectively. The results of this experiment are shown in Figure 1 and Figure 2. In the first epochs, the models are still under-fit and the loss is far from its minimum; therefore, \(\hat{E}(f_{B_{S},R})\) does not give us critical information about the generalization error, but in the rest of epochs, when the experimental loss of the models approaches its minimum, \(\hat{E}(f_{B_{S},R})\) can represent the generalization error. As can be seen in Figure 0(a) and Figure 0(a), after epoch \(5\) or \(6\) the generalization error estimate of the models trained by Adam and AdamW using the GJM loss function is lower than the models trained using the KL loss.
In addition, we measure the generalization performance in terms of Mean Absolute Error (MAE) and Cumulative Score (CS). Consider the training set \(S\), and the test set \(S_{test}\in(X\times Y)^{D}\). Let \((\mathrm{x}_{k},y_{k})\in S_{test}\) represents a test example where \(y_{k}\in\mathbb{R}\) is the label of \(k\)-th example of the test set. Since we use label distribution learning, for each \((\mathrm{x},\mathrm{y})\in S\), \(\mathrm{y}\in\mathbb{R}^{\mathrm{M}}\) is the probability distribution corresponding to \(\mathrm{x}\). Therefore, in the evaluation phase, the output of the model per the test example \(\mathrm{x}_{k}\) is the predicted probability distribution \(\hat{\mathrm{y}}_{k}=[\hat{y}_{k,1},\hat{y}_{k,2},\ldots,\hat{y}_{k,\mathrm{M}}]\). MAE is defined as \(\frac{1}{D}\sum_{k=1}^{D}|\hat{l}_{k}-l_{k}|\) where \(\hat{l}_{k}\) is the index of the largest element of \(\hat{\mathrm{y}}_{k}\) and \(l_{k}\) is the true label. CS is defined as \(\frac{D_{I}}{D}\times 100\%\) where \(D_{I}\) is the number of test samples such that \(|\hat{l}_{k}-l_{k}|<I\). Commonly, the value of \(I\) is set to 5 [5][27].
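Both metrics are straightforward to compute from the predicted distributions; the sketch below assumes class index \(i\) corresponds to age \(i\), as in the datasets above:

```python
import numpy as np

def mae_and_cs(y_hat_dists, labels, I=5):
    """Mean Absolute Error and Cumulative Score over a test set.

    y_hat_dists: array of shape (D, M) with predicted label distributions.
    labels:      array of shape (D,) with the true ages l_k.
    """
    preds = np.argmax(y_hat_dists, axis=1)  # l_hat_k: index of largest element
    errors = np.abs(preds - labels)
    mae = errors.mean()
    cs = 100.0 * np.mean(errors < I)        # share of samples with |l_hat - l| < I
    return mae, cs
```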
The results are reported in Tables 1-3. The ResNet50 models are more accurate than the VGG16 models because the used version of VGG16 is pre-trained on the ImageNet dataset, which is not suitable for age estimation. Tables 1-3 show that when we train a DNN by Adam or AdamW, the GJM loss performs better than the KL loss. |
2303.03848 | Parareal with a physics-informed neural network as coarse propagator | Parallel-in-time algorithms provide an additional layer of concurrency for
the numerical integration of models based on time-dependent differential
equations. Methods like Parareal, which parallelize across multiple time steps,
rely on a computationally cheap and coarse integrator to propagate information
forward in time, while a parallelizable expensive fine propagator provides
accuracy. Typically, the coarse method is a numerical integrator using lower
resolution, reduced order or a simplified model. Our paper proposes to use a
physics-informed neural network (PINN) instead. We demonstrate for the
Black-Scholes equation, a partial differential equation from computational
finance, that Parareal with a PINN coarse propagator provides better speedup
than a numerical coarse propagator. Training and evaluating a neural network
are both tasks whose computing patterns are well suited for GPUs. By contrast,
mesh-based algorithms with their low computational intensity struggle to
perform well. We show that moving the coarse propagator PINN to a GPU while
running the numerical fine propagator on the CPU further improves Parareal's
single-node performance. This suggests that integrating machine learning
techniques into parallel-in-time integration methods and exploiting their
differences in computing patterns might offer a way to better utilize
heterogeneous architectures. | Abdul Qadir Ibrahim, Sebastian Götschel, Daniel Ruprecht | 2023-03-07T12:30:05Z | http://arxiv.org/abs/2303.03848v2 | # Parareal with a physics-informed neural network as coarse propagator
###### Abstract
Parallel-in-time algorithms provide an additional layer of concurrency for the numerical integration of models based on time-dependent differential equations. Methods like Parareal, which parallelize across multiple time steps, rely on a computationally cheap and coarse integrator to propagate information forward in time, while a parallelizable expensive fine propagator provides accuracy. Typically, the coarse method is a numerical integrator using lower resolution, reduced order or a simplified model. Our paper proposes to use a physics-informed neural network (PINN) instead. We demonstrate for the Black-Scholes equation, a partial differential equation from computational finance, that Parareal with a PINN coarse propagator provides better speedup than a numerical coarse propagator. Training and evaluating a neural network are both tasks whose computing patterns are well suited for GPUs. By contrast, mesh-based algorithms with their low computational intensity struggle to perform well. We show that moving the coarse propagator PINN to a GPU while running the numerical fine propagator on the CPU further improves Parareal's single-node performance. This suggests that integrating machine learning techniques into parallel-in-time integration methods and exploiting their differences in computing patterns might offer a way to better utilize heterogeneous architectures.
Keywords: Parareal, parallel-in-time integration, PINN, machine learning, GPUs, heterogeneous architectures
## 1 Introduction
Models based on differential equations are ubiquitous in science and engineering. High-resolution requirements, often due to the multiscale nature of many problems, typically require that these models are run on high-performance computers
to cope with memory demand and computational cost. Spatial parallelization is already a widely used and effective approach to parallelize numerical algorithms for partial differential equations but, on its own, will not deliver enough concurrency for extreme-scale parallel architectures. Parallel-in-time integration algorithms can help to increase the degree of parallelism in numerical models. Combined space-time parallelization can improve speedup over spatial parallelization alone on hundreds of thousands of cores [24].
Parallel-in-time methods like Parareal [14], PFASST [4] or MGRIT [5] rely on serial coarse level integrators to propagate information forward in time. These coarse propagators constitute an unavoidable serial bottleneck which limits achievable speedup. Therefore, the coarse-level integrators must be as fast as possible. However, these methods are iterative and speedup will also decrease as the number of iterations goes up. A coarse propagator that is too inaccurate, even when computationally cheap, will not provide good speedup because the number of required iterations will be too large. Hence, a good coarse propagator needs to be at least somewhat accurate but also needs to run as fast as possible. This trade-off suggests that using neural networks as coarse propagators could be promising: once trained, they are very fast to evaluate while still providing reasonable accuracy. Furthermore, neural networks are well suited for running on GPUs whereas mesh-based discretizations are harder to run efficiently because of their lower computational intensity. Therefore, algorithms featuring a combination of mesh-based components and neural network components would be well suited to run on heterogeneous systems combining CPUs and GPUs or other accelerators.
Our paper makes three novel contributions. It (i) provides the first study of using a PINN as a coarse propagator in Parareal, (ii) shows that a PINN as a coarse propagator can accelerate Parareal convergence and improve speedup and (iii) illustrates that moving the PINN coarse propagator to a GPU improves speedup further. While we demonstrate our approach for the Black-Scholes equation, a model from computational finance, the idea is transferable to other types of partial differential equations where Parareal was shown to be effective. We only investigate performance on a single node with one GPU. Extending the approach to parallelize in time across multiple nodes and to work in combination with spatial parallelization is left for future work. The code used to generate the results shown in this paper is freely available [19].
## 2 Related Work
Using machine learning (ML) to solve differential equations has become an active field of research. Some papers aim to entirely replace the numerical solver by neural networks [21, 25]. Physics-informed neural networks (PINNs) [20], which use the residual of a partial differential equation (PDE) as well as boundary and initial conditions in the loss function, are used in many applications. This includes a demonstration for the Black-Scholes equation (1), showing that a PINN is capable of accurately pricing a range of options with complex payoffs,
and is significantly faster than traditional numerical methods [23]. However, solving differential equations with ML alone generally does not provide the high accuracy that can be achieved by numerical solvers. This has led to a range of ideas where ML is used as an ingredient of classical numerical methods instead and not as a replacement [9].
Specific to parallel-in-time integration methods, there are two research directions aiming to connect them with machine learning. On the one hand, there are attempts to use ML techniques to improve parallel-in-time algorithms. Our paper falls into this category. Using a neural network as coarse propagator for Parareal has been studied in two previous papers. Yalla and Enquist [26] were the first to explore this approach. They use a neural network with one hidden layer of size 1000 and demonstrate for a high dimensional oscillator that it helps Parareal converge faster compared to a numerical coarse propagator. However, no runtimes or speedups are reported. Agboh et al. [1] use a feed-forward deep neural network as a coarse propagator to integrate an ordinary differential equation modeling responses to a robot arm pushing multiple objects. They also observe that the trained coarse propagator improves Parareal convergence compared to a simplified analytical coarse model. Nguyen and Tsai [17] do not fully replace the numerical coarse propagator but use supervised learning to enhance its accuracy for wave propagation modeling. They observe that this enhances stability and accuracy of Parareal, provided the training data contains sufficiently representative examples. Gorynina et al. [6] study the use of a machine-learned spectral neighbor analysis potential in molecular dynamics simulations with Parareal.
A few papers go the opposite way and adopt ideas from parallel-in-time integration methods to parallelize and accelerate the process of training deep neural networks. Gunther et al. [7] use a nonlinear multi-grid method to improve the training process of a deep residual network. They use MGRIT, a multi-level generalization of Parareal, to obtain layer-parallel training on CPUs, reporting a speedup of up to 8.5 on 128 cores. Kirby et al. [11] extend their approach to multiple GPUs, obtaining further performance gains. In a similar way, Meng et al. [16] use Parareal to generate starting values for a series of PINNs to help with the training process. Motivated by the observation that it becomes expensive to train PINNs that integrate over long time intervals, they concatenate multiple short-time PINNs instead. They use a cheap numerical coarse propagator and a Parareal iteration to connect these PINNs with each PINN inheriting the parameters from its predecessor. While they mention the possibility of using a PINN as coarse propagator, they do not pursue this idea further in their paper. Lorin [15] derives a parallel-in-time variant of neural ODEs to improve training of deep Residual Neural Networks. Finally, Lee et al. [13] use a Parareal-like procedure to train deep neural networks across multiple GPUs.
## 3 Algorithms and Benchmark Problem
The Black-Scholes equation is a widely used model to price options in financial markets [3]. It is based on the assumption that the price of an asset follows a
geometric Brownian motion, so that the log-returns of the asset are normally distributed. Closed form solutions exist for the price of a European call or put option [12], but not for more complex options such as American options or options with multiple underlying assets. To be able to compute numerical errors, we thus focus on the European call option, a financial derivative that gives the buyer the right, but not the obligation, to buy an underlying asset at a predetermined price (the strike price) on or before the expiration date. The price \(V\) of the option can be modeled by
\[f(V)=\frac{\partial V}{\partial t}(S,t)+\frac{1}{2}\sigma^{2}S^{2}\frac{ \partial^{2}V}{\partial S^{2}}(S,t)+rS\frac{\partial V}{\partial S}(S,t)-rV(S, t)=0, \tag{1}\]
where \(S\) denotes the current value of the underlying asset, \(t\) is time, \(r\) denotes the risk-free interest rate (for example, the savings rate at a bank) and \(\sigma\) denotes the volatility of the underlying asset. To fully determine the solution to (1), we impose a final state at expiry time \(t=T\) and two boundary conditions with respect to \(S\), motivated by the behaviour of the option at \(S=0\) and as \(S\to\infty\). For the call option, the expiry time condition is
\[V(T,S)=\max(S-K,0)\text{ for all }S. \tag{2}\]
If the underlying asset becomes worthless, then it will remain worthless, so the option will also be worthless. Thus,
\[V(t,0)=0\text{ for all }t. \tag{3}\]
On the other hand, if \(S\) becomes very large, then the option will almost certainly be exercised, and the exercise price is negligible compared to \(S\). Thus, the option will have essentially the same value as the underlying asset itself and
\[V(t,S)\sim S\text{ as }S\to\infty,\text{ for fixed }t. \tag{4}\]
For the European call option, we consider the time interval from \(t=0\) to \(T=1\) and an artificial bound for the asset of \(S=5000\).
### Parareal
Parareal is an iterative algorithm to solve an initial value problem of the form
\[V^{\prime}(t)=\phi(V(t)),\ t\in[0,T],\ V(0)=V_{0}, \tag{5}\]
where in our case the right hand side function \(\phi\) stems from the discretization of the spatial derivatives in (1). Note that the coefficients in (1) do not depend on time, so we can restrict our exposition to the autonomous case. Decompose the time domain \([0,T]\) into \(N\) time-slices \([T^{n},T^{n+1}]\), \(n=0,\ldots,N-1\). Denote as \(\mathcal{F}\) a numerical time stepping algorithm with constant step size \(\delta t\) and high accuracy and as
\[V_{n+1}=\mathcal{F}(V_{n}) \tag{6}\]
the result of integrating from some initial value \(V_{n}\) at the start time \(T^{n}\) of a time slice until the end time \(T^{n+1}\). Classical time stepping corresponds to evaluating (6) for \(n=0,\ldots,N-1\) in serial. Parareal replaces this serial procedure with the iteration
\[V_{n+1}^{k+1}=\mathcal{G}(V_{n}^{k+1})+\mathcal{F}(V_{n}^{k})-\mathcal{G}(V_{n} ^{k}) \tag{7}\]
where \(k=1,\ldots,K\) counts the iterations. The key in (7) is that the computationally expensive evaluation of \(\mathcal{F}\) can be parallelized across all \(N\) time slices. Here, we always assume that \(P=N\) many processes are used and each process holds a single time slice. A visualization of the Parareal workflow as well as pseudocode can be found in the literature [22]. As \(k\to N\), \(V_{n}^{k}\) converges to the same solution generated by serial evaluation of (6). However, to achieve speedup, we require convergence in \(K\ll N\) iterations. An upper bound for speedup achievable with Parareal using \(P\) processors to integrate over \(N=P\) time slices is given by
\[s_{\mathrm{bound}}(P)=\frac{1}{\left(1+\frac{K}{P}\right)\frac{c_{\mathrm{c}}} {c_{\mathrm{f}}}+\frac{K}{P}} \tag{8}\]
where \(K\) is the number of iterations, \(c_{\mathrm{c}}\) the runtime of \(\mathcal{G}\) and \(c_{\mathrm{f}}\) the runtime of \(\mathcal{F}\)[22]. Since (8) neglects overhead and communication, it is an upper bound on achievable speedups and measured speedups will be lower.
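To make the structure of iteration (7) concrete, the following minimal Python sketch emulates Parareal serially; the propagators `fine` and `coarse` are stand-ins for \(\mathcal{F}\) and \(\mathcal{G}\), and in an actual implementation the fine solves inside each iteration would be distributed over \(P=N\) MPI ranks (e.g., via mpi4py). The helper `speedup_bound` evaluates (8); all names and example numbers are ours, not taken from the reference implementation.

```python
def parareal(V0, fine, coarse, N, K):
    """Serial emulation of the Parareal iteration (7).

    fine(V) and coarse(V) integrate a state V across one time slice;
    in a parallel code the fine solves in the k-loop run concurrently
    on P = N processes.
    """
    # Initial guess: one serial sweep with the cheap coarse propagator.
    V = [V0]
    for n in range(N):
        V.append(coarse(V[n]))

    for _ in range(K):
        F = [fine(V[n]) for n in range(N)]  # parallelizable part
        V_new = [V0]
        for n in range(N):  # serial correction sweep (the bottleneck)
            V_new.append(coarse(V_new[n]) + F[n] - coarse(V[n]))
        V = V_new
    return V


def speedup_bound(P, K, c_coarse, c_fine):
    """Upper bound (8) on achievable Parareal speedup with N = P slices."""
    return 1.0 / ((1.0 + K / P) * (c_coarse / c_fine) + K / P)
```

With purely illustrative runtimes, `speedup_bound(16, 3, 1.21, 50.0)` shows how shrinking \(c_{\mathrm{c}}\) relative to \(c_{\mathrm{f}}\) pushes the bound toward its limit \(P/K\).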
### Numerical solution of the Black-Scholes equation
We approximate the spatial derivatives in (1) by second order centered finite differences on an equidistant mesh
\[0=S_{0}<S_{1}<\ldots<S_{N}=L \tag{9}\]
with \(S_{i+1}-S_{i}=\Delta S\) for \(i=0,\ldots,N-1\). For the inner nodes, we obtain the semi-discrete initial value problem
\[V_{j}^{{}^{\prime}}(t)=-\frac{1}{2}\sigma^{2}S_{j}^{2}\frac{V_{j+1}-2V_{j}+V_{ j-1}}{\Delta S^{2}}-rS_{j}\frac{V_{j+1}-V_{j-1}}{2\Delta S}+rV_{j} \tag{10}\]
with \(j=1,\ldots,N-1\). This is complemented by the boundary condition \(V_{0}=0\) for a zero asset value. We also impose the asymptotic boundary condition (4) at finite distance \(L\) so that \(V_{N}=L\). In time, we use a second order Crank-Nicolson method for \(\mathcal{F}\) and a first order implicit Euler method as numerical \(\mathcal{G}\). Since we have a final condition instead of an initial condition, we start at time \(T=1\) and solve the problem backwards. We use 200 steps for the fine method and 100 steps for the coarse.
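A compact NumPy sketch of the spatial discretization (10) and of the implicit Euler coarse step is shown below. The dense matrix assembly and the omission of the boundary contribution from \(V_{0}\) and \(V_{N}\) are simplifications of ours; a practical solver would use sparse matrices and add the Dirichlet data as a source term.

```python
import numpy as np

def black_scholes_matrix(S, r, sigma):
    """Matrix A of the semi-discrete system V' = A V from (10),
    assembled on the inner nodes S_1, ..., S_{N-1} only."""
    dS = S[1] - S[0]
    Sj = S[1:-1]
    lower = -0.5 * sigma**2 * Sj**2 / dS**2 + r * Sj / (2 * dS)  # V_{j-1}
    diag  =        sigma**2 * Sj**2 / dS**2 + r                  # V_j
    upper = -0.5 * sigma**2 * Sj**2 / dS**2 - r * Sj / (2 * dS)  # V_{j+1}
    return np.diag(diag) + np.diag(lower[1:], -1) + np.diag(upper[:-1], 1)

def implicit_euler_step(V, A, dt):
    """One first-order implicit Euler step for V' = A V; to step
    backwards in time from t to t - dt, as done here, pass a negative dt."""
    return np.linalg.solve(np.eye(len(V)) - dt * A, V)
```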
### Physics Informed Neural Network (PINN)
The PINN we use as coarse propagator gets a time slice \([t_{\mathrm{start}},t_{\mathrm{end}}]\subset[0,T]\), the asset price \(V\) at \(t_{\mathrm{start}}\) and stock values \(S\), and outputs the predicted state
of the asset price \(\tilde{V}\) at \(t_{\rm end}\). To train it, we define three sets of collocation points in time and stock price: \((S_{i},t_{i}),i=1,\ldots,N_{f}\) in the interior of the space-time domain for evaluating the residual \(f(V)\) of the Black-Scholes equation (1), \((S_{i},t_{i}),i=1,\ldots,N_{b}\) collocation points on the boundary to evaluate (3) and (4), and \(S_{i},i=1,\ldots,N_{\rm exp}\) for the final state condition (2). The loss function to be minimized is given by
\[{\rm MSE}_{\rm total}={\rm MSE}_{f}+{\rm MSE}_{\rm exp}+{\rm MSE}_{b}, \tag{11}\]
consisting of a term to minimize the PDE residual \(f(V)\)
\[{\rm MSE}_{\rm f}=\frac{1}{N_{f}}\sum_{i=1}^{N_{f}}|f(\tilde{V}(t_{i},S_{i}))| ^{2}, \tag{12}\]
the boundary loss term
\[{\rm MSE}_{\rm b}=\frac{1}{N_{b}}\sum_{i=1}^{N_{b}}\left|\tilde{V}(t_{i},S_{i} )-V(t_{i},S_{i})\right|^{2}, \tag{13}\]
and the loss at expiration
\[{\rm MSE}_{\rm exp}=\frac{1}{N_{\rm exp}}\sum_{i=1}^{N_{\rm exp}}\left|\tilde{V}(T,S_{i})-\max(S_{i}-K,0)\right|^{2}. \tag{14}\]
For our setup, we randomly generate \(N_{f}=100,000\) collocation points within the domain \([0,5000]\times[0,1]\), \(N_{b}=10,000\) collocation points at the boundary \([0,1]\) and \(N_{\rm exp}=10,000\) collocation points to sample the expiration condition over \([0,5000]\). The derivatives that are required to compute the PDE loss are calculated by automatic differentiation [2]. We compute the PDE residual (12) over the points inside the domain, the boundary condition loss (13) over the spatial boundary and the expiration loss (14) over the end points. The sum of the three forms the total loss function (11). Figure 1 shows a subset of the generated collocation points to illustrate the approach.
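In PyTorch, the loss assembly can be sketched as follows. This is a simplified, vanilla-PINN version mapping \((t,S)\mapsto V\); the network described here additionally receives the slice endpoints and the input state, which we omit, and only the lower boundary condition \(V(t,0)=0\) is shown.

```python
import torch

def pinn_loss(model, interior, boundary, expiry, r, sigma, K_strike):
    """Total loss (11) = (12) + (13) + (14) for a network model(t, S) -> V.
    interior/boundary/expiry are (n, 2) tensors of (t, S) collocation points."""
    # PDE residual loss (12): derivatives by automatic differentiation.
    t = interior[:, :1].clone().requires_grad_()
    S = interior[:, 1:].clone().requires_grad_()
    V = model(torch.cat([t, S], dim=1))
    V_t = torch.autograd.grad(V.sum(), t, create_graph=True)[0]
    V_S = torch.autograd.grad(V.sum(), S, create_graph=True)[0]
    V_SS = torch.autograd.grad(V_S.sum(), S, create_graph=True)[0]
    residual = V_t + 0.5 * sigma**2 * S**2 * V_SS + r * S * V_S - r * V
    mse_f = (residual ** 2).mean()

    # Boundary loss (13), here only the lower boundary V(t, 0) = 0.
    mse_b = (model(boundary) ** 2).mean()

    # Expiration loss (14): match the payoff max(S - K, 0) at t = T.
    payoff = torch.clamp(expiry[:, 1:] - K_strike, min=0.0)
    mse_exp = ((model(expiry) - payoff) ** 2).mean()

    return mse_f + mse_b + mse_exp
```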
The neural network consists of 10 fully connected layers with 50 neurons each and was implemented using PyTorch [18]. Figure 2 shows the principle of a PINN, but for a smaller network for the sake of readability. Every linear layer, excluding the output layer, is followed by the ReLU activation function. The weights of the neural network are initialized using Kaiming initialization [8]. We focus here on a proof-of-concept and have not undertaken a systematic effort to optimize the network architecture, but this would be an interesting avenue for future research.
We used the Adam optimizer [10] with a learning rate of \(10^{-2}\) for the initial round of training for 5000 epochs, followed by a second round of training with a learning rate of \(10^{-3}\) for 800 epochs. The training data (collocation points) was shuffled during every epoch to prevent the model from improving predictions based on data order rather than the underlying patterns in the data. Table 1 shows the behavior of the three loss function terms. The total training time for this model was around 30 minutes.
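The two-round schedule can be written, for example, as below; `pinn_loss` and the collocation tensors are the assumed helpers from the previous sketch, and the epoch-wise reshuffling shown here only matters if mini-batches are drawn from the permuted points.

```python
import torch

opt = torch.optim.Adam(model.parameters())
for lr, epochs in [(1e-2, 5000), (1e-3, 800)]:
    for group in opt.param_groups:
        group["lr"] = lr
    for _ in range(epochs):
        perm = torch.randperm(interior.shape[0])  # reshuffle every epoch
        loss = pinn_loss(model, interior[perm], boundary, expiry,
                         r, sigma, K_strike)
        opt.zero_grad()
        loss.backward()
        opt.step()
```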
## 4 Results
The numerical experiments were conducted on OpenSUSE Leap 15.4 running on a 12th Gen Intel Core i9-12900K (24 logical cores) with a base clock speed of 3.2 GHz and a maximum turbo frequency of 5.2 GHz, with 62.6 GiB of RAM and an NVIDIA GeForce RTX 3060/PCIe/SSE2 GPU. Implementations were done using Python 3.10, PyTorch 1.13.1+cu117 and mpi4py 3.1.4, as well as Numba 0.55.1 for the GPU runs.
Convergence.Figure 3 (right) shows that Parareal converges very quickly. Although PINN and NN are slightly more accurate than the numerical coarse propagator, the impact on convergence is small. After one iteration, the iteration error of Parareal is smaller than the discretization error of the fine method. After \(K=3\) iterations, Parareal has reproduced the fine solution up to round-off error. Below, we report runtimes and speedup for \(K=3\). With only a single iteration, the \(K/P\) term in (8) would be less important and reducing the runtime of the coarse propagator would increase overall speedup even more. Therefore, \(K=3\) is the more conservative case, in which switching to a faster coarse propagator yields less improvement.
Generalization.Figure 4 shows how Parareal with a PINN coarse propagator converges if applied to (1) with parameters different from those for which the PINN was trained. As parameters become increasingly different from the training values, the coarse propagator will become less accurate. However, if Parareal converges, it will produce the correct solution since the numerical fine propagator always uses the correct parameters. The combination of Parareal + PINN generalizes fairly well. Even for parameters more than ten times larger than the training values it only requires one additional iteration to converge. While the additional iteration will somewhat reduce achievable speedup as given by (8), the performance results presented below should not be overly sensitive to changes in the model parameters.
Parareal runtimes and speedup.Reported runtimes are measured using the time command in Linux and include the time required for setup, computation and data movement. Table 2 shows the runtime in milliseconds of Parareal using \(P=16\) cores for four different coarse propagator configurations. Shown are averages over five runs as well as the standard deviation. Replacing the numerical coarse propagator with a PINN on a CPU reduces Parareal execution time by a factor of 2.4, increasing to 2.9 if the PINN is run on a GPU. For the numerical coarse propagator, using the GPU offers no performance gain because the resolution
Figure 2: Structure of the PINN. The network takes the time \(t_{\text{start}},t_{\text{end}}\), asset values \(V\) and stock values \(S\) as input and returns the predicted asset values \(\tilde{V}\) at \(t_{\text{end}}\). The loss function encodes the PDE, the expiration condition and the boundary conditions. Figure produced using [https://alexlenail.me/NN-SVG/index.html](https://alexlenail.me/NN-SVG/index.html).
and thus computational intensity is not high enough. The much faster coarse propagator provided by the PINN significantly reduces the serial bottleneck in Parareal and will, as demonstrated below, yield a marked improvement in speedup.
Table 3 shows runtimes for the full Parareal iteration averaged over five runs. The fastest configuration is the one that runs the numerical fine propagator on the CPU and the PINN coarse propagator on the GPU. Executing both fine and coarse propagator on the CPU takes about a factor of three longer. Importantly, moving both to the GPU, while somewhat faster than running all on the CPU, is slower than the mixed version by a factor of about two. The full GPU variant will eventually be faster if the resolution of the fine and coarse
| **Epoch** | **Expiration** | **Boundary** | **Residual** |
| --- | --- | --- | --- |
| 0 | \(9.21\times 10^{2}\) | \(9.21\times 10^{2}\) | \(7.33\times 10^{3}\) |
| 2000 | \(5.58\times 10^{-1}\) | \(3.45\times 10^{-2}\) | \(2.50\times 10^{-2}\) |
| 4000 | \(4.11\times 10^{-2}\) | \(2.34\times 10^{-2}\) | \(5.00\times 10^{-3}\) |
| 5000 | \(5.92\times 10^{-1}\) | \(1.34\times 10^{-2}\) | \(4.22\times 10^{-3}\) |
| 5300 | \(4.19\times 10^{-2}\) | \(3.22\times 10^{-3}\) | \(1.94\times 10^{-4}\) |
| 5500 | \(6.46\times 10^{-4}\) | \(1.96\times 10^{-4}\) | \(5.73\times 10^{-5}\) |
| 5800 | \(2.92\times 10^{-5}\) | \(1.14\times 10^{-5}\) | \(3.19\times 10^{-4}\) |

Table 1: Evolution of the loss function during network training. The three columns show the MSE for the three terms of the loss function related to the end condition (2), the boundary conditions (3) and (4) and the residual (1). After 5000 epochs with learning rate \(10^{-2}\), another 800 epochs of training with a reduced learning rate of \(10^{-3}\) were performed.
| | **Numerical** | **PINN** | Speedup over CPU-Numerical |
| --- | --- | --- | --- |
| CPU | \(3.48\pm 0.056\) | \(1.47\pm 0.073\) | 2.4 |
| GPU | \(3.99\pm 0.651\) | \(1.21\pm 0.041\) | 2.9 |
| Speedup | – | 1.21 | |

Table 2: Runtime \(c_{\text{c}}\) in milliseconds of the coarse propagator \(\mathcal{G}\) averaged over five runs plus/minus standard deviation.
propagator are both extremely high. However, the current resolution already produces an error of around \(10^{-3}\), which will be sufficient in most situations. This illustrates how a combination of numerical method and PINN within Parareal can not only improve performance due to the lower cost of the PINN but also help to better utilize a node that features both CPUs and GPUs or even neural network accelerators. Thus, the different computing patterns of finite difference numerical methods and neural networks can be turned into an advantage.
Figure 5 shows runtimes for Parareal with both a PINN and numerical coarse propagator on a CPU (left) and GPU (right) against the number of cores/time slices \(P\). The numerical fine propagator is always run on the CPU. In both
Figure 3: Normalized \(\ell_{2}\)-error over time of coarse and fine propagator against the analytical solution (left). Normalized \(\ell_{2}\)-error against the serial fine solution versus number of iterations for three different variants of Parareal (right). The black line (squares) is Parareal with a numerical coarse propagator, the green line (diamonds) is Parareal with a neural network as coarse propagator that is trained only on data while the blue line (circles) is Parareal with a PINN as coarse propagator that also uses the terms of the differential equation in the loss function. Parareal uses \(P=16\) time slices in all cases.
| | **CPU-Coarse** | **GPU-Coarse** |
| --- | --- | --- |
| **CPU-Fine** | \(128.48\pm 0.715\) | \(41.24\pm 0.334\) |
| **GPU-Fine** | \(83.25\pm 0.356\) | \(87.45\pm 0.253\) |

Table 3: Runtimes in milliseconds for Parareal averaged over five runs plus/minus standard deviation.
cases, runtimes decrease at a similar rate as the number of time slices/cores \(P\) increases. The numerical coarse propagator is consistently slower than the PINN and the gap is similar on the CPU and GPU.
Finally, Figure 6 shows the speedup (left) and parallel efficiency (right) for Parareal with a numerical, PINN-CPU and PINN-GPU coarse propagator. The speedup bounds (8) are shown as lines. Moving from a numerical coarse propagator to a PINN and moving the PINN from the CPU to a GPU each improves speedup significantly. For the numerical coarse propagator, Parareal achieves a speedup of around \(S(16)\approx 2\). Replacing the numerical integrator with a PINN improves speedup to \(S(16)\approx 3\). Running this PINN on a GPU again improves speedup to \(S(16)\approx 4.5\), more than double what we achieved with the numerical coarse propagator on a CPU. The improvements in speedup translate into increased parallel efficiency, which improves from around \(30\%\) for the numerical coarse propagator to around \(60\%\) for the PINN-GPU coarse method. For smaller numbers of processors, the gains in speedup are less pronounced, because the \(K/P\) term in (8) is more dominant. But gains in parallel efficiency are fairly consistent from \(P=2\) cores to \(P=16\) cores. In summary, this demonstrates that replacing a CPU-run numerical coarse propagator with a GPU-run PINN can greatly improve the performance of Parareal by minimizing the serial bottleneck from the coarse propagator.
## 5 Discussion
Parareal is a parallel-in-time method that iterates between a cheap serial coarse integrator and an expensive, parallelizable fine integrator. To maintain causality, the coarse propagator
Figure 4: Convergence of Parareal for different interest rates \(r\) (left) and volatilities \(\sigma\) (right). In all cases, the coarse propagator is the PINN trained for values of \(r=0.03\) and \(\sigma=0.4\). Even for parameter values more than ten times larger than the ones for which the PINN was trained, Parareal requires only one additional iteration to converge to within machine precision of the fine integrator.
needs to run in serial and therefore constitutes a bottleneck that limits achievable speedup. Mostly, coarse propagators are similar to fine propagators and built using numerical methods but with lower order, lower resolution or, in some cases, models of reduced complexity. We investigate the use of a physics-informed neural network (PINN) instead. The PINN is shown to be slightly more accurate than a numerical coarse propagator but a factor of three faster. Using it does not affect the convergence speed of Parareal but greatly reduces the serial bottleneck from the coarse propagator.
We show that, on a single node with one GPU, a combination of a numerical fine propagator run on a CPU with a PINN coarse propagator run on a GPU provides more than twice the speedup of vanilla Parareal using a numerical coarse propagator run on the CPU. Also, we demonstrate that moving both fine and coarse propagator to the GPU is slower than moving just the PINN coarse method to the GPU and keeping the numerical fine method on the CPU. The reason is that unless the resolution of the fine propagator is extremely high, its low computational intensity means there is little gain from computing on a GPU, so overheads from data movement dominate. By contrast, evaluating PINNs is well suited for GPU computation. Our results demonstrate that using PINNs to build coarse level models for parallel-in-time methods is a promising approach to reduce the serial bottleneck imposed by causality. They also suggest that parallel-in-time methods featuring a combination of numerical algorithms and neural networks might be useful to better utilize heterogeneous systems.
|
2307.12518 | FaFCNN: A General Disease Classification Framework Based on Feature
Fusion Neural Networks | There are two fundamental problems in applying deep learning/machine learning
methods to disease classification tasks, one is the insufficient number and
poor quality of training samples; another one is how to effectively fuse
multiple source features and thus train robust classification models. To
address these problems, inspired by the process of human learning knowledge, we
propose the Feature-aware Fusion Correlation Neural Network (FaFCNN), which
introduces a feature-aware interaction module and a feature alignment module
based on domain adversarial learning. This is a general framework for disease
classification, and FaFCNN improves the way existing methods obtain sample
correlation features. The experimental results show that training using
augmented features obtained by pre-training gradient boosting decision tree
yields more performance gains than random-forest based methods. On the
low-quality dataset with a large amount of missing data in our setup, FaFCNN
obtains a consistently optimal performance compared to competitive baselines.
In addition, extensive experiments demonstrate the robustness of the proposed
method and the effectiveness of each component of the model\footnote{Accepted
in IEEE SMC2023}. | Menglin Kong, Shaojie Zhao, Juan Cheng, Xingquan Li, Ri Su, Muzhou Hou, Cong Cao | 2023-07-24T04:23:08Z | http://arxiv.org/abs/2307.12518v1 | # FaFCNN: A General Disease Classification Framework Based on Feature Fusion Neural Networks
###### Abstract
There are two fundamental problems in applying deep learning/machine learning methods to disease classification tasks, one is the insufficient number and poor quality of training samples; another one is how to effectively fuse multiple source features and thus train robust classification models. To address these problems, inspired by the process of human learning knowledge, we propose the Feature-aware Fusion Correlation Neural Network (FaFCNN), which introduces a feature-aware interaction module and a feature alignment module based on domain adversarial learning. This is a general framework for disease classification, and FaFCNN improves the way existing methods obtain sample correlation features. The experimental results show that training using augmented features obtained by pre-training gradient boosting decision tree yields more performance gains than random-forest based methods. On the low-quality dataset with a large amount of missing data in our setup, FaFCNN obtains a consistently optimal performance compared to competitive baselines. In addition, extensive experiments demonstrate the robustness of the proposed method and the effectiveness of each component of the model1.
Disease classification, Neural networks, Feature fusion, Domain adversarial learning
Footnote 1: Accepted in IEEE SMC2023
## I Introduction
With the ability to use large amounts of precisely labelled data, deep neural network-based approaches have achieved exciting results for tasks such as e-commerce recommendation systems [1], image classification [2], and object detection [3]. However, many tasks in the medical field tend to have insufficient samples and a large amount of missing data, which makes it extremely difficult to develop a general deep-learning framework for disease classification tasks in the medical field [4][5]. In addition, the records corresponding to patients in hospital databases often involve demographic features, clinical features, radiological features, and other diagnostic metrics from multiple sources; there are often scale inconsistencies and information redundancy among these data, and using them together to train machine learning models may compromise the interpretability and robustness of the models [6]. In summary, there are two fundamental problems in applying deep learning/machine learning methods to disease classification tasks: (1) the insufficient number and poor quality of training samples; (2) how to effectively fuse multiple source features and thus train robust classification models.
To address these problems, inspired by the process of human knowledge learning, some researchers [7, 8, 9] propose to augment the feature representation of each sample using the features of similar samples in the training set, i.e., introducing sample correlation features as an extension of the existing features. In [7], the authors adjust the coupled two-stage modelling by directly using the prediction probabilities of a random forest (RF) model as correlation features alongside the original features, mapping the two parts of features separately with a two-tower DNN and finally making predictions based on the summation of the high-level features. However, the prediction probability of the RF model for each sample is not enough to characterize its similarity to other samples in the training set, which impairs the performance of the model. The work in [8] proposes a graph generation method for medical datasets based on sample paths of a pre-trained random forest (RF) model, transforming structured data into graph data and training a graph convolutional network for node classification to achieve accurate differentiation of Crohn's disease and intestinal tuberculosis. Nevertheless, this method relies heavily on artificial thresholds to determine the edges between nodes when constructing graph data, which leads to poor robustness of the framework.
AI has shown powerful potential in the data-driven medical field. Esteva et al. [4] elaborated on the application prospects of deep learning methods in the medical field from four aspects: computer vision, natural language processing, reinforcement learning and generalized deep learning methods. Rauschert et al. [10] briefly summarized the current state of machine learning (ML) and showed that recent advances in deep learning offer great promise in helping physicians achieve accurate diagnoses. For example, Lima et al. proposed FSTBSVM [11], a twin-bounded SVM classifier combined with a scalable feature selection method. Kuma et al. proposed a classification algorithm that combines \(k\)-nearest neighbours with a genetic algorithm [12]; Gu et al. proposed a fuzzy support machine with Gaussian and linear kernels [13]. However, due to the small sample sizes and incomplete data in medical datasets, existing studies typically design specialized classification algorithms for specific disease classification tasks, following the paradigm of feature selection plus machine-learning model prediction. At present, there is no unified and generalized framework for the auxiliary diagnosis of medical diseases.
Considering the advantages and disadvantages of existing methods, we propose the **F**eature-**a**ware **F**usion **C**orrelation **N**eural **N**etwork (FaFCNN), a general framework for disease classification. Specifically, we keep the idea of using an agent model to obtain sample correlation features to realize feature augmentation from existing methods, while the sample correlation features are acquired based on the positions of samples in the leaf nodes of a pre-trained gradient boosting decision tree (GBDT). It is experimentally demonstrated that the augmented features obtained by our method capture more accurate sample correlation than the RF-based augmented features, and further improve the model performance. In order to further improve the performance of disease classification models on low-quality datasets, FaFCNN considers the correlation of features in addition to the correlation of samples, and introduces a feature-aware interaction module (FaIM) and a feature alignment module (FAM) based on domain adversarial learning to achieve more efficient feature fusion and model performance.
The contributions of this paper are listed below:
* We propose FaFCNN, a generic deep learning-based framework for disease classification, and our method obtains a consistently optimal performance compared to competitive baselines.
* We improve the way existing methods obtain sample correlation features, training using augmented features obtained by pre-training GBDT yields more performance gains than RF-based methods.
* In the feature fusion approach, the feature alignment module based on domain adversarial learning introduced by FaFCNN alleviates the performance degradation caused by the naive summation of existing methods.
* We synthesise low-quality datasets by adding different levels of perturbation on four public datasets. Extensive experiments demonstrate the robustness of our proposed method and the effectiveness of each component of the model.
## II Methodology
### _Correlation Features Construction_
In this section, we present the construction of sample similarity features based on pre-trained GBDT. GBDT [14][15] is an integrated model consisting of decision trees that learn in a gradient-boosting manner, where each base classifier (DT) is trained to fit the residuals of the prediction results of the preorder model. The GBDT structure is shown in "Fig. 1".
In a more general scenario, the sample correlation feature construction method can be expressed as follows: first train a GBDT model on the full training data \(D_{train}\) with \(M\) base classifiers and \(k\) leaf nodes per base classifier. A sample \(\mathbf{x}\in\mathbb{R}^{d}\) with \(d\)-dimensional features is fed into the model to get the prediction result, its positions in the leaf nodes are recorded to obtain a \(k\times M\)-dimensional one-hot vector, i.e., \(\mathbf{x_{aug}}=(0,1,0,\cdots,0)\in\mathbb{R}^{k\times M}\), and finally \(\mathbf{x_{aug}}\) is concatenated with the original features \(\mathbf{x}\) to obtain the augmented feature vector \(\tilde{\mathbf{x}}\in\mathbb{R}^{k\times M+d}\). In this way, the generated one-hot vector represents the position of the sample's leaf nodes in the GBDT, i.e., the prediction path of the sample given by each base classifier. From the perspective of partitioning the feature space, the prediction path of each base classifier corresponds to the subregion of the feature space in which the sample point lies in a certain view. The more the prediction paths of two samples intersect, the more often they lie in the same subregions across multiple views, i.e., they have a higher correlation. We can use this correlation to augment the sample features and train a deep neural network with more powerful representation ability.
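A minimal scikit-learn sketch of this construction is given below; `X_train` and `y_train` are assumed arrays, the GBDT hyperparameters are placeholders, and `apply()` is used to read off each sample's leaf index in every base classifier.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.preprocessing import OneHotEncoder

M = 100  # number of base classifiers (placeholder value)
gbdt = GradientBoostingClassifier(n_estimators=M, max_depth=3)
gbdt.fit(X_train, y_train)

# For binary targets there is one tree per boosting stage, so apply()
# yields an (n_samples, M, 1) array of leaf indices.
leaves = gbdt.apply(X_train)[:, :, 0]
encoder = OneHotEncoder(handle_unknown="ignore").fit(leaves)
x_aug = encoder.transform(leaves).toarray()  # leaf-position encoding
x_tilde = np.hstack([X_train, x_aug])        # concatenate with original x
```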
### _Feature-aware Interaction Module_
FaFCNN improves the naive FCNN in two ways: by introducing FaIM to perform correlation-based mapping (i.e., feature interaction) on the sample correlation features \(\mathbf{x_{aug}}\), which yields a finer-grained intermediate representation of \(\mathbf{x_{aug}}\); and by introducing FAM to perform a domain-adversarial-learning-based feature alignment operation on the original features \(\mathbf{x}\) and the intermediate representation of the sample correlation features \(\mathbf{x_{aug}}\).
In this section, we detail how FaIM obtains fine-grained intermediate representations of the sample correlation features \(\mathbf{x_{aug}}\) by modeling feature interactions. As illustrated in
Fig. 1: The diagram of correlation features construction based on GBDT. The red circle represents the root node, the green circles represents the middle node and leaf node of the first base classifier, the blue circle represents the second base classifier, and the triangles represent the position of the sample to be predicted in the leaf node of the base classifier.
the purple box of "Fig. 2", consider a sample with the 5-dimensional sample correlation feature \(\mathbf{x_{aug}}=(1,0,1,0,1)\): we initialize a \(p\)-dimensional vector \(\mathbf{h_{i}}\) for the \(i\)-th dimension of \(\mathbf{x_{aug}}\) to obtain 5 \(p\)-dimensional vectors, each of which characterizes richer semantic information of the corresponding dimension of \(\mathbf{x_{aug}}\). Then the vectors \(\mathbf{h_{i}}\) with \(i\in\left\{i|x_{i}=1\right\}\), corresponding to the non-zero positions, are used to compute second-order interactions between features in an element-wise product manner. Attention Net, a sub-network with a softmax activation function, calculates the weight \(a_{i,j}\) for each feature interaction term \(\mathbf{h_{i}}\odot\mathbf{h_{j}}\) (where \(i,j\in\left\{m|x_{m}=1\right\}\)) in a self-attention manner, and finally uses these weights to aggregate the second-order interaction features into the mapped sample correlation features. More generally, for a sample with \(\mathbf{x_{aug}}\in\mathbb{R}^{k\times M}\), the mapped \(p\)-dimensional vector \(\mathbf{h_{aug}}\) is obtained by the following formula:
\[\mathbf{h_{aug}}=\sum_{i=1}^{k\times M}w_{i}x_{i}+\sum_{i=1}^{k\times M}\sum_ {j=i+1}^{k\times M}a_{ij}\left(\mathbf{h_{i}}\odot\mathbf{h_{j}}\right)x_{i}x _{j} \tag{1}\]
where the weight \(a_{i,j}\) is calculated by the following formula:
\[a_{ij}^{\prime}=\mathbf{q}^{\mathrm{T}}\operatorname{ReLU}\left(\varpi_{attn}\left(\mathbf{h_{i}}\odot\mathbf{h_{j}}\right)x_{i}x_{j}+b_{attn}\right),\qquad a_{ij}=\frac{\exp\left(a_{ij}^{\prime}\right)}{\sum_{(i,j)\in\mathcal{I}_{\mathbf{x_{aug}}}}\exp\left(a_{ij}^{\prime}\right)} \tag{2}\]
where \(\mathbf{q},\varpi_{attn},b_{attn}\) are the parameters of the sub-network and \(\mathcal{I}_{\mathbf{x_{aug}}}\) denotes the index set of the non-zero positions in \(\mathbf{x_{aug}}\).
Due to the huge number of pairwise combinations between features (for example, for \(k\times M\)-dimensional features one needs to compute \(C_{2}^{k\times M}=(k\times M)\times(k\times M-1)/2\) feature interaction terms and their weights), and from the perspective of enhancing the interpretability of the model and reducing computation, we want most of the interaction terms to have a weight equal to zero. This not only highlights the feature combinations with the greatest impact on the prediction but also greatly reduces the computation. Inspired by the L1-norm-based regularization of the linear-model coefficients in LASSO regression [16], FaFCNN adds an L1-norm-based sparse regularization term to the outputs \(a_{i,j}\) of the Attention Net, with the aim of compressing the weights of unimportant feature combinations toward zero and highlighting the important ones. The formula is as follows:
\[L_{sparse}=\sum_{i=1}^{k\times M}\sum_{j=i+1}^{k\times M}\left\|a_{ij}\right\| _{1} \tag{3}\]
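A PyTorch sketch of eqs. (1)-(3) follows; the dense enumeration of all pairs (with \(O(n^{2})\) memory) and the restriction of the softmax to active pairs are implementation choices of ours rather than details given above, and each sample is assumed to activate at least two features.

```python
import torch
import torch.nn as nn

class FaIM(nn.Module):
    """Sketch of the feature-aware interaction module, eqs. (1)-(3)."""

    def __init__(self, n_feat, p):
        super().__init__()
        self.w = nn.Parameter(torch.zeros(n_feat))            # first-order weights
        self.H = nn.Parameter(0.01 * torch.randn(n_feat, p))  # vectors h_i
        self.attn = nn.Linear(p, p)                           # (w_attn, b_attn)
        self.q = nn.Parameter(torch.randn(p))

    def forward(self, x):  # x: (B, n_feat) multi-hot correlation features
        B, n = x.shape
        xh = x.unsqueeze(-1) * self.H                 # (B, n, p)
        pair = xh.unsqueeze(2) * xh.unsqueeze(1)      # (h_i ⊙ h_j) x_i x_j
        iu, ju = torch.triu_indices(n, n, offset=1)
        pair = pair[:, iu, ju, :]                     # (B, n_pairs, p)
        # Attention weights a_ij of eq. (2), softmax over active pairs only.
        logits = torch.relu(self.attn(pair)) @ self.q
        active = (x.unsqueeze(2) * x.unsqueeze(1))[:, iu, ju] > 0
        logits = logits.masked_fill(~active, float("-inf"))
        a = torch.softmax(logits, dim=1)
        h_aug = (x @ self.w).unsqueeze(-1) + (a.unsqueeze(-1) * pair).sum(dim=1)
        l_sparse = a.abs().sum()                      # sparse term, eq. (3)
        return h_aug, l_sparse
```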
### _Feature Alignment Module_
FaFCNN introduces adversarial-learning-based FAM to achieve a smoother feature fusion by aligning the distribution of mapped original features \(\mathbf{x}\) and the sample correlation features \(\mathbf{x_{aug}}\) in the high-dimensional representation space, which is shown in the orange box of Fig.2(a).
Similar to FCNN, a neural network with two hidden layers (DNN in Fig.2(a)) is first used to map the original features to obtain their representations in high-dimensional space \(\mathbf{h}\in\mathbb{R}^{p}\), the formula is as follows:
\[\mathbf{h}=f_{o,2}(\varpi_{o,2}\cdot f_{o,1}\left(\varpi_{o,1}\cdot\mathbf{x }+b_{o,1}\right)+b_{o,2}) \tag{4}\]
where \(\varpi_{o,1},\varpi_{o,2},b_{o,1},b_{o,2}\) are the parameters of the DNN. Since FaFCNN uses different mapping methods for different features (correlation-based aggregation for \(\mathbf{x_{aug}}\), and MLP-based nonlinear mapping for \(\mathbf{x}\)), the distributions of the two parts of features differ considerably in the high-dimensional representation space. Therefore, FaFCNN introduces the FAM module to align the distributions of \(\mathbf{h}\) and \(\mathbf{h_{aug}}\) in the representation space, following the min-max game idea of generative adversarial networks (GAN) [17].
FaFCNN introduces a discriminator \(D\) in FAM to distinguish signals from two partial features with \(\mathbf{h}\) and \(\mathbf{h_{aug}}\) as inputs, and the optimization objective is to enhance the discriminator's ability to distinguish \(\mathbf{h}\) and \(\mathbf{h_{aug}}\), i.e:
\[\theta^{\star}=\operatorname*{argmax}_{\theta}\left\|D(\mathbf{h};\theta)-D( \mathbf{h_{aug}};\theta)\right\|_{1} \tag{5}\]
which is equal to minimizing the following formula:
\[L_{D}=-\sum_{i=1}^{N}\left\|D(\mathbf{h_{i}};\theta)-D(\mathbf{h_{aug,i}}; \theta)\right\|_{1} \tag{6}\]
where \(\theta\) is the parameter of the \(D\), which is a two-layer MLP in FaFCNN. Naturally, we can consider the above-mentioned DNN that maps \(\mathbf{x}\) as the generator \(G\) in GAN, whose optimization goal is to make the distribution of the mapped \(\mathbf{h}\) in the high-dimensional representation space as similar as possible to \(\mathbf{h_{aug}}\), so that the discriminator \(D\) cannot distinguish \(\mathbf{h_{aug}}\) from \(\mathbf{h}\), the formula is as follows:
\[\phi^{\star}=\operatorname*{argmin}_{\phi}\left\|D(G(\mathbf{x};\phi);\theta)- \mathbf{1}\right\|_{1} \tag{7}\]
where \(\phi=\left\{\varpi_{o,1},\varpi_{o,2},b_{o,1},b_{o,2}\right\}\) is the parameter of the DNN, \(\theta^{\star}\) is the optimal parameters of the discriminator in the last iteration, \(\mathbf{1}\in\mathbb{R}^{p}\) is an all-one vector(proxy label of \(\mathbf{h_{aug}}\) in this domain adversarial learning procedure). This optimal \(\phi^{\star}\) can be found by minimizing the following loss function:
\[L_{G}=\sum_{i=1}^{N}\left\|D(G(\mathbf{x_{i}};\phi);\theta)-\mathbf{1}\right\|_ {1} \tag{8}\]
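The two adversarial objectives can be sketched as follows; `D` is assumed to be the two-layer MLP discriminator, `h` the DNN output, and detaching `h` in the discriminator step is our implementation choice so that the discriminator update does not touch the DNN.

```python
import torch

def discriminator_loss(D, h, h_aug):
    # Eq. (6): maximize the L1 gap between the two feature sources.
    return -torch.abs(D(h.detach()) - D(h_aug)).sum()

def generator_loss(D, h):
    # Eq. (8): push D(h) toward the all-ones proxy label of h_aug.
    out = D(h)
    return torch.abs(out - torch.ones_like(out)).sum()
```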
However, GAN training is prone to mode collapse, i.e., the generator produces a very narrow distribution that covers only a single mode of the data distribution. This was also observed in our experiments: during adversarial learning, the DNN tends to consistently map samples with different original features \(\mathbf{x}\) to a limited range in the high-dimensional space. Because this range corresponds to a mode of the \(\mathbf{h_{aug}}\) distribution, it is enough to fool the discriminator \(D\), yet such a single-mode representation carries no information that is useful for classification. To ensure that the aligned \(\mathbf{h}\) maintains diversity during adversarial learning, FaFCNN introduces a supervised signal so that \(\mathbf{h}\) retains a certain amount of information beneficial to classifying the sample, thus ensuring diversity in the distribution of the aligned \(\mathbf{h}\). Denoting the label classifier as \(F(\cdot;\psi)\), the auxiliary loss is as follows:
\[L_{aux}=-\frac{1}{N}\sum_{i=1}^{N}\left(y_{i}\log F(\mathbf{h_{i}};\psi)+(1-y_{i})\log\left(1-F(\mathbf{h_{i}};\psi)\right)\right) \tag{9}\]
### _Optimization_
Based on the above, the training process of FaFCNN consists of two stages. In the first stage, we train the \(\mathbf{h_{aug}}\) obtained from FaIM to make it capable of classifying samples in a supervised manner, while adding a sparse regularization term to the total loss to ensure the sparsity of the weights of feature interaction terms obtained from Attention Net, the formula is as follows:
\[L_{y}=-\frac{1}{N}\sum_{i=1}^{N}\left(y_{i}\log F(\mathbf{h_{i,aug}};\psi)+(1-y_{i})\log\left(1-F(\mathbf{h_{i,aug}};\psi)\right)\right) \tag{10}\]
\[L_{1}=L_{y}+\alpha L_{sparse} \tag{11}\]
In the second stage, we first freeze the network parameters of the already trained FaIM module to ensure that \(\mathbf{h_{aug}}\) does not change during FAM training. Then we alternately optimize the parameters \(\theta\) of the discriminator \(D\) with Equation (6), and the parameters \(\phi\) of the DNN with Equations (8) and (9), combined as follows:
\[L_{2}=L_{aux}+\beta L_{G} \tag{12}\]
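Putting the pieces together, a hedged sketch of the two-stage procedure is given below; `faim`, `dnn`, `D` and the label classifier `clf` are assumed module instances, `x`, `x_aug`, `y` are assumed training tensors, the loss helpers come from the previous sketch, and the learning rate, epoch counts and balance coefficients follow the settings reported in Section III.

```python
import torch

bce = torch.nn.BCELoss()
alpha, beta = 0.05, 0.5  # balance coefficients

# Stage 1: train FaIM and the classifier F with the sparse penalty, eq. (11).
opt1 = torch.optim.Adam(list(faim.parameters()) + list(clf.parameters()), lr=5e-3)
for _ in range(10000):
    h_aug, l_sparse = faim(x_aug)
    loss1 = bce(clf(h_aug), y) + alpha * l_sparse   # L_y + alpha * L_sparse
    opt1.zero_grad()
    loss1.backward()
    opt1.step()

# Stage 2: freeze FaIM, then alternate discriminator and DNN updates, eq. (12).
for p in faim.parameters():
    p.requires_grad_(False)
opt_d = torch.optim.SGD(D.parameters(), lr=5e-3)
opt_g = torch.optim.SGD(list(dnn.parameters()) + list(clf.parameters()), lr=5e-3)
for _ in range(10000):
    h_aug, _ = faim(x_aug)
    opt_d.zero_grad()
    discriminator_loss(D, dnn(x), h_aug).backward()
    opt_d.step()
    h = dnn(x)
    loss2 = bce(clf(h), y) + beta * generator_loss(D, h)  # L_aux + beta * L_G
    opt_g.zero_grad()
    loss2.backward()
    opt_g.step()
```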
## III Experiments
In this section, we validate the effectiveness and robustness of FaFCNN on four publicly available medical datasets with special perturbation treatments.
### _Experimental Setting._
#### Iii-A1 Dataset
To demonstrate the superiority of the proposed method in medical diagnosis, we apply our model to four public medical datasets: the Wisconsin Breast Cancer, Pima Indians Diabetes, Hepatitis and Heart-Statlog datasets.
To simulate the challenge posed by large amounts of missing values in real-world medical datasets, we add different levels of perturbation to the above datasets. Considering a raw dataset with \(N\) samples and \(d\) features, the data preprocessing process is as follows (a code sketch is given after the list):
* First, the missing values in the dataset are processed. The columns with missing values are first identified and the median of the column other than the missing values is calculated and the missing values are replaced by the median.
* Then, the dataset is perturbed randomly. The data are first shuffled by rows, and then the rows of data to be
Fig. 2: The structural diagram of the proposed FaFCNN. (a) The overall framework of FaFCNN. The part in the orange box is FAM, the part in the purple box is FaIM. (b) is an explanation of those graphics that appear previously. (c)The forward calculation process in the Attention Net of the FaIM
perturbed are drawn according to the selected \(\delta\). Each column of these data rows is randomly selected with equal probability (\(1/d\)) and perturbed in the same way as the missing values are processed.
* Finally, the data set is divided into a training set, validation set, and test set in the ratio of 8:1:1.
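A pandas sketch of the three steps above follows; perturbing one randomly chosen column per selected row and reusing the current column median are our reading of the procedure, and the handling of a label column is omitted.

```python
import numpy as np
import pandas as pd

def preprocess(df: pd.DataFrame, delta: float, seed: int = 0):
    rng = np.random.default_rng(seed)
    # Step 1: replace missing values by the column median.
    df = df.fillna(df.median(numeric_only=True))
    # Step 2: shuffle rows, then perturb a fraction delta of them.
    df = df.sample(frac=1.0, random_state=seed).reset_index(drop=True)
    n, d = df.shape
    for i in rng.choice(n, size=int(delta * n), replace=False):
        j = rng.integers(d)                     # each column with prob 1/d
        df.iloc[i, j] = df.iloc[:, j].median()  # same treatment as missing values
    # Step 3: split 8:1:1 into train / validation / test sets.
    i1, i2 = int(0.8 * n), int(0.9 * n)
    return df.iloc[:i1], df.iloc[i1:i2], df.iloc[i2:]
```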
#### Iii-A2 Hyperparameter
In this section, the hyperparameter settings used for training FaFCNN are described. The GBDT was configured with \(k=\mathrm{integer}(d/2)\) estimators (where \(d\) is the number of features of the dataset), a maximum depth of 8, and a minimum of \(M=2\) samples per leaf. The stage 1 training of FaFCNN uses the Adam optimizer with a learning rate of 0.005 for \(T_{1}=10000\) epochs. The stage 2 training of the DNN and the discriminator uses the SGD optimizer with a learning rate of 0.005 for \(T_{2}=10000\) epochs. The dimension \(p\) of the vector \(\mathbf{h}\) is set to 8, and the balance coefficients \(\alpha\) and \(\beta\) are set to 0.05 and 0.5, respectively.
#### Iii-A3 Performance evaluation metrics
We select accuracy, sensitivity and specificity, which are common evaluation metrics in classification tasks, to compare the performance of FaFCNN and the baseline models along three dimensions. The three evaluation indicators are defined as follows:
\[Acc=\frac{\mathcal{M}_{tp}+\mathcal{M}_{tn}}{\mathcal{M}_{tp}+ \mathcal{M}_{fp}+\mathcal{M}_{tn}+\mathcal{M}_{fn}} \tag{13a}\] \[Sensitivity=\frac{\mathcal{M}_{tp}}{\mathcal{M}_{tp}+\mathcal{M}_ {fn}}\] (13b) \[Specificity=\frac{\mathcal{M}_{tn}}{\mathcal{M}_{tn}+\mathcal{M}_ {fp}} \tag{13c}\]
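These indicators follow directly from the confusion-matrix counts, as in the short helper below.

```python
def classification_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity and specificity, eqs. (13a)-(13c)."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return accuracy, sensitivity, specificity
```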
### _Classification results of different classification methods_
Our comparison is performed uniformly on four datasets with a perturbation ratio of \(\delta=0.5\); to ensure the fairness of the comparison, the structure of the DL-based baseline is adjusted so that the number of parameters of the models involved in the comparison remains the same. The experimental results are shown in Table II, and results in the table are the mean values of 10 independent repetitions of the experiment.
As shown in Table II, in the comparison of results from the Wisconsin Breast Cancer dataset, the DL-based methods (DNN, RFG-GCN) do not show better performance in some metrics than the ML-based methods (RF, LR); the FCNN better exploits the sample correlation in the training set to achieve a consistent performance improvement over the DNN. FaFCNN achieves smoother and more effective feature fusion while considering feature correlation and achieves significant improvement in two evaluation metrics compared to FCNN.
ML-based methods perform poorly on the Pima Indians Diabetes dataset; in particular, the sensitivity metric does not exceed 70% at best. DL-based methods achieve improvements in the other two metrics, but the sensitivity metric still cannot exceed 80% (78.1% for FCNN). This indicates that there is a serious class imbalance problem on the Diabetes dataset and the model easily misclassifies some of the positive cases as negative. FaFCNN reaches 91.5% on the sensitivity metric, significantly outperforming FCNN with a p-value of 0.001, while on the other two metrics it significantly outperforms the best baseline with a p-value of 0.005.
On the Hepatitis dataset, the SRLPSO-ELM method achieves the best accuracy (98.7%), but its advantage over FaFCNN (98.6%) is not significant. FSTBSVM reaches 100% in specificity but does not significantly outperform FaFCNN (98.5%), and this advantage comes at the expense of sensitivity (78.6%), whereas FaFCNN achieves the best performance on this metric (98.7%).
On the Heart-Statlog dataset, DL-based methods consistently outperform ML-based methods, and FaFCNN again achieves optimal performance on the three metrics with the same number of parameters and demonstrates significance in terms of accuracy and sensitivity due to the well-designed structure of the network. In summary, our FaFCNN is able to show robust and consistent optimal performance with respect to the baseline models on multiple datasets with 50% of the samples perturbed in a low-quality data setting and with class imbalance problems.
### _Robustness verification of classification results_
To verify that our proposed FaFCNN maintains robustness and acceptable performance in scenarios with large amounts of missing data, we consider a set of perturbation ratios \(\delta\in\{0.5,0.6,0.7,0.8,0.9\}\); for each \(\delta\), 10 experiments are conducted on the Wisconsin Breast Cancer dataset, and the results are shown in "Fig. 3".
As shown in "Fig. 3", we can conclude that as \(\delta\) gradually increases, meaning that the proportion of samples with missing values in the dataset increases, the fluctuation of FaFCNN's performance also gradually increases (the bandwidth on both sides of the performance line increases), but the three evaluation metrics still maintain a high level (the mean value of the worst case also remains above 0.9). Specifically, accuracy does not decrease significantly as \(\delta\) increases, and the mean value of each case remains above 0.93. Sensitivity and Precision show a decreasing trend (when \(\delta\) increases from
Fig. 3: Performance of FaFCNN on the Wisconsin Breast Cancer dataset with different settings of perturbation ratio \(\delta\). The red solid line, blue dashed line, and purple dotted line represent accuracy, sensitivity, and precision, respectively, and the points on the axes represent the mean values of 10 experiments, while the upper and lower bandwidths represent the standard deviation of the experimental results.
0.7 to 0.8) and a performance increase when \(\delta\) rises to 0.9. In summary, FaFCNN has a narrow bandwidth on both sides of the line for different values of \(\delta\), which demonstrates the robustness of the classification results in each setting; its classification performance does not show an obvious decreasing trend as \(\delta\) increases, which demonstrates that the model is strongly robust to noisy data.
### _Ablation Study_
In this section, we focus on validating the effectiveness of the well-designed components in FaFCNN by means of ablation experiments; to ensure fairness of the comparison, each variant of FaFCNN is extended in terms of network structure to ensure consistent overall model parameters. We conduct 10 independent repetition experiments on the Wisconsin Breast Cancer dataset under the setting of \(\delta=0.5\).
#### Iii-D1 Validity of FaIM & FAM
We first validate the effectiveness of the proposed modules and quantify the performance improvement brought by each module through a set of comparison experiments between FaFCNN and its three variants on the Wisconsin Breast Cancer dataset, as shown in "Fig. 4".
As shown in "Fig. 4", compared with the base model without sample correlation features, using the output of a pre-trained RF as augmented features does not improve the performance of the model, but decreases its sensitivity (-2.74%), which indicates that introducing augmented features without a reasonable feature fusion method harms the performance of the model. The introduction of FAM significantly improves the performance of the model, with improvements of 6.3%, 11.4%, and 2.4% on the three metrics respectively, which validates the effectiveness of our proposed feature fusion module based on adversarial learning. FaFCNN uses the FaIM module to replace the w/o FaIM variant's DNN for mapping sample correlation features and achieves further improvements in accuracy (+8.8%), sensitivity (+11.9%) and precision (+12.7%), which verifies that using the predicted paths of samples in the GBDT as augmented features captures more accurate sample correlation than the RF-based approach, and that the feature-interaction-based explicit mapping achieves a finer-grained feature representation than DNN-based implicit feature mapping.
#### Iii-D2 Effectiveness of Sparse Regularization
To verify the effectiveness of the weight sparse regularization term added to the FaIM module, we train FaFCNN and FaFCNN without sparse regularization on the 50% perturbed Wisconsin Breast Cancer dataset, record the output of the Attention Net, i.e., the weights of the feature interactions for each sample in the test phase, and then average over them. "Fig. 5" shows the heatmap based on the mean values of 10 repetitions of the above procedure.
FaFCNN and FaFCNN-w/o sparse regularization perform consistently across 10 independent replicate experiments:
Fig. 4: Comparative results of FaFCNN and its three variants on Wisconsin Breast Cancer. Dark red means no sample correlation features are used, light orange means sample correlation features are added but FAM is not used, light blue means correlation between features is not modelled using FaIM, and dark blue means FaFCNN. The number above the bar indicates the relative improvement of adding different modules compared to the base model without introducing sample correlation.
Fig. 5: The heat map of average weights of feature interactions in FaIM, calculated in the test phase of 50% perturbed Wisconsin Breast Cancer dataset. The darker the color, the greater the absolute value of the weight.
using sparse regularization, the mean values of the three metrics are 97.9%, 97.9% and 95.8%, respectively; without sparse regularization, they are 95%, 92% and 93.9%. FaFCNN shows a significant improvement in accuracy and sensitivity relative to the variant without the sparse regularization term. On the other hand, the above heat map shows that the two models capture similar feature association patterns; for example, the larger value of \(a_{3,5}\) implies that the feature interaction term at this position has a greater impact on the model prediction, and thus these two features are more correlated. In addition, the sparse regularization term works as expected by reducing the weights of relatively unimportant feature interactions while increasing the weights of critical ones (Fig. 5(b) has many more blank squares than Fig. 5(a) but darker colors at the important positions), allowing the model to discover significant feature interaction patterns in the data and reducing the computation by using only the important feature combinations in the subsequent modelling process.
## IV Conclusions
In this work, considering the advantages and disadvantages of existing methods, we propose FaFCNN, a general framework for disease classification. On the one hand, FaFCNN improves the way existing methods obtain sample correlation features, exploiting augmented features obtained by pre-training gradient boosting decision trees to capture more accurate correlations between samples in the training set. On the other hand, FaFCNN introduces a feature alignment module for smoother and more efficient feature fusion, and the feature-aware interaction module considers feature correlation and models feature interaction in a more fine-grained manner to enhance the model's representation ability. Extensive experimental results show that FaFCNN is strongly robust and achieves consistently optimal performance compared to the baseline models on multiple datasets in a low-quality data setting with 50% of the samples perturbed and with class imbalance problems.
**Acknowledgements** This study was supported by Natural Science Foundation of Hunan Province of China(grant number 2022JJ30673) and by the Graduate Innovation Project of Central South University (2023XQLH032, 2023ZZTS0304).
|
2305.00003 | Neural Network Accelerated Process Design of Polycrystalline
Microstructures | Computational experiments are exploited in finding a well-designed processing
path to optimize material structures for desired properties. This requires
understanding the interplay between the processing-(micro)structure-property
linkages using a multi-scale approach that connects the macro-scale (process
parameters) to meso (homogenized properties) and micro (crystallographic
texture) scales. Due to the nature of the problem's multi-scale modeling setup,
possible processing path choices could grow exponentially as the decision tree
becomes deeper, and the traditional simulators' speed reaches a critical
computational threshold. To lessen the computational burden for predicting
microstructural evolution under given loading conditions, we develop a neural
network (NN)-based method with physics-infused constraints. The NN aims to
learn the evolution of microstructures under each elementary process. Our
method is effective and robust in finding optimal processing paths. In this
study, our NN-based method is applied to maximize the homogenized stiffness of
a Copper microstructure, and it is found to be 686 times faster while achieving
0.053% error in the resulting homogenized stiffness compared to the traditional
finite element simulator on a 10-process experiment. | Junrong Lin, Mahmudul Hasan, Pinar Acar, Jose Blanchet, Vahid Tarokh | 2023-04-11T20:35:29Z | http://arxiv.org/abs/2305.00003v2 | # Neural Network Accelerated Process Design of Polycrystalline Microstructures
###### Abstract
Computational experiments are exploited in finding a well-designed processing path to optimize material structures for desired properties. This requires understanding the interplay between the processing-(micro)structure-property linkages using a multi-scale approach that connects the macro-scale (process parameters) to meso (homogenized properties) and micro (crystallographic texture) scales. Due to the nature of the problem's multi-scale modeling setup, possible processing path choices could grow exponentially as the decision tree becomes deeper, and the traditional simulators' speed reaches a critical computational threshold. To lessen the computational burden for predicting microstructural evolution under given loading conditions, we develop a neural network (NN)-based method with physics-infused constraints. The NN aims to learn the evolution of microstructures under each elementary process. Our method is effective and robust in finding optimal processing paths. In this study, our NN-based method is applied to maximize the homogenized stiffness of a Copper microstructure, and it is found to be 686 times faster while achieving 0.053% error in the resulting homogenized stiffness compared to the traditional finite element simulator on a 10-process experiment.
## 1 Introduction
The research on investigating process-structure-property relationships has become more prevalent with the introduction of the Integrated Computational Materials Engineering (ICME) paradigm [1], which aims to solve complex and multi-scale material design problems. As a result of the recent advancements following the introduction of ICME, several aspects of computational materials science and process engineering have significantly improved. For instance, new methodologies are developed for ICME to lower the costs and risks associated with the processing of new materials [2; 3]. Computational experiments are exploited to eliminate the traditional trial-error approach in bridging material features and properties. The investigation of the process-(micro)structure-property linkages requires a multi-scale approach that connects the macro-scale (process parameters) to meso (homogenized properties) and micro (crystallographic texture) scales. Physics-based multi-scale modeling of materials provides an opportunity to achieve the optimum design of materials with enhanced properties. However, their high computational cost prevents these multi-scale approaches from being widely adopted and used by industry in real material design efforts [4]. Nevertheless, the process-structure problem is studied less than the structure-property problem as the physics behind the process-structure problem is generally more complicated. Machine learning tools have demonstrated promise to address this gap by building low-cost process-structure-property surrogate models to replace computationally expensive physics-based models [5].
Control of polycrystalline microstructures is important in material design, processing, and quality control since the orientation-dependent material properties
(e.g., stiffness) could change as the underlying microstructures evolve during a deformation process. Therefore, identification of the optimal processing route to produce a material with desired texture and properties plays an important role in materials research. Traditional experimental approaches used to explore the optimum processing routes for a given texture are usually based on trial and error and, thus, can be tedious and expensive. Hence, computational methods are developed to replace these experiments to accelerate the design of microstructures with desired textures. Fast and accurate prediction of texture evolution during processing can significantly contribute to linking the current material design and manufacturing efforts for polycrystalline materials.
Current research in this field is focused on process-structure [6; 7; 8; 9; 10; 11] or structure-property [12; 13; 14; 15; 16; 17; 18] relationships separately. For example, Sarkar et al. [6] developed a surrogate model for \(ZrO_{2}\)-toughened \(Al_{2}O_{3}\) ceramics to predict the sinter density and grain size from sintering heat treatment process parameters. In another study, Tapia et al. [8] established a Gaussian process regression-based surrogate model for the heat treatment process of NiTi shape-memory alloys where the input parameters are the heat treatment temperature and its duration, as well as the initial nickel composition, and the output is the final nickel composition after heat treatment. Next, Acar and Sundararaghavan [12] developed a linear solver-based multi-scale approach using reduced-order modeling to design target microstructures that optimize homogenized material properties. In order to learn the reduced basis functions that can adequately represent the crystallographic texture of polycrystalline materials, Achargee et al. [13] and Ganapathysubramanian et al. [14] used the proper orthogonal decomposition (POD) and method of snapshots in Rodrigues orientation space [19] to
create a continuum sensitivity-based optimization technique to compute the material properties that are sensitive to the microstructure. Similarly, Kalidindi et al. [15] designed the microstructure for a thin plate with a circular hole at the center to maximize the uniaxial load-carrying capacity of the plate without plastic deformation. There are a few other studies available in the literature that explore the process-structure and structure-property relationships for additively manufactured materials [20; 21; 22]. Very recently, Dornheim et al. [23] developed a model-free deep reinforcement learning algorithm to optimize the processing paths (up to 100 combinations) for a targeted microstructure. Their algorithm does not require prior samples; rather, it can connect with the processing simulations during optimization. They also extended the method to solve multi-objective optimization problems. In another study, Honarmandi et al. [24] proposed a novel framework based on batch Bayesian optimization to solve the inverse problem of finding the material processing specifications using microstructure data. Using both low-fidelity and high-fidelity phase field models, they developed a Gaussian process regression-based surrogate model to replace the computationally expensive process models and integrated it into inverse design optimization.
Recent studies [11; 25] model the texture evolution of polycrystalline materials using a probabilistic representation called orientation distribution function (ODF), which describes the volume density of crystals of different orientations in a microstructure. Predicting the changes of the microstructural texture after applying a particular deformation process (e.g. applying shear force along a particular direction for 1 second) involves solving the ODF conservation equation, which is a differential equation [25; 26]. The traditional physics-based solution is developed using the finite element method to solve the conservation equation
numerically [26]. The change in the ODFs during processing also controls the homogenized (meso-scale) mechanical properties of the microstructures. Significant challenges arise when we carry out a large-scale search task to identify microstructures with desired orientation-dependent properties. In this context, we expect to optimize some homogenized mechanical properties by sequentially applying different deformation processes.
Figure 1: Schematic of the contribution of this study. Data-driven surrogate model is developed to replace the physics-based simulator on the process-structure-property problem.
The optimal processing path could be found by searching algorithms coupled with a process simulator of microstructure evolution. However, in practice, even a simple task (e.g., given an arbitrary initial texture and 5 different deformation processes and their combinations, finding the exact optimal path of applying 10 processes sequentially) could be time-consuming to solve using traditional physics-based simulators due to both the simulation time requirements and the exponentially growing number of possible processing paths. Concurrent multi-process modeling can further increase the computational complexity of the material design. Simulation speed becomes the bottleneck when confronting these tasks. Therefore, this study develops a supervised learning approach together with a local search algorithm to explore and bridge the process-microstructure-property relationships of polycrystalline materials.
Neural networks have already been shown to be revolutionary function approximators in this era of abundant data and have been adopted in the literature [27; 28; 29; 30; 31] to study process-structure-property linkages in order to build low-cost surrogate models. Inspired by physics-informed neural networks (PINN) [32], we develop a surrogate neural network trained on a small dataset to replace the physics-based simulator in this process design task. The overall contribution of this study is summarized in Fig. 1. The high-speed inference of a pre-trained neural network could accelerate the algorithm and therefore enable large-scale searching. Compared to the finite element (FE) simulator, this method is faster with only a small trade-off in prediction accuracy. The organization of this article is as follows: Section 2 and Section 3 discuss the mathematical formulation for modeling deformation processing and polycrystalline microstructures, respectively. Section 4 and Section 5 describe the physics-based and data-driven modeling of the process-microstructure relationship, respectively.
The performance of the neural network-based surrogate model of process design for improved mechanical performance is reported in Section 6. Finally, Section 7 provides a summary of the paper and a discussion on potential future work.
## 2 Mathematical Modeling of Deformation Process with ODF Approach
Multiple crystals with various crystallographic orientations make up a polycrystalline material, and these orientations determine the microstructural texture, which is mathematically described by the orientation distribution function (ODF). During a deformation process, the texture of a polycrystalline microstructure changes under applied loads. Deformation process modeling with the ODF approach is computationally efficient compared to the expensive finite element solver. The ODF, denoted by \(A(\mathbf{r},t)\), indicates the volume density of the crystals in the orientation space, \(\mathbf{r}\), at a given time \(t\). ODFs are discretized over the Rodrigues orientation space using finite element techniques. The reduced region, called the fundamental region \(\Omega\), is induced from the initial Rodrigues orientation space based on the crystallographic symmetry of the polycrystal system. The Rodrigues angle-axis parameterization method is used to depict the various crystal orientations. In contrast to the Euler angles representation [33; 34], this method uses axis-angle representations to express crystal orientations. The Rodrigues parameterization is described by a scaling of the axis of rotation, \(\mathbf{n}\), as \(\mathbf{r}=\mathbf{n}\tan(\frac{\theta}{2})\), where \(\theta\) is the angle of crystal rotation. For further information on Rodrigues parameterization of microstructural solution spaces, interested readers are referred to Refs. [11; 26].
The ODF, \(A(\mathbf{r},t)\), could be used to compute homogenized elastic stiffness \(<C>\)
through its volume integration over the fundamental region, \(\Omega\):
\[<C>=\int_{\Omega}C(\mathbf{r})A(\mathbf{r},t)\;dV \tag{1}\]
where \(C(\mathbf{r})\) includes the single-crystal material properties required to compute the homogenized elastic stiffness values given by \(<C>\). By controlling the ODF values, desired homogenized (meso-scale) properties can be obtained as described in Eq. (1). However, the ODF (\(A(\mathbf{r},t)\geq 0\)) must satisfy the volume normalization constraint which is expressed as follows:
\[\int_{\Omega}\;A(\mathbf{r},t)\;dV=1 \tag{2}\]
The homogenized (meso-scale, volume-averaged) properties of the microstructures are obtained using the given expression in Eq. (1). Here, the integration for the homogenized properties is performed over the fundamental region by considering the lattice rotation, \(\mathbf{R}\). Given the Rodrigues orientation parameter, \(\mathbf{r}\), the rotation, \(\mathbf{R}\), can be obtained with the following expression:
\[\mathbf{R}=\frac{1}{1+\mathbf{r}\cdot\mathbf{r}}(I(1-\mathbf{r}\cdot\mathbf{r })+2(\mathbf{r}\otimes\mathbf{r}+I\times\mathbf{r})) \tag{3}\]
The finite element discretization of the microstructural orientation space is exhibited in Fig. 2. Here, each independent nodal point of the finite element mesh represents a unique ODF value for the associated crystallographic orientation. For \(N\) independent nodes and \(N_{elem}\) elements in the finite element discretization (with \(N_{int}\) integration points per element), Eq. (1) can be approximated as follows at a given time,
\[\begin{split}<C>&=\int_{\Omega}C(\mathbf{r})A(\mathbf{r},t)\,dV\\ &=\sum_{n=1}^{N_{elem}}\sum_{m=1}^{N_{int}}C(\mathbf{r}_{m})A(\mathbf{r}_{m})\omega_{m}|J_{n}|\frac{1}{(1+\mathbf{r}_{m}\cdot\mathbf{r}_{m})^{2}}\end{split} \tag{4}\]
where \(A(\mathbf{r}_{m})\) is the ODF value at the \(m^{th}\) integration point with global coordinate \(\mathbf{r}_{m}\) (orientation vector) of the \(n^{th}\) element, \(|J_{n}|\) is the Jacobian determinant of the \(n^{th}\) element, and \(\omega_{m}\) is the integration weight of the \(m^{th}\) integration point.
Considering the crystallographic symmetry, the homogenized property of Eq. (4) can alternatively be approximated in the linear form as \(<C>=P^{T}\mathbf{A}\).
Figure 2: Finite element discretization of the orientation space for face-centered cubic (FCC) microstructures. The red-colored nodal points show the independent ODF values while the blue-colored nodes indicate the dependent ODFs as a result of the crystallographic symmetries.
Here, \(P\) is the property matrix, which is a product of the single-crystal material properties and the finite element discretization of the orientation space, and \(\mathbf{A}\) is the column vector of the ODF values for the independent nodes (this work uses 76 independent nodes to model the FCC microstructure) of the finite element mesh (see Fig. 2). Moreover, the ODF should satisfy the normalization constraint (see Eq. 2). The normalization constraint is mathematically equivalent to the fact that the sum of the probabilities for having all possible crystallographic orientations in a microstructure must be one.
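As a minimal illustration of Eq. (4) and the linear form \(<C>=P^{T}\mathbf{A}\), the sketch below evaluates one homogenized property by quadrature over the integration points. The array names and shapes are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def homogenized_property(C, A, w, detJ, r):
    # C, A, w, detJ: (M,) single-crystal property, ODF value, integration
    # weight, and element Jacobian determinant at each integration point;
    # r: (M, 3) Rodrigues coordinates of the integration points.
    metric = 1.0 / (1.0 + np.einsum('md,md->m', r, r)) ** 2  # Rodrigues metric factor
    return float(np.sum(C * A * w * detJ * metric))
```

The volume normalization of Eq. (2) corresponds to the same quadrature with the property set to one everywhere, i.e., `homogenized_property(np.ones(M), A, w, detJ, r)` should return approximately 1.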
During the deformation process, the ODFs change due to the reorientation of the grains, evolving from the initial ODFs (t=0) to the final deformed ODFs (t=t). The evolution of the ODF values is governed by the ODF conservation equation, which satisfies the volume normalization constraint of Eq. 2. Equation (5) shows the Eulerian rate form of the conservation equation in the crystallographic orientation space, where \(\nabla\) denotes the gradient operator in that space [25]:
\[\frac{\partial A(\mathbf{r},t)}{\partial t}+\nabla A(\mathbf{r},t)\cdot v( \mathbf{r},t)+A(\mathbf{r},t)\nabla\cdot v(\mathbf{r},t)=0 \tag{5}\]
where \(v(\mathbf{r},t)\) is the reorientation velocity.
The texture evolution can be calculated by the microstructure constitutive model in terms of a velocity gradient (\(\mathbf{L}\)) definition (see Eq. 6 below), which is linked to \(v(\mathbf{r},t)\) by the Taylor macro-micro linking hypothesis. The Taylor hypothesis assumes that the crystal velocity gradient is equal to the macro velocity gradient [25]. To compute the reorientation velocity, a rate-independent constitutive model is adopted. The resulting \(A(\mathbf{r},t)\) (current texture), which evolves from \(A(\mathbf{r},0)\) (initial texture), is solved for by utilizing the constitutive model and the finite element representation of the Rodrigues orientation space.
Each deformation process (e.g., tension/compression and shear) yields a
particular ODF as output after applying the load for a certain time. The macro velocity gradient, \(\mathbf{L}\), for a particular process is provided as input to the crystal plasticity solver to investigate the ODF evolution during that process. While designing a process sequence to obtain the desired texture, the macro velocity gradient describing the processing route (type and sequence of the processes and strain rate) is solved as a design variable. The velocity gradient of a crystal with the orientation, \(\mathbf{r}\), can be expressed as
\[\mathbf{L}=S+\mathbf{R}\sum_{\alpha}\dot{\gamma^{\alpha}}\mathbf{\bar{T}^{ \alpha}}\mathbf{R}^{T}, \tag{6}\]
where \(S\) represents the lattice spin, \(\mathbf{R}\) indicates the lattice rotation, and \(\dot{\gamma^{\alpha}}\) and \(\mathbf{\bar{T}^{\alpha}}\) indicate the shearing rate and Schmid tensor for the slip system \(\alpha\), respectively. Here, the crystal velocity gradient is defined in terms of the time rate of change of the deformation gradient. Equation (6) demonstrates that the velocity gradient is decomposed into two components. The first component, the lattice spin \(S\), is related to the elastic deformation gradient by assuming that the deformation gradient (\(\mathbf{F}\)) is decomposed into elastic (\(\mathbf{F}^{e}\)) and plastic (\(\mathbf{F}^{p}\)) parts such that: \(\mathbf{F}=\mathbf{F}^{e}\mathbf{F}^{p}\). In particular, the lattice spin is equal to \(\mathbf{\dot{R}}^{e}(\mathbf{R}^{e})^{T}\), where \(\mathbf{R}^{e}\) is evaluated through the polar decomposition of the elastic deformation gradient: \(\mathbf{F}^{e}=\mathbf{R}^{e}\mathbf{U}^{e}\), where \(\mathbf{U}^{e}\) is the symmetric stretch tensor of the polar decomposition [35]. The second component in Eq. (6) is the rotated plastic velocity gradient (\(\mathbf{R}\mathbf{L}^{p}\mathbf{R}^{T}\)). Here, \(\mathbf{L}^{p}\) denotes the plastic velocity gradient, which is related to the combined shearing of the slip systems given by \(\sum_{\alpha}\dot{\gamma^{\alpha}}\mathbf{\bar{T}^{\alpha}}\).
The macro velocity gradient expression of Eq. (6) for different deformation processes can also be written in the following matrix form, given in Eq. (7). The detailed derivation from Eq. (6) to Eq. (7) is skipped here for brevity and it can be
found in Ref. [25].
\[\mathbf{L}=\alpha_{1}\begin{bmatrix}1&0&0\\ 0&-0.5&0\\ 0&0&-0.5\end{bmatrix}+\alpha_{2}\begin{bmatrix}0&0&0\\ 0&1&0\\ 0&0&-1\end{bmatrix}+\alpha_{3}\begin{bmatrix}0&1&0\\ 1&0&0\\ 0&0&0\end{bmatrix}\\ +\alpha_{4}\begin{bmatrix}0&0&1\\ 0&0&0\\ 1&0&0\end{bmatrix}+\alpha_{5}\begin{bmatrix}0&0&0\\ 0&0&1\\ 0&1&0\end{bmatrix} \tag{7}\]
Each matrix in Eq. (7) defines a deformation process, e.g. tension/compression (\(\alpha_{1}\)), plane strain compression (\(\alpha_{2}\)), and shear modes (\(\alpha_{3},\alpha_{4},\alpha_{5}\)).
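The decomposition in Eq. (7) can be written compactly in code. The sketch below stacks the five basis matrices and assembles \(\mathbf{L}\) from the mode coefficients \(\alpha\); names are illustrative. Note that a binary choice over five elementary processes yields \(2^{5}-1=31\) nonempty combinations, consistent with the 31 processing modes considered in the experiments below.

```python
import numpy as np

# Basis matrices of Eq. (7): tension/compression (alpha_1), plane strain
# compression (alpha_2), and the xy, xz, yz shear modes (alpha_3..alpha_5).
L_BASIS = np.array([
    [[1.0, 0.0, 0.0], [0.0, -0.5, 0.0], [0.0, 0.0, -0.5]],
    [[0.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, -1.0]],
    [[0.0, 1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 0.0]],
    [[0.0, 0.0, 1.0], [0.0, 0.0, 0.0], [1.0, 0.0, 0.0]],
    [[0.0, 0.0, 0.0], [0.0, 0.0, 1.0], [0.0, 1.0, 0.0]],
])

def velocity_gradient(alpha):
    # alpha: (5,) coefficients (alpha_1, ..., alpha_5) of Eq. (7)
    return np.einsum('i,ijk->jk', np.asarray(alpha, dtype=float), L_BASIS)
```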
We define a processing path of \(n\) steps with a fixed deformation duration time \(\Delta t\) in an autoregressive pattern below. \(\phi_{F_{i}}(A)\) stands for a deformation process with load \(F_{i}\) (e.g., tension/compression and shear modes defined through the velocity gradient, \(\mathbf{L}\), in the finite element simulations) for a duration \(\Delta t\) on a polycrystalline microstructure described by the ODF, \(A(r,i\Delta t)\), at time \(i\Delta t\):
\[\begin{split} A(\mathbf{r},\Delta t)&=\phi_{F_{0}}(A(\mathbf{ r},0))\\ A(\mathbf{r},2\Delta t)&=\phi_{F_{1}}(A(\mathbf{r},\Delta t))\\ &\vdots\\ A(\mathbf{r},n\Delta t)&=\phi_{F_{n-1}}(A(\mathbf{r},(n-1)\Delta t)) \end{split} \tag{8}\]
Here \(F_{i}\in\mathcal{F}\) is picked from a pre-specified set \(\mathcal{F}\coloneqq\{f_{1},\ldots,f_{k}\}\) (e.g., a set of shear forces \(\{xy,xz,yz\}\)) of deformations we can apply in each step. An optimal processing path \(P^{*}\) for an initial ODF \(A(\mathbf{r},0)\) is a sequence of deformation processes that achieves the best ODF set \(A^{*}(\mathbf{r},n\Delta t)\) at time \(n\Delta t\). \(A(\mathbf{r},n\Delta t)\) is evaluated by the desired orientation-dependent properties, which are computed as an integral
of \(A\) and single-crystal material property matrix (weight function) \(C(\mathbf{r})\) over the fundamental region \(\Omega\) (as shown in Eq. (1)).
\[\begin{split}\max_{A}\,F=\sum_{i=1}^{6}\omega_{ii}<C_{ii}>+\,\sum_{i<j}^{6}\omega_{ij}<C_{ij}>\\ \text{subject to}\quad\int_{\Omega}A(\mathbf{r},i\Delta t)\,dV=1\\ A(\mathbf{r},i\Delta t)\geq 0\end{split} \tag{9}\]
where the composite objective function (\(F\)) includes the summation of the constants of the \(6\times 6\) anisotropic homogenized elastic stiffness matrix (\(<C_{ii}>\), \(<C_{ij}>\)) that are computed using the formulation given in Eq. (1). In this problem, the diagonal entries of the homogenized stiffness matrix (\(<C_{ii}>\)) are assumed to be more important, and thus multiplied by a weight factor of \(\omega_{ii}=1\), while the cross-diagonal terms (\(<C_{ij}>\)) are multiplied by a weight factor of \(\omega_{ij}=\frac{1}{2}\). Note that the symmetric terms (\(<C_{ij}>\) and \(<C_{ji}>\)) are only counted once (through the \(i<j\) condition in the second term of the objective function) in the summation formula. The underlying idea behind this composite objective function is to improve the homogenized elastic stiffness of the microstructure in different directions rather than improving the stiffness along a particular direction.
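For concreteness, the composite objective of Eq. (9) can be sketched as follows for a given \(6\times 6\) homogenized stiffness matrix; this is a minimal reading of the stated weights, with names chosen for illustration.

```python
import numpy as np

def composite_objective(C):
    # C: (6, 6) homogenized elastic stiffness matrix in GPa
    diag = float(np.trace(C))                      # diagonal terms, w_ii = 1
    off = float(C[np.triu_indices(6, k=1)].sum())  # each i < j pair counted once
    return diag + 0.5 * off                        # off-diagonal weights w_ij = 1/2
```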
To address this task, we need 1) an efficient deformation simulator \(\phi_{F}(A)\) and 2) a path-searching algorithm. For the algorithm, much work has been done on such local search problems. These algorithms work by making a sequence of decisions locally to optimize the objective function, with well-known methods including simulated annealing. In this work, we focus on a novel approach to building up \(\phi_{F}(A)\) with a neural network. As a function approximator, a neural network can learn the behavior of different deformations accurately from data and predict quickly to accelerate process design. It is chosen to address the challenge
arising from the numerically intractable nature of the processing design problem and the speed bottleneck of a traditional simulator.
## 3 End-to-end deformation prediction with Neural Networks
We aim to develop an efficient neural network to replace the FE predictor in computing deformation results. In this data-driven method, we no longer focus on solving the conservation equation presented in Section 2 but exploit some physical constraints to build up a surrogate neural network. The neural network aims to approximate the deformation process \(\phi_{F}\):
\[A(\mathbf{r},\Delta t)=\phi_{F}(A(\mathbf{r},0)),\ \ \mathbf{r}\in\Omega \tag{10}\]
where \(A(\mathbf{r},0)\) denotes the ODFs before the process and \(A(\mathbf{r},\Delta t)\) denotes the ODFs after applying a deformation \(F\) of duration \(\Delta t\).
We define \(NN_{F}(A;\theta)\) to be the surrogate neural network model with parameters \(\theta\). In this work, we employ a multilayer perceptron (MLP) to approximate \(\phi_{F}(A)\). For an MLP with \(L\) hidden layers, we have:
\[\begin{split} a^{(0)}&=x\\ z^{(i+1)}&=M^{(i)}a^{(i)}+b^{(i)}\\ a^{(i+1)}&=\varphi^{(i+1)}(a^{(i)})=\xi^{(i+1)}(z^{(i+1)})\\ NN_{F}&=\varphi^{(L)}\circ\cdots\circ\varphi^{(1)} \end{split} \tag{11}\]
where \(x\) is the model input and \(\xi^{(i+1)}\) denotes the activation function of layer \(i+1\). Considering that the ODF stands for the probability density over the orientation space, the ODF non-negativity and volume normalization constraints must be satisfied. A ReLU followed by a normalization layer, which divides the output by a material-specific weighted sum of the volume-fraction constants and the previous layer's outputs, is applied to keep the network outputs physically feasible.
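A minimal sketch of this output constraint is given below, assuming per-node volume-fraction weights \(v\) such that the discretized form of the normalization constraint of Eq. (2) reads \(\sum_{i}v_{i}A_{i}=1\); this is one plausible reading of the layer described above, not the authors' exact implementation.

```python
import numpy as np

def constrain_odf(z, v, eps=1e-12):
    # z: (N,) raw outputs of the final linear layer; v: (N,) volume-fraction
    # weights of the independent ODF nodes (assumed given by the FE mesh)
    A = np.maximum(z, 0.0)             # ReLU enforces the non-negativity A >= 0
    return A / max(float(v @ A), eps)  # rescale so that sum(v * A) = 1
```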
The model parameters can be learned by minimizing the error between the exact ODFs and the network predictions. One common choice is using the squared error to match them. Here we introduce objective-specific value weights to assign different importance to different elastic stiffness constants. Considering that the integral for the homogenized properties in Eq. (1) over a discretized fundamental region is well approximated by the weighted sum (Eq. (4)), the weighted mean squared error (WMSE) is a more appropriate choice:
\[WMSE=\frac{1}{B}\frac{\sum_{i=1}^{n}w_{i}\|y_{i}-y_{i}^{*}\|_{2}^{2}}{\sum_{i=1}^{n}w_{i}} \tag{12}\]
where \(y\) and \(y^{*}\) stand for the true and predicted ODF values, respectively, \(w_{i}\) are the orientation weights used to calculate the desired objective material property, and \(B\) is the batch size.
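A minimal sketch of Eq. (12) is given below, under the assumed reading that \(i\) indexes the ODF components of each sample and the weighted error is averaged over the batch.

```python
import numpy as np

def wmse(y_true, y_pred, w):
    # y_true, y_pred: (B, N) batches of ODF vectors; w: (N,) orientation weights
    per_sample = (w * (y_true - y_pred) ** 2).sum(axis=1) / w.sum()
    return float(per_sample.mean())
```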
In this section, we evaluate our method on a stiffness optimization task. We find a minimal difference in the predicted ODFs and objective property values compared with the traditional FE method. In addition, the neural network predicts hundreds of times faster than traditional methods, which enables large-scale searching.
### Model Setup
We consider a stiffness optimization task of finding the best texture evolution path by applying 31 different possible deformation processing modes (tension, compression, and shear along the xy, xz, and yz directions, and their combinations, excluding the case in which none of them is selected). The fundamental region is discretized with 76 independent nodes (the FCC microstructure is modeled
with 76 independent ODF values). Only one of the deformation processes can be applied in one step. Each step is 0.1 seconds in duration. An evolution path consists of 10 sub-processes executed in order. The elastic stiffness is selected as the objective mechanical property to be maximized while the material of interest is Copper (Cu). The following single crystal properties are taken for Copper (Cu): \(C_{11}=C_{22}=C_{33}=168\) GPa, \(C_{12}=C_{21}=C_{13}=C_{23}=C_{31}=C_{32}=121.4\) GPa, and \(C_{44}=C_{55}=C_{66}=75.4\) GPa [25]. The objective function is defined as the sum of the homogenized elastic stiffness constants (\(C\)) using a composite function definition with weights of 1 for diagonal entries (\(C_{ii}\)) and weights of 0.5 for off-diagonal entries (\(C_{ij}\), \(i\neq j\)) of the elastic stiffness matrix. The symmetric off-diagonal (\(C_{ij}\), \(i\neq j\)) entries are only considered once. The synthetic dataset of size 5000 is uniformly initialized and normalized to keep the ODFs feasible according to the unit volume normalization constraint (Eq. (2)). The deformation mode results of each initial ODF are generated by the FE solver. In the following experiment, data points used for training and testing are selected randomly. The dataset is also randomly divided by an 80%/20% train-test ratio.
### Searching Algorithm and network structure
Due to the complex nature of the combinatorial optimization problem, calculating all \(31^{10}\) results is impractical even with a fast NN. Instead, we employ a heuristic algorithm that searches for the optimal process path by sampling from a multinomial distribution constructed from the predicted values at each stage.
The neural network has 1 hidden layer of 760 neurons with hyperbolic tangent activation functions, except for the final ReLU. ReLU and normalization are applied in the last layer to satisfy the physical constraints. The network is trained with a mini-batch size of 128. ADAM with warm restarts [36] is used for network training. During the search, 1000 restarts with weight base \(\beta=5\) are performed for each initial texture.
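The sampling-based search can be sketched as follows. The exact mapping from predicted objective values to sampling probabilities is not spelled out in the text; the \(\beta\)-power weighting below is one assumed interpretation of the 'weight base', and `predict_all`/`objective` are hypothetical stand-ins for the surrogate network and the composite objective of Eq. (9).

```python
import numpy as np

def search_path(predict_all, objective, A0, n_steps=10, n_restarts=1000,
                beta=5.0, seed=0):
    # predict_all(A) -> (31, N) surrogate one-step ODFs for every candidate mode
    # objective(A)   -> scalar composite stiffness objective
    rng = np.random.default_rng(seed)
    best_path, best_val = None, -np.inf
    for _ in range(n_restarts):
        A, path = A0, []
        for _ in range(n_steps):
            cand = predict_all(A)
            vals = np.array([objective(c) for c in cand])
            p = beta ** (vals - vals.max())  # assumed 'weight base' weighting
            p /= p.sum()
            k = int(rng.choice(len(cand), p=p))
            path.append(k)
            A = cand[k]
        val = objective(A)
        if val > best_val:
            best_val, best_path = val, list(path)
    return best_path, best_val
```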
### Network performance and searching samples
Figure 3 depicts the comparison between the textures obtained from the neural network prediction and FE simulator at different time steps of processing. The loading conditions are as follows: for Fig. 3 (a) and (d) shear force in the YZ
plane, for Fig. 3 (b) and (e) a combined force of tension, compression, and shear in the XZ and YZ planes, and for Fig. 3 (c) and (f) a combined force of tension, compression, and shear in the YZ plane. The performance of the neural network is also evident from Fig. 4 and Table 1, which present the relative average errors of the independent ODFs over the fundamental region for different deformation modes.
Figure 3: The comparison between the predicted textures by the neural network model (a, b and c) and finite element crystal plasticity model (d, e and f) at time steps t=0.3 sec (a and d), t=0.8 sec (b and e), and t=1 sec (c and f).
The computation time for predicting all 31 deformation modes' results from an arbitrary set of initial ODF values with the neural network is 0.2213 s, while the traditional finite element simulator takes 152 s on average when running with 31-process parallelism; the neural network thus gives predictions 686.85 times faster. The predictions of the neural network were generated on an NVIDIA(R)
Tesla P100-16G with a single-core Intel(R) Xeon(R)@2.3 GHz, while the simulator was run on an AMD(R) EPYC C2D-highcpu-32@2.45 GHz. The average difference between the elastic stiffness values predicted by the physics-based model and the neural network after 10 processes is 0.4663 GPa.
The neural network-based surrogate model replaced the physics-based finite element model to find the optimum processing paths.
Figure 5: Texture evolution prediction through the optimum processing path by the neural network surrogate model. The figure shows the different steps of deformation processing from an initial texture to a final optimum texture which maximizes an objective function defined for the homogenized elastic stiffness constants of a Copper microstructure. T and C stand for tension and compression, respectively, and XY, XZ & YZ represent the corresponding shear processes.
It has already been mentioned that an optimization problem is formulated with the goal of maximizing the sum of the elastic stiffness constants of Copper. The solution of the optimization problem provides a maximum objective function value together with the corresponding texture; it also suggests an optimum processing route to obtain the optimum texture.
Figure 6: Texture evolution through the optimum processing path by the physics-based simulator. The figure shows the different steps of deformation processing from an initial texture to a final optimum texture which maximizes an objective function defined for the homogenized elastic stiffness constants of a Copper microstructure. T and C stand for tension and compression, respectively, and XY, XZ & YZ represent the corresponding shear processes.
We determine the processing route for five different initial textures. Though the optimum solutions vary with the initial textures, almost equal objective function values are obtained for all cases. The average of the sum of the maximum stiffness constants of Copper is found to be 885.2 GPa from the neural network surrogate model, whereas this value is 884.9 GPa for the FE-based process model.
Starting from a given initial texture, the neural network predicted the optimum processing path, along with the texture evolution after each step, to obtain the optimum final texture, as shown in Fig. 5. Similarly, the optimum process route and texture evolution obtained from the physics-based model for another initial texture are displayed in Fig. 6. Here, T and C stand for tension and compression, respectively, and XY, XZ & YZ represent the corresponding shear processes. The strain rate is constant for all steps (i.e., 1 \(s^{-1}\)).
Figure 7: A single crystal optimum texture is obtained using linear programming to maximize the homogenized elastic stiffness constants without considering processing.
The processing is allowed to have only one loading condition or combined loading conditions, as reported in Fig. 5 and Fig. 6. Both models are found to provide polycrystalline textures as optimum solutions.
If we solve the same optimization problem using linear programming without considering the processing (only structure-property problem), a single crystal texture is found as an optimum ODF solution (see Fig. 7). In this case, the maximum objective function value is 896.2 GPa, which indicates the theoretical maximum value of the objective function. On the other hand, the neural network surrogate model, which accounts for processing (process-structure-property problem), provides an optimum texture with 885.2 GPa for the objective function value (which is a higher value compared to the randomly oriented texture providing an objective function value of 878.4997 GPa). However, the theoretical solution of the structure-property problem results in a single crystal solution while the solutions are polycrystalline textures for the process-structure-property problems using both the neural network and physics-based model. Even though single crystal textures provide the theoretically possible maximum value for elastic stiffness constants, their manufacturing is difficult. On the other hand, the polycrystalline textures of the process-structure-property problem can easily be manufactured with the presented simple deformation processing modes. Therefore, considering the effects of the processing is significant for manufacturing and bridging materials design and manufacturing [37].
This work can be a valuable addition to the literature for investigating the process-structure-property linkages of polycrystalline microstructures. The neural network surrogate model can be a substitute for the physics-based simulator as it is computationally faster and accurate. Moreover, the processing-produced optimum textures predicted by the surrogate model can still improve the mechanical properties (e.g., elastic stiffness) to near the maximum theoretical values.
## 4 Conclusions
In this work, we developed a surrogate neural network model to accelerate the process design of polycrystalline microstructures. The traditional physics-based FE simulator was found to be inefficient for the large-scale searching task. With the representation potential of the neural network, a pre-trained network can predict the ODF after deformation quickly and accurately. An example design problem was solved to find the optimum processing route maximizing an objective function defined for the homogenized elastic stiffness constants of Copper. The results demonstrate a good match between the predictions of the physics-based simulator and the neural network surrogate model. Studies on the accumulated-error analysis of the neural network are reserved for future work. Future work may also involve the integration of a similar data-driven modeling strategy for the concurrent multi-scale modeling of metallic components.
## 5 Acknowledgements
Material in this paper is based upon work supported in part by the Air Force Office of Scientific Research under award number FA9550-20-1-0397. MH and PA also acknowledge the support from the Air Force Office of Scientific Research Young Investigator Program under grant FA9550-21-1-0120 and from the National Science Foundation under award number 2053840.
## 6 Data availability
The raw/processed data required to reproduce these findings cannot be shared at this time as the data also forms part of an ongoing study. The data will be made available upon request.
|
2305.00995 | Towards a Phenomenological Understanding of Neural Networks: Data | A theory of neural networks (NNs) built upon collective variables would
provide scientists with the tools to better understand the learning process at
every stage. In this work, we introduce two such variables, the entropy and the
trace of the empirical neural tangent kernel (NTK) built on the training data
passed to the model. We empirically analyze the NN performance in the context
of these variables and find that there exists correlation between the starting
entropy, the trace of the NTK, and the generalization of the model computed
after training is complete. This framework is then applied to the problem of
optimal data selection for the training of NNs. To this end, random network
distillation (RND) is used as a means of selecting training data which is then
compared with random selection of data. It is shown that not only does RND
select data-sets capable of outperforming random selection, but that the
collective variables associated with the RND data-sets are larger than those of
the randomly selected sets. The results of this investigation provide a stable
ground from which the selection of data for NN training can be driven by this
phenomenological framework. | Samuel Tovey, Sven Krippendorf, Konstantin Nikolaou, Christian Holm | 2023-05-01T18:00:01Z | http://arxiv.org/abs/2305.00995v1 | # Towards a Phenomenological Understanding of Neural Networks: Data
###### Abstract
A theory of neural networks (NNs) built upon collective variables would provide scientists with the tools to better understand the learning process at every stage. In this work, we introduce two such variables, the entropy and the trace of the empirical neural tangent kernel (NTK) built on the training data passed to the model. We empirically analyze the NN performance in the context of these variables and find that there exists correlation between the starting entropy, the trace of the NTK, and the generalization of the model computed after training is complete. This framework is then applied to the problem of optimal data selection for the training of NNs. To this end, random network distillation (RND) is used as a means of selecting training data which is then compared with random selection of data. It is shown that not only does RND select data-sets capable of outperforming random selection, but that the collective variables associated with the RND data-sets are larger than those of the randomly selected sets. The results of this investigation provide a stable ground from which the selection of data for NN training can be driven by this phenomenological framework.
_Keywords_: Neural Tangent Kernel, Data-Centric AI, Random Network Distillation, Statistical Physics of Neural Networks, Learning Theory
## 1 Introduction
Neural Networks (NNs) are a powerful tool for tackling an ever-growing list of data-driven challenges. Training NNs is a problem of model fitting over a very large, in some cases infinite, parameter space Rasmussen and Williams (2005): in their finite width regimes, NNs are powerful feature learning devices, and in their infinite regimes, they are regression-driven universal approximators of functions Hornik et al. (1989). These methods have experienced terrific success both in day-to-day technology, including speech recognition, tailored advertising, and medicine, and in many scientific fields. Whilst theoretical methods have been making steady headway into understanding the processes underlying machine learning, what is still absent is a simple, physically inspired, phenomenological framework to understand NN training. That is, a model that describes the learning process independent of the microscopic variables that go into training and deployment, ideally motivated by well-studied physical principles. These variables include the model complexity defined by the number of layers, layer width, and propagation algorithm used by the neural network; the data-set used to train the model, such as the size of the set or its coverage of the problem space; and finally the algorithms used to minimize the chosen loss function and train the NN, such as the optimizer or even the loss function itself. In this work, NN performance is analyzed in terms of the initial state of the empirical neural tangent kernel (NTK) (see 2.1). Use of the NTK arises naturally here as it holds crucial information on training dynamics, involving both the NN and the data on which it is trained. With this approach, we are interested in a universally calculable set of variables from the NTK which can be used to analyse NN behaviour across data-sets and architectures. As the spectrum of the NTK has been observed several times to be sparse (i.e., dominated by a single eigenvalue and smaller ones, the vast majority of which are 0), it seems feasible to compress the information in the NTK down to a few collective variables, in this case, the trace of the NTK and the entropy computed from its eigenvalues. Such a framework should allow us to optimise the training process. To do this, we identify how these variables are related to training performance (e.g., generalisation error) and then use this information to optimise NN training. In particular, this framework is applied here to the problem of data selection for the training of NNs. Namely, random network distillation (RND) is examined as a method that constructs data-sets for which the collective variables are larger than those of a randomly selected set, resulting in improved generalization. Novel observations include the correlation of the starting entropy and trace of the NTK of an NN with model performance, as well as insight into why RND is so performant. The results presented here provide a clear path for future investigations into the construction of a phenomenological theory for machine learning training built upon foundations in physically motivated collective variables.
### Related Work
Research into the NTK has exploded in the last decade. As such, several groups have made promising steps in directions somewhat aligned with the work presented here. Kernel methods as applied to NNs were introduced as far back as the 1990s when initial results were found on the relationship between infinite width NNs and Gaussian processes Neal (1995) (GPs). Since then, focus has shifted towards an alternative kernel representation of NNs, namely, the NTK. The theory developed in this paper is built upon the empirical NTK, that is, the NTK matrix computed for a finite size NN on a fixed data-set. Work on the NTK first appeared in the 2018 paper by Jacot et al. (2018) where it was introduced as a means of understanding the dynamics of an NN during training as well as to better characterize their limits. Since then, the NTK has been used as a launching pad for a large number of investigations into the evolution of NNs. In the direction of eigenspectrum analysis, Gur-Ari et al. (2018) describes the splitting of the eigenvectors of the Hessian during training and how this affects gradient descent. Their research shows that the gradients converge to a small subspace spanned by a set of eigenvectors of the Hessian, the dimension of which is determined by the problem complexity, e.g, the number of classes in a data-set. In their 2021 paper, Ortiz-Jimenez et al. (2021) extend the work of Ortiz-Jimenez et al. (2020) by further discussing the concept of neural anisotropy directions (NADs) and how they can be used to explain what makes training data optimal. They find that NNs, linearized or not, sort complexity in a similar way using the NADs. Further, they draw upon foundations in kernel theory, specifically that the complexity of a learning problem is bound by the kernel norm chosen for the task. This reduces to stating that the goal of a learning model is to fit the eigenfunctions of the kernel. When applying this to NNs, they discovered that NNs struggle to learn on eigenfunctions with small associated eigenvalues. Whilst these studies have aimed to characterize the NTK in its static form, some prior work has been done on understanding the evolution of this kernel during training. In their 2022 paper, Krippendorf and Spannowsky (2022) demonstrated a duality between cosmological expansion and the evolution of the NTK trace throughout training. Mathematically, this involved re-writing NN evolution as a function of the eigenvalues of the NTK, a formulation drawn upon in
this work to highlight the role of our collective variables. Of the above examples, all are directly related to NNs. However, most NN theory finds its foundations in kernel theories as they have been thoroughly studied and allow for exact solutions. It has been long-established in the kernel regression community that the use of maximum entropy kernels can provide fitting models with the best base from which to fit. The concept of these kernels was first described by Tsuda and Noble (2004) wherein they demonstrate that the diffusion kernel is built by maximising the von-Neumann entropy of a data-set.
This work aims to extend the previous studies presented here to finite-size NNs with a focus on the data-selection process. The remainder of the paper is structured as follows. In the next section, the theoretical background required to understand the collective variables is developed. Following this, two experiments are introduced and their results discussed. The first of these experiments involves understanding the role of the collective variables in model training. The second looks at using these collective variables to explain why RND data selection outperforms randomly selected data-sets. Finally, an outlook on the framework is presented and future work discussed.
## 2 Preliminaries
Throughout this work, several important concepts related to information theory, machine learning, and physics are relied upon. In this section, each of these concepts is introduced and explained such that the use of our collective variables is motivated.
### Neural Tangent Kernel
The NTK came to prominence in 2018 when papers began to arise demonstrating analytic results for randomly initialized, over-parametrized, dense NNs Jacot et al. (2018), Lee et al. (2020). This research resulted in the demonstration that in the infinite width limit, NNs evolved as linear operators and provided a mathematical insight into the training dynamics. This has since been extended so that it is applicable to most NN architectures and work is currently underway towards better understanding the learning mechanism in this regime Yang (2019). Given a data-set \(\mathcal{D}\), the NTK, denoted \(\Theta\), is the Gramian matrix Horn and Johnson (1990) formed by the inner product
\[\Theta_{ij}=\sum_{k}\frac{\partial}{\partial\theta_{k}}f(x_{i},\{\theta\}) \cdot\frac{\partial}{\partial\theta_{k}}f(x_{j},\{\theta\})\, \tag{1}\]
where \(\Theta_{ij}\) is a single entry in the NTK matrix, \(f\) is an NN with a single output dimension, \(x_{i}\in\mathcal{D}\) is a data point, and \(\{\theta\}\) are the parameters of the network \(f\). Individual entries in the NTK matrix provide information about how the representation of a point \(x_{i}\) will evolve with respect to another \(x_{j}\) under a change of parameters, that is, it is an inner product between the gradient vectors formed by the NN representations of data in the training set with respect to the current state of the network. The role of the NTK matrix in NN training is best described in the infinite-width limit by the continuous time update equation for the representation of a single datapoint by an NN, which, under gradient descent, may be written
\[\dot{f}(x_{i})=-\sum_{x_{j}}\Theta_{i,j}\frac{\partial\mathcal{L}(f(x_{j},\{\theta\}),y_{j})}{\partial f(x_{j},\{\theta\})}\, \tag{2}\]
where \(\mathcal{L}\) is the loss function and \(y_{j}\) is the label corresponding to the input point \(x_{j}\) in the training data-set Lee et al. (2020).
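As an illustration of Eq. (1), the sketch below assembles the empirical NTK of a scalar-output network by forming the parameter-gradient Jacobian and taking its Gram matrix. Gradients are taken by central finite differences to keep the example self-contained (automatic differentiation would be used in practice); the function names and signatures are illustrative assumptions.

```python
import numpy as np

def empirical_ntk(f, theta, xs, eps=1e-4):
    # f(x, theta) -> scalar network output; theta: (P,) parameter vector;
    # xs: list of data points. Computes Theta_ij of Eq. (1).
    J = np.zeros((len(xs), theta.size))
    for k in range(theta.size):
        tp, tm = theta.copy(), theta.copy()
        tp[k] += eps
        tm[k] -= eps
        for i, x in enumerate(xs):
            J[i, k] = (f(x, tp) - f(x, tm)) / (2.0 * eps)
    return J @ J.T  # (len(xs), len(xs)) Gramian of parameter gradients
```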
### NTK Spectrum
In the infinite-width regime, the NTK built on a data-set is constant throughout training and therefore, may be used as an operator to step through model updates Jacot et al. (2018). However, in finite network regimes, such as those widely deployed in science and industry, this is not the case and the NTK will evolve as the model trains. In this work, the state of the NTK before training becomes a measurement device for understanding how a model will generalize. Therefore, it is crucial to have the correct tools with which to discuss and quantify this matrix. The tools used in this investigation trace their roots to random matrix theory, information theory, and physics, beginning with entropy. The Shannon entropy, \(S^{Sh}\), describes the amount of information contained within a random variable Shannon (1948) and can be written
\[S^{Sh}=-\sum_{i}p(x_{i})\ln p(x_{i}), \tag{3}\]
where \(p(x_{i})\) is the probability of the random variable taking the value \(x_{i}\). In his original work, and as is still common in information theory today, Shannon chose a logarithm of base 2 due to the binary domain of the variables considered. For the purpose of this work, we use the natural logarithm, as the variables take on continuous values. Von Neumann entropy arose in the field of quantum mechanics upon the introduction of the density matrix as a tool to study composite systems Neumann (1927). The von Neumann entropy of a random matrix, \(S^{VN}\), can be formulated similarly to the Shannon entropy as
\[S^{VN}=-tr(\rho\ln\rho), \tag{4}\]
where \(\rho\) is a matrix with unit trace. However, it is often more convenient to compute the entropy in terms of the eigenvalues of \(\rho\), denoted \(\lambda_{i}\), as
\[S^{VN}=-\sum_{i}\lambda_{i}\ln\lambda_{i}, \tag{5}\]
in which form it can be seen as a proper extension of the Shannon entropy to random matrices.
In the context of covariance matrices in statistics or density matrices in quantum mechanics, the von Neumann entropy provides a measure of correlation between states of a system Demarie (2018), Tsuda and Noble (2004).
In this work, the NTK matrix acts as a kernel matrix comparing the similarity of the gradients between points in the training data-set. The impact of these gradients on model updates is apparent when examining the work of Krippendorf and Spannowsky (2022), where the continuous-time evolution of an NN was derived as a function of the normalized eigenvalues of the NTK matrix as
\[\tilde{f}(\mathcal{D})=\mathrm{diag}(\lambda_{1},\ldots,\lambda_{N})\mathcal{ L}^{\prime}(\mathcal{D}), \tag{6}\]
where \(\tilde{f}\) is the NN under a basis transformation. Equation 6 frames NN model updates in such a way that the von Neumann entropy could become a useful tool for understanding the quality of a data-set. Namely, a higher von Neumann entropy would suggest a more diverse update step and therefore, perhaps, a better-trained model. This entropy is the first of the collective variables used throughout this work to predict model performance. It should be noted that the NTK matrix does not in general have unit trace; therefore, in the entropy calculation, the eigenvalues are scaled by their sum. Furthermore, Equation 6 highlights the role of the eigenvalue magnitudes, which act as scaling factors forcing larger update steps along the corresponding directions. We use this scaling as our second collective variable built from the NTK; in particular, we use its trace,
\[\mathrm{Tr}(\Theta)=\sum_{i}\lambda_{i}\approx\lambda_{\max}, \tag{7}\]
which turns out empirically to be well approximated by its largest eigenvalue. We note that changes in both of these variables throughout training measure the deviation from a constant NTK, as was partially studied in Krippendorf and Spannowsky (2022).
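Both collective variables can be computed directly from the eigenvalues of the NTK Gram matrix. A minimal numpy sketch, assuming only a symmetric positive semi-definite matrix as input (the small eigenvalue cut-off is an illustrative choice):

```
import numpy as np

def collective_variables(Theta):
    # eigenvalues of the symmetric NTK Gram matrix
    lam = np.linalg.eigvalsh(Theta)
    trace = lam.sum()                 # Equation 7: Tr(Theta) = sum_i lambda_i
    p = lam / trace                   # scale by the sum, since Tr(Theta) != 1 in general
    p = p[p > 1e-12]                  # drop numerically-zero modes before taking logs
    entropy = -np.sum(p * np.log(p))  # von Neumann entropy, Equation 5
    return entropy, trace
```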
### Random Network Distillation for Data Selection
RND first appeared in 2018 in a paper by Burda et al. (2018) wherein the method was introduced as an approach for environment exploration in deep reinforcement learning problems. The concept arises from the idea that the stochastic nature of a randomly initialized NN will act to sufficiently separate unique points from a data pool in their high-dimensional representation space. With this approach, an RND architecture can resolve unique points in a sample of data such that a minimal data-set can be constructed for NN training. The goal of this application is similar in nature to that of coreset approaches Feldman (2020), albeit using the model itself to provide information on the uniqueness of training data in an unsupervised manner. Figure 1 outlines graphically the process by which RND filters points from a data pool into a target set. The method works by randomly initializing two NNs, referred to here as the _target network_\(\mathcal{F}:\mathcal{R}^{\mathcal{N}}\rightarrow\mathcal{R}^{\mathcal{M}}\) and the _predictor network_\(\mathcal{G}:\mathcal{R}^{\mathcal{N}}\rightarrow\mathcal{R}^{\mathcal{M}}\), which in this study are of identical architecture. During the data selection, the target network remains untrained while the predictor network is iteratively re-trained to learn the representations produced by the target network. Theoretically, this should mean that the error between the predictor network and the target network provides a measure of whether a point has already been observed. To understand this process better, we formulate it more mathematically and discuss the steps involved individually. Consider a set \(\mathcal{P}\) consisting of points \(p_{i}\) such that \(i\in\mathcal{I}\) indexes \(\mathcal{P}\). Now consider a theoretical target set \(\mathcal{T}\subset\mathcal{P}\) consisting of points \(t_{i}\) such that each point is maximally separated from all the others within some tolerance \(\delta\). During each re-training run, the network \(\mathcal{G}\) is trained on the elements of \(\mathcal{T}\) and target values \(\mathcal{F}(t_{i}\in\mathcal{T})\). In this way, the predictor network will effectively remember the points in \(\mathcal{T}\) that it has already seen and therefore, distinguish points from \(\mathcal{P}\) that do not resemble those already in
\(\mathcal{T}\). In the case of RND for data selection, the size of \(\mathcal{T}\) is set to be \(S\) and points are selected for the target set in a greedy fashion, that is, the distance between target and predictor is computed on all data points in the point cloud and the one with the largest distance is chosen. RND for data selection is outlined algorithmically in Algorithm 1. During this study, the mean square difference between representations was used as a distance metric.
Figure 1: Workflow of RND. A data point, \(p_{i}\), is passed into the target network, \(\mathcal{F}\), and the predictor network, \(\mathcal{G}\), in order to construct the representations \(\mathcal{F}(p_{i})\) and \(\mathcal{G}(p_{i})\). A distance, \(d\), is then computed using the metric \(\mathcal{D}(\mathcal{F}(p_{i}),\mathcal{G}(p_{i}))\). If \(d>\delta\), the point, \(p_{i}\), will be added to the target set \(\mathcal{T}\) and the predictor model re-trained on the full set \(\mathcal{T}\). If \(d\leq\delta\), it is assumed that a similar point already exists in \(\mathcal{T}\) and the point is therefore discarded. In our notation, \(\left\langle\mathcal{T},\mathcal{F}(\mathcal{T})\right\rangle\) denotes the function set with domain \(\mathcal{T}\) and image \(\mathcal{F}(\mathcal{T})\).
As a general note, RND is a highly involved means of data selection: whilst the method can be applied to the construction of data-sets consisting of hundreds of points, scaling beyond this will require approximation and further algorithmic improvement. Such optimization is the subject of future research; in this paper, we therefore construct smaller data-sets in order to better understand how they impact training.
### ZnNL
All algorithms and workflows used in this study have been written into a Python Package called ZnNL1. ZnNL provides a framework for performing RND in a flexible manner on any data as well as analyzing the selected data using the collective variables discussed in this work. NTK computations are handled by the neural-tangents library Novak et al. (2020, 2022), Hron et al. (2020), Sohl-Dickstein et al. (2020), Han et al. (2022) with some additional neural network training handled by Flax Heek et al. (2020). ZnNL is built on top of the Jax framework Bradbury et al. (2018) and is currently compatible with Jax-based models.
Footnote 1: ZnNL can be found at [https://github.com/zincware/ZnNL](https://github.com/zincware/ZnNL)
```
Input: data pool \(\mathcal{P}\), target size \(S\)
while \(|\mathcal{T}|\leq S\) do
    \(D=\{d_{i}:d_{i}=\mathcal{D}(\mathcal{F}(p_{i}),\mathcal{G}(p_{i}))\ \forall\ i\in\mathcal{P}\}\)
    \(p_{\text{chosen}}=\{p_{i}:d_{i}=\max(D)\}\)
    \(\mathcal{T}=\mathcal{T}\cup p_{\text{chosen}}\)
    Re-train \(\mathcal{G}\) on \(\left\langle\mathcal{T},\mathcal{F}\left(\mathcal{T}\right)\right\rangle\)
end while
```
**Algorithm 1** Data Selection with RND
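A compact Python rendering of Algorithm 1 might look as follows. The callables `target_net`, `predictor`, and `retrain` stand in for the networks \(\mathcal{F}\), \(\mathcal{G}\) and the re-training routine described above; they are placeholders for illustration, not ZnNL's actual API.

```
import numpy as np

def rnd_select(pool, target_net, predictor, retrain, size):
    """Greedy RND data selection following Algorithm 1.

    pool:       (N, d) array of candidate points, the data pool P
    target_net: fixed random network F
    predictor:  trainable network G
    retrain:    callable re-fitting G on <T, F(T)>
    size:       desired target-set size S
    """
    target = []
    while len(target) < size:
        # mean-square distance between target and predictor representations
        d = np.mean((target_net(pool) - predictor(pool)) ** 2, axis=1)
        target.append(pool[np.argmax(d)])  # greedy pick of the most novel point
        retrain(np.stack(target))          # re-train G on the updated target set
    return np.stack(target)
```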
## 3 Experiments and Results
In order to test the efficacy of the collective variables, two experiments have been performed. The first investigates the correlation between the collective variables and model performance. In the second, RND is demonstrated to outperform random data selection, and our collective variables are then used to explain this performance.
### Investigated Data
To ensure a comprehensive study, several data-sets spanning both classification and regression ML tasks have been selected for the experiments. To further demonstrate realistic use cases of RND as a training-set generation tool, two of the problems have been chosen for their overall scarcity of data, making the construction of a small, representative data-set of the utmost importance. Table 1 describes each of the chosen data-sets including information about the ML task (classification or regression) as well as the amount of data available and the amount used in the test sets.
### Entropy, Trace, and Model Performance
In the first experiment, the correlation between our collective variables and model performance is examined. To do so, NNs were trained with a constant architecture but varying initialization and training data for the MNIST and Fuel data-sets. In addition to changing the data-set, both a dense and a convolutional model architecture were tested for the MNIST classification. Details of the experiment are summarised in Table 2. In each run, a data-set of randomly chosen size was drawn and the NN parameters randomly initialized so as to sample the entropy and trace space. The trace and entropy of the NTK were then computed at the beginning of the training process, i.e. before the first back-propagation step. The discussion to follow pertains to models initialized using a standard LeCun procedure LeCun et al. (2012). The same study has been performed for NTK-initialized models Novak et al. (2020) and is presented in Appendix A.2. Figure 2 presents the outcome of this experiment with the collective variables plotted against the minimum test loss as well as each other. In the first row, colour corresponds to the minimum test loss achieved during training. In the remaining rows, the colour represents the size of the data-set used in the training, with darker colours indicating smaller data-sets.
In the first two rows, one can see the plots of the NTK trace vs the starting entropy of the matrix. The first of these plots is coloured by the minimum test loss achieved by the model and the second row shows the data-set size. What we see here for the dense models is the formation of a loss surface wherein both the entropy and trace contribute to the performance achieved during training. In these cases, it appears as though a combination of entropy and trace is required in order to achieve maximal performance in model training. In the case of the convolutional network, this trend is not as clear. It appears that, whilst an increasing trace will aid in model performance, entropy does not show such a clear trend.
Analysing the plots of the trace against the
minimum test loss during model training, an interesting similarity appears between the different data-sets and architectures: the formation of a hull-like shape showing decreasing test loss with increasing starting trace. The results suggest that a larger starting NTK trace yields models with better generalization capacity, as demonstrated by their lower test loss. It is notable that this trend occurs across different data-sets, architectures, and initializations. Secondary to this simple relationship, there also appears to be a constraint effect: whilst at lower trace values the models can achieve low test loss, they in general take on a larger range of loss values, whereas at larger traces the spread of the values becomes slim.
Turning our attention to the entropy plots, the relationship becomes less clear. In the dense models, a similar trend can be identified, with the larger-entropy data-sets resulting in lower test loss. However, the mechanism by which this occurs differs. For MNIST, larger entropy appears to correlate linearly with lower test loss, whereas in the case of the fuel data-set, this relationship is present but slightly weaker. What is present in both is the constraint mechanism discussed for the trace vs entropy plots. It appears that data-sets with larger starting entropy, no matter their size, will take on a smaller range of minimum test losses after training. In the case of the convolutional models, this trend is non-existent, suggesting, at least for the tested architecture, that starting entropy is not an indicator of model performance. It is important to note that the starting entropy and trace values will depend on the problem and chosen architecture. For the purpose of this study, the architecture has been fixed and therefore the effects of these factors are not studied and are left to future work.
In all plots there appears to be some degree of banding with data-set size. Of note, however, is the mixing present in these bands: smaller data-sets with higher values of the collective variables achieve test losses akin to those of the larger data-sets. This mixing is evidence that it is the collective variables themselves, and not simply data-set size, that are responsible for the results.
Beyond the plots, the Pearson correlation coefficients between several variables have also been computed, and the resulting correlation matrices are presented in Figure 3. These matrices have been constructed with additional metrics in order to present the correlation between our collective variables and other properties of the model. The trends discussed in the plots can be seen here numerically to correspond to our conclusions. Relationships between the collective variables and training losses are also displayed in these matrices. In these cases, it appears that larger values of the collective variables result in larger train losses during training. An explanation could be that larger entropy and trace values correspond to fitting over more modes in a data-set and therefore to higher training losses alongside lower test losses.
These results highlight the correlation between the collective variables and model performance for standard machine learning training on different data-set sizes. In order to extend this investigation, it is important to understand how entropy changes at fixed data-set sizes can impact performance. To this end, the efficacy of data-selection methods has been studied using these collective variables.
\begin{table}
\begin{tabular}{l l l l l l}
\hline
Data-Set & Available Data & Test Data & Problem Type & Features & Source \\
\hline
MNIST & 10000 & 500 & Classification & 28x28x1 & Lecun et al. (1998) \\
Fuel Efficiency & 398 & 120 & Regression & 8 & Quinlan (1993) \\
Gait Data & 48 & 10 & Classification & 328 & Gümüsçü (2019) \\
Concrete Data & 103 & 10 & Regression & 10 & Yeh (2007) \\
\hline
\end{tabular}
\end{table}
Table 1: Table outlining the problems chosen for the experiments. In the case of MNIST, 10000 of the 60000 total data points were selected at random before the experiments took place.
\begin{table}
\begin{tabular}{l c c c c}
\hline
Data-set Name & Architecture & \# Samples & Max Accuracy & Min Test Loss \\
\hline
Fuel Dense & \(\left(\mathcal{D}^{128},\mathcal{D}^{128},\mathcal{D}^{1}\right)\) & 7075 & N/A & 0.051 \\
MNIST Dense & \(\left(\mathcal{D}^{128},\mathcal{D}^{128},\mathcal{D}^{10}\right)\) & 5480 & 95.000 & 0.015 \\
MNIST Conv. & \(\left(\mathcal{C}^{32}_{2\times 2},\mathcal{AP}^{4\times 4}_{2\times 2},\mathcal{C}^{64}_{2\times 2},\mathcal{AP}^{4\times 4}_{2\times 2},\mathcal{D}^{128},\mathcal{D}^{10}\right)\) & 3082 & 99.000 & 0.008 \\
\hline
\end{tabular}
\end{table}
Table 2: Parameters used in the study of entropy and NTK trace with respect to model training. Network architecture nomenclature is defined in Table 1. ReLU activation has been used between hidden layers and an ADAM optimizer in the training.
### Random Network Distillation
With the results thus-far suggesting a relationship between entropy, NTK trace, and model performance,
the remainder of the experiments will pertain to testing and interpreting the performance of RND as a means of selecting data on which to train.
Figure 2: Figures describing the relationship between the entropy, NTK trace, and minimum test loss. Colours in the first row correspond to the minimum test loss achieved during training where a darker colour corresponds to a smaller loss. Colours in the remaining rows correspond to the size of the data-set used in the NN training with darker colour corresponding to smaller data-sets.
Figure 3: Correlation matrix of several variables in model training. Colours correspond to the numbers in the boxes.
During the RND investigations, an ensemble approach is taken in all experiments wherein each test is performed 20 times and the results averaged in order to construct meaningful statistics. In this way, the stochastic initialization of the networks and the variation in data-sets due to random selection are accounted for. In all plots, the standard error, i.e. \(\epsilon=\sigma/\sqrt{N}\), where \(\sigma\) is the standard deviation of the samples and \(N\) is the number of samples, is shown by the error bars.
In the first part of the experiment, the efficacy of RND is assessed by constructing data-sets of different sizes and comparing the minimum and final test losses with data-sets constructed using random selection. Figure 4 presents the results of this experiment. Examining first the minimum test loss, it can be seen that data-sets generated using RND consistently outperform those constructed using random data selection. This is true for all data-set sizes, problems, and ML tasks, suggesting RND is a suitable tool for optimal data-set construction. The final test losses in Figure 4 were also compared in order to identify any over-fitting effects in the models. The results of this comparison show that RND-selected data-sets not only provide a better minimum loss but also appear less susceptible to over-fitting.
These results suggest that RND is capable of producing a data-set that spans the problem domain in a minimal number of points, resulting in a low minimum loss. Furthermore, the marked reduction in over-fitting in the RND data-sets indicates that the data used in the training covered a more diverse region of the problem space, avoiding similar elements.
In the next part of the experiment, the starting von Neumann entropy and trace of the NTK matrix is computed for each data-set size and a comparison between randomly selected sets and RND selected sets is investigated. In Figure 5, the results of this investigation are presented as plots of the starting entropy and trace vs the data-set size for the different problem sets. These plots clearly show that in each case, RND selected data-sets have a higher starting entropy and/or NTK trace than those selected randomly.
To understand how these variables have an impact on training, the NTK matrix must be examined more closely. The elements of the NTK matrix describe the similarity of the gradient vectors formed by the representation of points of a data-set in the embedding space of an NN with respect to the current parameter state. Consider the NTK formed by two points selected from a data-set. If the gradient vectors computed for these points align, the inner-product will be large suggesting that they will evolve in a similar way under a parameter update of the network. In this case, the entropy of the NTK will be low as only one of the two points is required to explain this evolution. In the case where these points are almost perpendicular to one another, their inner product will be small and their entropy high as the NTK matrix takes on the form of a kernel matrix dominated by its on-diagonal entries. This will mean that during a parameter update, both points will contribute in different ways to the learning. These conclusions can be further explained with the work of Krippendorf and Spannowsky (2022) wherein a model update is written in the form of Equation 6. In this form, one can see that the update step along a specific eigenmode in the data will be scaled by the magnitude of the associated eigenvalue \(\lambda\). Therefore, a larger eigenvalue will result in a larger gradient step along this mode and ideally, better training. Such a result recommends that the trace should be maximised in order to focus on dominant eigenmodes and better train the model. However, entropy maximisation would be equivalent to increasing the number of dominant eigenmodes within the system, thereby redistributing the eigenvalues. In this way, a balance between the number of dominant modes in the system, represented by the entropy, and the scale factor of each mode, represented by the trace, should be achieved for ideal model performance.
Here it has been shown that RND selected data-sets typically produce data-sets where one or both of the trace and entropy of a data-set with respect to an NN architecture is larger than an analogous data-set chosen randomly. Whilst it seems that there is a correlation between these variables and the model performance, it is not clear thus-far how best to disentangle their individual roles in the model updates and further work is needed to explore this. Furthermore, work here has not touched upon the role of architecture in the scaling of the collective variables. This remains the topic of future investigations.
## 4 Conclusion
This work has examined the performance of finite-width NNs by studying the spectrum and entropy of the associated NTK matrix computed on training data. It has been shown that there exists a correlation between the starting entropy and trace of the NTK matrix and the model generalisation seen after training, as measured through the test loss. These collective variables enable us to quantify the effect of different data selection methods on test performance. Our results support previous work performed into understanding how modes of data-sets are learned by models, namely the relationship between larger eigenvalues and better
training. This framework has been applied to the understanding of RND as a data-selection method. The efficacy of RND has been shown on several data-sets spanning regression and classification tasks on different architectures. In order to explain this performance, it was shown that RND selects data-sets that have larger starting entropy and/or NTK trace than those selected randomly. This work acts as a step towards the construction of a general, phenomenological theory of machine learning training in terms of the collective variables of entropy and NTK trace. Future work will revolve around further disentangling the role of entropy and trace in other aspects of NN training including architecture and optimizer construction as well as better understanding their evolution during training. With the ever-growing complexity of NNs, a framework built upon physically motivated collective variables offers a rare explainable insight into the inner-workings of these complex models. The work presented here is a first step in building a deeper understanding of this framework and perhaps, will act as a platform for the construction of a comprehensive theory.
## Acknowledgements
C.H. and S.T. acknowledge financial support from the German Funding Agency (Deutsche Forschungsgemeinschaft, DFG) under Germany's Excellence Strategy EXC 2075-390740016, and S.T. was supported by an LGF stipend of the state of Baden-Württemberg. C.H. and S.T. acknowledge financial support from the German Funding Agency (Deutsche Forschungsgemeinschaft, DFG) under the Priority Program SPP 2363.
|
2310.11353 | Hybrid quantum-classical graph neural networks for tumor classification
in digital pathology | Advances in classical machine learning and single-cell technologies have
paved the way to understand interactions between disease cells and tumor
microenvironments to accelerate therapeutic discovery. However, challenges in
these machine learning methods and NP-hard problems in spatial Biology create
an opportunity for quantum computing algorithms. We create a hybrid
quantum-classical graph neural network (GNN) that combines GNN with a
Variational Quantum Classifier (VQC) for classifying binary sub-tasks in breast
cancer subtyping. We explore two variants of the same, the first with fixed
pretrained GNN parameters and the second with end-to-end training of GNN+VQC.
The results demonstrate that the hybrid quantum neural network (QNN) is at par
with the state-of-the-art classical graph neural networks (GNN) in terms of
weighted precision, recall and F1-score. We also show that by means of
amplitude encoding, we can compress information in logarithmic number of qubits
and attain better performance than using classical compression (which leads to
information loss while keeping the number of qubits required constant in both
regimes). Finally, we show that end-to-end training enables to improve over
fixed GNN parameters and also slightly improves over vanilla GNN with same
number of dimensions. | Anupama Ray, Dhiraj Madan, Srushti Patil, Maria Anna Rapsomaniki, Pushpak Pati | 2023-10-17T15:40:26Z | http://arxiv.org/abs/2310.11353v1 | # Hybrid Quantum-Classical Graph Neural Networks for Tumor Classification in Digital Pathology
###### Abstract
Advances in classical machine learning and single-cell technologies have paved the way to understand interactions between disease cells and tumor microenvironments to accelerate therapeutic discovery. However, challenges in these machine learning methods and NP-hard problems in spatial Biology create an opportunity for quantum computing algorithms. We create a hybrid quantum-classical graph neural network (GNN) that combines GNN with a Variational Quantum Classifier (VQC) for classifying binary sub-tasks in breast cancer subtyping. We explore two variants of the same, the first with fixed pretrained GNN parameters and the second with end-to-end training of GNN+VQC. The results demonstrate that the hybrid quantum neural network (QNN) is at par with the state-of-the-art classical graph neural networks (GNN) in terms of weighted precision, recall and F1-score. We also show that by means of amplitude encoding, we can compress information in logarithmic number of qubits and attain better performance than using classical compression (which leads to information loss while keeping the number of qubits required constant in both regimes). Finally, we show that end-to-end training enables to improve over fixed GNN parameters and also slightly improves over vanilla GNN with same number of dimensions.
Anupama Ray\({}^{1}\), Dhiraj Madan\({}^{1}\), Srushti Patil\({}^{3}\), Maria Anna Rapsomaniki\({}^{2}\), Pushpak Pati\({}^{2}\)
\({}^{1}\)IBM Quantum, IBM Research India, \({}^{2}\)IBM Research Zurich, \({}^{3}\)Indian Institute of Science Education and Research Tirupati, India
**Keywords:** Quantum Machine Learning, Quantum Neural Networks, hierarchical Graph Neural Networks, spatial tissue modeling, histopathological image classification
## 1 Introduction
Understanding how tumor cells self-organize and interact within the tumor microenvironment (TME) is a long-standing question in cancer biology, with the potential to lead to more informed patient stratification and precise treatment suggestions. From Hematoxylin & Eosin (H&E) staining to multiplexed imaging and spatial omics, a plethora of technologies are used to interrogate the spatial heterogeneity of tumors [6]. For example, H&E histopathology images have long been used to train Convolutional Neural Networks (CNNs) in a patch-wise manner for a variety of tasks [1, 10]. More recently, geometric deep learning and in particular Graph Neural Networks (GNNs) have found promising applications in histopathology [5, 12]. Indeed, a graph representation is a natural modeling choice for the TME, as it is a flexible data structure to comprehensively encode the tissue composition in terms of biologically meaningful entities, such as cells, tissues, and their interactions. In a typical cell-graph representation, cells represent nodes, edges represent cell-to-cell interactions, and cell-specific information can be included as node feature vectors. As a result, GNNs can elegantly integrate cellular information with tumor morphology, topology, and interactions among cells and/or tissue structures [7]. Yet, the complexity of tumor graphs and the entangled cell neighborhoods lead to sub-optimal embedding spaces of GNNs, which in turn struggle with learning clinically meaningful patterns from the data. At the same time, searching for relatively small query subgraphs over large, complex graphs is \(NP\)-hard. Although GNNs are currently used as state-of-the-art networks for learning such problems from images, two severe limitations of GNNs are over-smoothing [2] and over-squashing [15]. Over-smoothing refers to the indistinguishable representations of nodes in different classes, and over-squashing refers to the inefficient message passing over long chains of nodes in a graph. These challenges in classical GNNs provide opportunities for quantum algorithms. The main impact expected from quantum computing is the possibility of extending the embedding space by mapping data to the exponentially large qubit Hilbert space, which can potentially help in capturing hidden spatio-temporal correlations at the cellular and tissue level.
In this paper we create a hybrid classical-quantum network which combines a GNN with a Variational Quantum Classifier (VQC). We train this network with two approaches: (i) a serial approach, i.e., by first training the classical model and then the quantum model after the classical model has converged, and (ii) an end-to-end approach, by back-propagating the loss from the quantum neural network to all the layers of the classical neural network. In the first approach, we pretrain a classical graph neural network on the tissue graphs and then use the learnt representation from the GNN as input to a VQC. Since we take the output of the final layer of the classical GNN, we can map it to different dimensions via a linear layer. We performed ablation studies with 10-, 64-, 256-, 512- and 1024-dimensional learned GNN embeddings. For
the 10-dimensional GNN output, wherein the learnt embedding has been compressed classically, we use second-order Pauli encoding (ZZ encoding), which needs as many qubits as the number of dimensions (thus 10-qubit circuits). For all other embedding dimensions, we use amplitude encoding to fit all the information in a number of qubits logarithmic in the embedding dimension (thus \(\log(n)\) qubits for an \(n\)-dimensional GNN output). A key observation of this paper is that although amplitude encoding compresses the number of qubits significantly, it does not lead to information loss, allowing the quantum model to come close to the state-of-the-art classical model. However, the quantum models with ZZ encoding are unable to learn much due to the lossy compression in the classical network. In the second, end-to-end approach, we experiment with 10-dimensional data with ZZ encoding. We observe that end-to-end training of GNN+VQC not only significantly improves over serial training, but even slightly outperforms the classical GNN with a 10-dimensional final layer.
## 2 Related Work and Background
### Quantum Computing and Quantum Machine Learning
Quantum computing is a model of computation that performs calculations based on the laws of quantum mechanics. Here, the fundamental building blocks are qubits and gates. A single qubit \(\left|\psi\right\rangle\) can be mathematically expressed as a unit vector in a 2-dimensional Hilbert space as \(\left|\psi\right\rangle=\alpha\left|0\right\rangle+\beta\left|1\right\rangle\), where \(\left|\alpha\right|^{2}+\left|\beta\right|^{2}=1\). Here \(\left|0\right\rangle\) and \(\left|1\right\rangle\) are the orthonormal basis states corresponding to classical bits 0 and 1. Similarly, an \(n\)-qubit state can be expressed as a unit vector in a \(2^{n}\)-dimensional space, \(\left|\psi\right\rangle=\sum_{x\in\left\{0,1\right\}^{n}}\alpha_{x}\left|x\right\rangle\). A measurement of an \(n\)-qubit state yields one of the classical bit strings \(x\) with probability \(\left|\alpha_{x}\right|^{2}\). A quantum circuit starts from an initial state \(\left|0^{n}\right\rangle\) and performs a sequence of single- and two-qubit operations such as H, S, T, X, Y, Z, CNOT to yield a final state \(\left|\psi\right\rangle\). The above gate set also includes parameterized gates, such as \(R_{x}(\theta),R_{y}(\theta)\) and \(R_{z}(\theta)\). The produced final state can be measured to yield an output from the desired distribution corresponding to the problem ([11]).
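As a small illustration of the formalism above, the amplitudes of a state and the induced measurement distribution can be handled directly with numpy; the chosen amplitudes are arbitrary.

```
import numpy as np

# |psi> = alpha|0> + beta|1>, with |alpha|^2 + |beta|^2 = 1
alpha, beta = 1 / np.sqrt(2), 1j / np.sqrt(2)
psi = np.array([alpha, beta])

# measuring yields bit string x with probability |amplitude_x|^2
probs = np.abs(psi) ** 2            # -> [0.5, 0.5]
outcome = np.random.choice([0, 1], p=probs)
```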
Quantum circuits can be parameterized by learnable parameters and trained to optimize a given objective function. In the context of machine learning, these are known as Variational Quantum Classifiers or VQC [8, 3], which define the objective function based on the cross-entropy loss between the sampled distribution and the ground-truth data for classification. Here the state is produced by first applying a unitary parameterized by the input (the feature map) to the initial state, followed by a unitary parameterized with trainable weights. Overall, we have the state \(\left|\psi(x,\theta)\right\rangle=V_{\theta}U_{\phi(x)}\left|0\right\rangle\). Some common feature maps include, for example, the Pauli feature map [4] and amplitude encoding [14]. The Pauli feature map maps an input \(x\) to a quantum state \(U_{\phi(x)}\left|0^{n}\right\rangle\), where \(U_{\phi(x)}=\exp(i\sum_{S\in\mathcal{I}}\phi_{S}(x)\prod_{i\in S}P_{i})\). Here, \(\mathcal{I}\) is a collection of Pauli strings and \(S\) runs over the set of indices corresponding to qubits where Paulis are applied, with \(\phi_{S}(x)=\left\{\begin{array}{ll}x_{i}&\text{if }S=\{i\}\\ \prod_{j\in S}(\pi-x_{j})&\text{if }\left|S\right|>1\end{array}\right.\).
A special case of the same is given by the ZZ feature map. Multiple repetitions of Pauli and ZZ feature maps can be stacked as well. Another common feature map is amplitude encoding, which encodes a vector \(x\in\mathbb{R}^{n}\) as \(\sum_{i}\frac{x_{i}}{\left\|x\right\|}\left|i\right\rangle\). This takes \(\log(n)\) qubits, whereas ZZ encoding requires \(n\) qubits. One can measure an observable \(O\) on the state to obtain samples from the model distribution \(p(y|x;\theta)=\left\langle\psi(x,\theta)|O|\psi(x,\theta)\right\rangle\). One can take the observable to be \(ZZ\cdots Z\), which corresponds to measuring the parity \(\in\left\{+1,-1\right\}\). The cost function can be optimized using classical routines, e.g., COBYLA, SPSA, Adam, NFT.
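The qubit counts above can be made concrete with a short Qiskit sketch; `ZZFeatureMap` and `initialize` are standard Qiskit components, while the 64-dimensional input is an illustrative assumption (for dimensions that are not powers of two, the vector would first be zero-padded).

```
from math import ceil, log2
import numpy as np
from qiskit import QuantumCircuit
from qiskit.circuit.library import ZZFeatureMap

x = np.random.rand(64)

# ZZ (second-order Pauli) encoding: one qubit per feature -> 64 qubits
zz_map = ZZFeatureMap(feature_dimension=len(x), reps=2)

# amplitude encoding: log2(n) qubits -> 6 qubits for a 64-dimensional vector
n_qubits = ceil(log2(len(x)))
amp = QuantumCircuit(n_qubits)
amp.initialize(x / np.linalg.norm(x), range(n_qubits))
```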
### Classical Neural Networks for Spatial Tissue Modeling
HACT-NET [12] is a state-of-the-art Graph Neural Network model for the hierarchical analysis of digital pathology tissue images. Typically the tissue images are of large dimensions, e.g., 5000 \(\times\) 5000 pixels at 40\(\times\) magnification (0.46 \(\mu\)m/pixel). Processing such images with a CNN while utilizing the complete TME context is infeasible due to the high computational overhead. Therefore, a graph representation is useful to encode the necessary TME information in terms of thousands of nodes and edges, and is much lighter than a pixel-based image representation. Building on this concept, HACT-NET constructs a hierarchical graph representation of a tissue by incorporating a low-level cell-graph, a high-level tissue-graph, and a cell-to-tissue hierarchy to comprehensively represent the tissue composition. Afterwards, the hierarchical GNN backbone of HACT-NET processes the graph representation in a two-step manner to produce a cell- and tissue-aware feature embedding. A Multi-Layer Perceptron (MLP) operates on this embedding to perform downstream tissue subtyping. In this work, we pre-train the HACT-NET model for various downstream tissue classification tasks and use the pre-trained model to extract tissue embeddings for subsequently training our VQC.
## 3 Methodology
In our approach, we define a hybrid classical-quantum graph neural network, an overview of which is shown in Figure 1.
Specifically, we use a HACT-NET [12] to produce embeddings as \(Embed(x;\theta_{G})=GNN(x;\theta_{G})\in\mathbb{R}^{d}\) corresponding to the input image \(x\). These embeddings are then passed as input to a VQC which applies a feature map followed by
an ansatz \(V_{\theta_{Q}}\) and produces samples from the distribution
\[p(y|x;\theta_{G},\theta_{Q})=\left\langle\psi(x;\theta_{G},\theta_{Q})|ZZ...ZZ| \psi(x;\theta_{G},\theta_{Q})\right\rangle \tag{1}\]
\[\text{where }\left|\psi(x;\theta_{G},\theta_{Q})\right\rangle=V_{\theta_{Q}}U_{\phi(Embed(x;\theta_{G}))}\left|0\right\rangle. \tag{2}\]
Here \(\theta_{G}\) and \(\theta_{Q}\) refer to GNN and VQC parameters respectively.
We follow two approaches for training our Hybrid Network: (i) with a pretrained GNN (having frozen weights), and (ii) with trainable GNN parameters. In the first approach, we first pretrain HACT-NET with a classical MLP layer and then use the learnt representation of the final layer as input to the quantum network as defined in Equations 1 and 2. Here, the parameters \(\theta_{G}\) are kept fixed after the initial pre-training stage. In the second approach, both sets of parameters are updated together. We discuss the details of second approach in section 3.3 and focus on the first approach in this section.
When trained separately, HACT-NET performed best with a 64-dimensional GNN output passed to the MLP before the final output. However, it is very difficult to get reliable results using 64 qubits on currently available quantum devices, which have both few and noisy qubits. Thus, we experimented with a range of dimensions and different encoding schemes to use different numbers of qubits on the same data. For the 10-dimensional GNN output, we used ZZ encoding with 2 layers of repetition [14]; here, the number of qubits used equals the dimension. We also trained with higher embedding dimensions from HACT-NET, such as 64, 256, 512 and 1024, with amplitude encoding.
Thus, we were able to encode a 64-dimensional input in 6 qubits. With this encoding, we were able to reach the state-of-the-art classification F1-score that the GNN achieved. Since classical neural networks have a large number of parameters, they are known to overfit at higher dimensions when data is scarce. Since data shortage is a known limitation of most tissue imaging datasets, a key research question here is **can quantum models outperform classical models at higher dimensions where classical models tend to overfit**. In order to study this, we experimented with 256-, 512- and 1024-dimensional learnt representations of the GNN, which were passed both to the classical MLP and to the quantum classifier to study the effects of high dimensions. Using amplitude encoding we were able to encode these in 8, 9, and 10 qubits, respectively.
### Dataset
For this work, we experimented on 3 binary classification tasks under the breast cancer sub-typing problem on the BReAst Cancer Subtyping (BRACS) dataset [12]. In BRACS, each image is of the order of 2048x1536 pixels and there are \(\approx\)2200 such images. We randomly split them into 1200 for training, 500 for validation and 500 for testing.
### Training details
In this subsection we explain the details of the VQC scheme. We apply parity post-processing after measurement (corresponding to measuring the observable \(ZZ...Z\) on the parameterized state produced) to get the desired output, which is passed through a cost function. We update the parameters of the ansatz to minimize the overall cost function, much like training the weights of a neural network. In the current implementation, the measurement results were interpreted based on the parity of the measurement outputs, where even parity is considered as label +1 and odd parity as -1. After obtaining labels from parity post-processing, the classical optimizer calculates the cost function and optimizes the parameters of the ansatz until the classical optimization iterations complete or the cost function converges. For inference, we use multiple shots and the most probable label is selected as the final label for each test sample. We trained our models with the Constrained Optimisation By Linear Approximation (COBYLA) [13] and Nakanishi-Fujii-Todo (NFT) [9] optimizers and discuss the best results across both. The maximum number of epochs was set to 100 with early stopping. All our experiments with different data sizes are run on a noiseless state-vector simulator provided by IBM Quantum.
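Putting the pieces of this subsection together, a hedged Qiskit sketch of the VQC with parity post-processing and a COBYLA optimizer could look as follows. The `RealAmplitudes` ansatz is our illustrative choice (the text does not name a specific ansatz), and import paths may differ slightly across Qiskit versions.

```
from qiskit import QuantumCircuit
from qiskit.circuit.library import ZZFeatureMap, RealAmplitudes
from qiskit_algorithms.optimizers import COBYLA
from qiskit_machine_learning.neural_networks import SamplerQNN
from qiskit_machine_learning.algorithms.classifiers import NeuralNetworkClassifier

n = 10                               # 10-dimensional GNN embeddings -> 10 qubits
feature_map = ZZFeatureMap(n, reps=2)
ansatz = RealAmplitudes(n, reps=2)   # trainable ansatz V_theta (illustrative choice)

qc = QuantumCircuit(n)
qc.compose(feature_map, inplace=True)
qc.compose(ansatz, inplace=True)

# parity of the measured bit string decides the class label
parity = lambda bitstring: bin(bitstring).count("1") % 2

qnn = SamplerQNN(circuit=qc,
                 input_params=feature_map.parameters,
                 weight_params=ansatz.parameters,
                 interpret=parity, output_shape=2)

clf = NeuralNetworkClassifier(neural_network=qnn,
                              optimizer=COBYLA(maxiter=100))
# clf.fit(embeddings, labels)  # embeddings: (N, 10) GNN outputs, binary labels
```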
### End-to-end training
For the end-to-end training, we train the GNN parameters \(\theta_{G}\) and the VQC parameters \(\theta_{Q}\) together using Qiskit's TorchConnector class. We trained this setup with 10-dimensional GNN embeddings using ZZ encoding for the VQC. Since the classical neural network trains using gradient-based backpropagation, we use the Adam optimizer for training both networks, with a learning rate of \(10^{-3}\) for the VQC parameters and \(10^{-6}\) for the GNN parameters. We found it useful to optimize the VQC parameters less frequently (once every 10 epochs) than the GNN parameters.
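A minimal sketch of this end-to-end setup, assuming a PyTorch-based GNN module `gnn`, a data `loader`, and the `qnn` from the previous sketch (all placeholders); the loss choice is illustrative.

```
import torch
from qiskit_machine_learning.connectors import TorchConnector

vqc = TorchConnector(qnn)  # wraps the quantum network as a torch.nn.Module

opt = torch.optim.Adam([
    {"params": gnn.parameters(), "lr": 1e-6},  # GNN parameters theta_G
    {"params": vqc.parameters(), "lr": 1e-3},  # VQC parameters theta_Q
])
loss_fn = torch.nn.CrossEntropyLoss()

for epoch in range(100):
    for graphs, labels in loader:
        opt.zero_grad()
        out = vqc(gnn(graphs))               # Equations 1-2: VQC on GNN embeddings
        loss_fn(out, labels).backward()      # gradients flow through both networks
        if epoch % 10 != 0:                  # update the VQC only every 10 epochs
            for p in vqc.parameters():
                p.grad = None
        opt.step()
```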
Figure 1: Implementation of hybrid GNN-VQC model
## 4 Results
In this section we present the results obtained by the hybrid quantum-classical model using different feature dimensions and embedding methods, and compare them to the state-of-the-art classical GNN. We also present detailed ablation studies to understand the impact of training data size on both the classical GNN and the proposed hybrid model. Figure 2 shows the performance of the classical GNN (in dark green) and the hybrid quantum model (in light green) on learnt embeddings of different dimensions. While at lower dimensions (10 and 64) the classical GNN is able to learn better than the quantum model, the quantum model is on par with the classical one at the higher dimensions of 256, 512 and 1024.
We further experiment in this direction to understand the difficulties in learning. While keeping the number of qubits constant, we change the encoding schemes to understand the impact of data compression. Figure 3 shows the impact of classical vs. quantum compression, obtained by varying the feature dimension and choosing encoding schemes accordingly to represent the features in quantum states. When we compress the data classically by reducing the number of output neurons to 10, 9 and 8 dimensions, we observe that although we use 10, 9 and 8 qubits respectively via ZZ encoding, the quantum model is unable to learn and struggles at a weighted F1-score of 50%. This is primarily due to the information loss that happens in the neural network during the classical compression. When the data is not classically compressed and we pass a feature representation of dimension 1024, 512 or 256 represented by the same 10, 9 and 8 qubits, the quantum model is on par with the state-of-the-art classical model. Here we use amplitude encoding, which encodes an \(n\)-dimensional vector in \(\log(n)\) qubits without losing any information, enabling the quantum model to learn better from the high-dimensional data.
Since classical deep learning networks are known to under-perform in low-data scenarios, we also studied the impact of the amount of training data on both the classical and quantum models. We perform a series of experiments wherein we use 0.1, 0.25, 0.5 and then the full data for training both models. As expected, for both models and across all dimensions, we observe that training with the full data leads to the best results on the held-out test data, and at reduced data sizes the comparative trend between the two models is the same as with full training.
We also show the test results (weighted precision, weighted recall and weighted F1-score) on end-to-end training, in comparison with classical GNN as well as separately trained GNN+VQC in Table 1. We show that end-to-end training significantly improves over separate training of VQC and GNN, and even slightly outperforms classical GNN.
## 5 Discussions and Future Work
Overall, in this work we present two ways to train hybrid quantum-classical neural networks. We show that end-to-end training is significantly better than serially training such models and demonstrate results on a real-world breast-cancer subtyping task. In detailed ablation studies we observe that quantum compression can significantly reduce qubit requirements without information loss, unlike lossy classical compression. Future directions could explore how other classical networks can be combined with quantum circuits to enhance their trainability and improve generalization.
\begin{table}
\begin{tabular}{c|c c c}
\hline \hline
Model & w-Precision & w-Recall & w-F1score \\
\hline
cGNN & 0.71 & 0.69 & 0.70 \\
cGNN+VQC & 0.58 & 0.57 & 0.57 \\
end-to-end GNN+VQC & 0.72 & 0.71 & 0.72 \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Comparison of the end-to-end trainable network against the classical GNN and the classical GNN+VQC trained separately. All experiments use 10-dimensional ZZ encoding with 10 qubits on the simulator.
Figure 3: Classical compression vs Quantum compression
Figure 2: Graph showing performance (weighted F1-score) of classical GNN and hybrid quantum-classical model on different feature dimensions |
2302.05889 | USER: Unsupervised Structural Entropy-based Robust Graph Neural Network | Unsupervised/self-supervised graph neural networks (GNN) are vulnerable to
inherent randomness in the input graph data which greatly affects the
performance of the model in downstream tasks. In this paper, we alleviate the
interference of graph randomness and learn appropriate representations of nodes
without label information. To this end, we propose USER, an unsupervised robust
version of graph neural networks that is based on structural entropy. We
analyze the property of intrinsic connectivity and define intrinsic
connectivity graph. We also identify the rank of the adjacency matrix as a
crucial factor in revealing a graph that provides the same embeddings as the
intrinsic connectivity graph. We then introduce structural entropy in the
objective function to capture such a graph. Extensive experiments conducted on
clustering and link prediction tasks under random-noises and meta-attack over
three datasets show USER outperforms benchmarks and is robust to heavier
randomness. | Yifei Wang, Yupan Wang, Zeyu Zhang, Song Yang, Kaiqi Zhao, Jiamou Liu | 2023-02-12T10:32:12Z | http://arxiv.org/abs/2302.05889v1 | # USER: Unsupervised Structural Entropy-based Robust Graph Neural Network
###### Abstract
Unsupervised/self-supervised graph neural networks (GNN) are vulnerable to inherent randomness in the input graph data which greatly affects the performance of the model in downstream tasks. In this paper, we alleviate the interference of graph randomness and learn appropriate representations of nodes without label information. To this end, we propose USER, an unsupervised robust version of graph neural networks that is based on structural entropy. We analyze the property of intrinsic connectivity and define intrinsic connectivity graph. We also identify the rank of the adjacency matrix as a crucial factor in revealing a graph that provides the same embeddings as the intrinsic connectivity graph. We then introduce structural entropy in the objective function to capture such a graph. Extensive experiments conducted on clustering and link prediction tasks under random-noises and meta-attack over three datasets show USER outperforms benchmarks and is robust to heavier randomness. 1
Footnote 1: Full proof, experimental details, and code of our work is available at [https://github.com/wangyifeibeijing/USER](https://github.com/wangyifeibeijing/USER).
## 1 Introduction
Neural-based methods for processing complex graph data have become indispensable to a wide range of application areas from social media mining, recommender systems, to biological data analysis and traffic prediction. _Graph representation learning_ (GRL) plays a central role in these methods, providing vectorized graph encodings which are crucial for downstream tasks such as community detection, link prediction, node classification, and network visualization [14]. Among the many GRL methods that emerged in recent years, _graph neural network_ (GNN) [15, 16, 17, 18] provides a powerful paradigm that extracts graph encodings through a recursive aggregation scheme [16]. The aggregation scheme learns a node's embedding using both the feature of the node itself and aggregated feature of its neighbours, thereby capturing structural information of the graph. The advantage of GNN-based models has been attested by outstanding performance on many tasks [16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32].
Despite the successes above, the performance of a GNN-based model hinges on reliable input graph data [15, 16, 17]. More specifically, small perturbations on the input graph may result in drastically different encodings after the recursive aggregation scheme (see Figure 1). However, the randomness of the input graph is inevitable. As [17] discussed, the edges in the graph are formed randomly, following an underlying intrinsic connectivity distribution of nodes. [16] asserts that such a distribution is highly inhomogeneous, with edges concentrated within some conglomerates of nodes, resulting in a community structure. Each conglomerate is called a community: communities have dense inner-connections but sparse inter-connections. The community structure of a network can be interpreted in a probabilistic way: a fixed but unknown edge (probability) distribution between any pair of nodes determines a community structure, yet we only observe a _sample_ from this distribution. This inherent randomness triggers the perturbations in the input graph that interfere with the recursive aggregation scheme.
It is therefore desirable to develop a model that captures the so-called "_intrinsic connectivity graph_", a graph that reflects the intrinsic connectivity in the dataset. A straightforward approach is to unravel the intrinsic connectivity graph from node labels that indicate the communities. However, ground truth labels are often not available in real-world applications. Our task is thus to develop an _unsupervised_ approach for unsupervised/self-supervised GNN. For this task, one needs to address the following three challenges: **(a)** The first challenge demands an operational criterion for alleviating the interference of graph randomness. Since ground truth labels are not present, we need a new criterion for finding a graph that mitigates the interference of graph randomness. **(b)** Given such a criterion in (a), the second challenge concerns how a graph that meets it can be learnt. More precisely, this challenge seeks an objective function that guides the search for a new graph that satisfies the criterion.
**(c)** The third challenge seeks a framework to generate the new graph. Common GNN models generate graphs from the node embedding results. However, [14] assert that graphs generated from interfered node embeddings are unreliable. A desirable solution would be to learn the embeddings and the new graph simultaneously.
For **(a)**, we will show in **Section 3** that there exist multiple "innocuous graphs" for which GNN may produce the same embeddings as the desired intrinsic connectivity graph. We call these graphs _GNN-equivalent_. Thus any one of such innocuous graphs can help GNN mitigate the interference of randomness. As the number of groups in the intrinsic connectivity graph (the number of communities \(c\)) is known for many datasets, we justify two assertions about the innocuous graphs. First, the rank of an innocuous graph's adjacency matrix is no less than the number of groups in the intrinsic connectivity graph. Second, if we partition an innocuous graph into groups with high concentrations of edges inside groups and low concentrations between them, the features of two nodes in the same group should be relatively similar. These assertions direct our pursuit of innocuous graphs.
For **(b)**, to reflect the assertions above, we develop a tool to learn a graph that satisfies these conditions. In **Section 4**, we invoke structural information theory [10, 10, 11]. Through a class of _structural entropy_ measures, structural information theory has recently emerged to capture the intrinsic information contained within a graph structure and has been increasingly applied to graph learning. Here, we connect the notion of network partition structural information (NPSI), a type of structural entropy, with the rank of the adjacency matrix and show that minimizing structural entropy facilitates the search for an innocuous graph.
For **(c)**, we combine the tools developed above and design a novel framework, called **U**nsupervised **S**tructural **E**ntropy-based **R**obust graph neural network (USER), to support GNN with a trainable matrix to learn the adjacency matrix of an innocuous graph. See **Section 5**. Our method makes it possible to learn the innocuous graph structure and the node embeddings simultaneously. As the embeddings are derived from the innocuous graph, rather than the input graph, they are tolerant to randomness.
We developed a series of experiments to validate our USER framework. These experimental results show that, on both clustering and link prediction tasks, even the traditional GAE model surpasses state-of-the-art baselines with the support of USER. We then inject randomness into the graph. With \(50\%\) random noise, the accuracy of USER improves over SOTA by up to \(14.14\%\) for clustering on the well-known Cora dataset, while the improvement for link prediction reaches \(13.12\%\) on the Wiki dataset. Moreover, USER exhibits a greater advantage in the presence of adversarial attacks. Facing a \(20\%\) meta-attack [22], USER's improvement over SOTA reaches up to \(190.01\%\) for clustering on Citeseer. Our contributions can be summarized as follows:
* We are the first to introduce the notions of GNN-equivalence and innocuous graphs, and we utilize them to mitigate the interference of graph randomness;
* We propose a structural entropy-based objective function that is suitable for learning the innocuous graph;
* We conduct extensive experiments, which show that USER performs effectively when confronting randomness.
## 2 Related Work
**GRL and GNN.** Graph representation learning (GRL) generates vectorized encodings from graph data. Nowadays GRL is instrumental in many tasks that involve the analysis of graphs [1]. As a mainstream GRL paradigm, graph neural network (GNN) captures a node's structural information and node features by recursively aggregating its neighborhood information using an _information aggregation scheme_. Based on this idea, Graph Autoencoder (GAE) and Variational GAE (VGAE) [15] were developed, using GCN [15] as an encoder to learn node embeddings and an inner-product decoder to reconstruct the graph structure. As a variant of GAE, ARGA [13] trains an adversarial network to learn more robust node embeddings. To alleviate the high-frequency noise in the node features, Adaptive Graph Encoder (AGE) [16] utilizes a Laplacian smoothing filter to preprocess the node features. Different from reconstructing the graph structure, maximizing the _mutual information_ (MI) [12] is another well-studied approach for GRL. For example, the model DGI [17] employs a GNN to learn node embeddings and a graph-level embedding, and then maximizes the MI between them to improve the representations' quality. The model GIC [18] follows this idea and seeks to additionally capture community-level information of the graph structure.
Despite GNN's outstanding performance, studies have demonstrated that small perturbation on the input graph can fool the GNN [10, 11, 12]. However, the GNN-based approach is not only able to learn node embeddings, but also to learn node embeddings.
Kokel 2021; Wu et al. 2019b). These perturbations are inevitable, especially for unsupervised models. New learning paradigms have been proposed to alleviate the influence of such perturbations. The model Cross-graph (Wang et al. 2020) maintains two autoencoders. Each encoder learns node embeddings and reconstructs the adjacency matrix, which is passed to the peer autoencoder as the input for the next iteration. _Graph contrastive learning (G-CL)_ models such as GCA (Zhu et al. 2021b) also improve the robustness of GNN. These methods construct data augmentations and negative pairs by modifying the input graph structure. However, none of these works explain how these perturbations were formed. In this paper, inspired by (Wang et al. 2019b; Fortunato 2010; Zhang et al. 2019; Wu et al. 2019a; Zhu and Koniusz 2020), we introduce the notion of innocuous graph to learn the same embeddings as those corresponding to the intrinsic connectivity graph, which helps GNN models mitigate the impact of randomness.
Figure 1: Modifying an edge causes more than half of the nodes' embeddings to change after two rounds of aggregation
**Structural entropy.** Structural entropy is a major tool utilized in our paper. An entropy measure has long been sought in computer science to analyze the intrinsic information embodied in structures (Brooks Jr 2003). Several classical entropy measures have been designed for this purpose (Dehmer 2008; Anand and Bianconi 2009). In particular, the model infomap (Rosvall, Axelsson, and Bergstrom 2009) analyzes graphs with a form of entropy defined on random walks. In (Li and Pan 2016; Li et al. 2016), the authors re-invented structural information theory and proposed a hierarchy of _structural entropy_ measures to analyze networks. This notion has been applied in several works, e.g., (Liu et al. 2019, 2022; Chen and Liu 2019), to adversarial graph learning. However, to date, no study has attempted to integrate structural entropy into GNNs to enhance their resilience to randomness in graph data.
## 3 Criteria to Mitigate Randomness
We use \(\vec{x},\vec{y},\ldots\) to denote vectors, where \(x_{i}\) denotes the \(i\)th entry of \(\vec{x}\). We use capital letters \(X,Y,A,\ldots\) to denote real-valued matrices. For any matrix \(M\), \(M_{i}\) denotes the \(i\)th row vector and \(M_{ij}\) denotes the \((i,j)\)th entry of \(M\). In this paper, we focus on undirected, unweighted graphs where every node is associated with a \(d\)-dimensional feature vector. Formally, such a graph can be denoted by \(\mathcal{G}=(\mathcal{V},\mathcal{E},X)\) where \(\mathcal{V}\) is a set of \(n\) nodes \(\{v_{1},\ldots,v_{n}\}\), \(\mathcal{E}\) is a set of edges \(\{v_{i},v_{j}\}\), and \(X\in\mathcal{M}_{n,d}(\mathbb{R})\) denotes the _feature matrix_ where \(X_{i}\) is the feature vector of node \(v_{i}\). The pair \((\mathcal{V},\mathcal{E})\) is represented by an _adjacency matrix_ \(A\in\mathcal{M}_{n}(\{0,1\})\), where \(A_{ij}=1\) if \(\{v_{i},v_{j}\}\in\mathcal{E}\). Here we assume the graph does not contain any isolated node; indeed, most studies on GNN omit isolated nodes before training (Kipf and Welling 2016; Mavromatis and Karypis 2021). Finally, we use \(\mathcal{C}_{0},\mathcal{C}_{1},\ldots\mathcal{C}_{c-1}\) to denote \(c\) sets of nodes. If they satisfy \(k\neq m\Rightarrow\mathcal{C}_{k}\cap\mathcal{C}_{m}=\varnothing\) and \(\forall k<c\colon\mathcal{C}_{k}\neq\varnothing\), we call them partitions.
Taking a graph \(\mathcal{G}\) as input, a _graph neural network (GNN)_ can be denoted by the function
\[\mathsf{GNN}\left(A,X,\left\{W^{(\ell)}\right\}\right)=H^{(t)} \tag{1}\]
where \(H^{(0)}=X\) and
\[H^{(\ell)}=\sigma(\text{agg}(AH^{(\ell-1)}W^{(\ell)}))\text{ for all }\ell\in(0,t],\]
\(H^{(\ell)}\in\mathbb{R}^{n\times d^{(\ell)}}\), \(W^{(\ell)}\in\mathbb{R}^{d^{(\ell-1)}\times d^{(\ell)}}\), and \(d^{(0)}=d\). Here \(H^{(\ell)}\) is the matrix learned by the \(\ell\)th information aggregation layer, with \(H^{(0)}=X\) taking the original features as the input to the first layer; \(\sigma(\cdot)\) is the activation function; \(\text{agg}(\cdot)\) is the aggregation; and \(W^{(\ell)}\) contains learnable parameters. GNNs with non-injective \(\sigma(\cdot)\) and \(\text{agg}(\cdot)\) are inefficient at learning graph structures (Xu et al. 2019), so we only discuss GNNs with injective \(\sigma(\cdot)\) and \(\text{agg}(\cdot)\) functions. By (1), in GNN models, the vector representation of a node is computed using not only its own features but also the features of its neighbors, accumulated recursively.
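To make the recursion concrete, here is a minimal NumPy sketch of the forward pass in Eq. (1); the sum aggregator, the ReLU activation, and all names are our illustrative choices, not part of the original formulation:

```
import numpy as np

def gnn_forward(A, X, weights):
    # H^(0) = X; each layer computes H^(l) = sigma(agg(A H^(l-1) W^(l)))
    H = X
    for W in weights:
        H = np.maximum(A @ H @ W, 0.0)  # neighborhood sum, then ReLU
    return H

# toy usage: 4 nodes on a path graph, 3-dim features, two layers
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.random.rand(4, 3)
weights = [np.random.rand(3, 8), np.random.rand(8, 2)]
H_t = gnn_forward(A, X, weights)  # shape (4, 2)
```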
As mentioned above, a real-world input graph dataset is inherently random and unstable (Jin et al. 2020b). On the other hand, such datasets reflect a certain hidden but stable underlying _intrinsic connectivity_ distribution (Wang et al. 2019b; Fortunato 2010). (Fortunato 2010) asserts that for a dataset which can be naturally separated into, say, \(c\) partitions (or _classes_ in a node classification task), the intrinsic connectivity satisfies that nodes in the same partition are more likely to be connected than nodes in different partitions. We capture this intrinsic connectivity with the next definition.
**Definition 3.1** (Intrinsic connectivity graph): _For a dataset that contains \(c\) partitions, suppose \(\mathcal{G}_{I}=(\mathcal{V},\mathcal{E}_{I})\) satisfies: For any two nodes \(v_{i}\) and \(v_{j}\), there exists an edge \((v_{i},v_{j})\in\mathcal{E}_{I}\) iff \(v_{i}\) and \(v_{j}\) belong to the same partition. We call \(\mathcal{G}_{I}\) the intrinsic connectivity graph._
Let \(Rank(M)\) denote the rank of a matrix \(M\).
**Theorem 3.1** (Rank of \(\mathcal{G}_{I}\)'s adjacency matrix \(A_{I}\)): _For a dataset that contains \(c\) partitions, we have \(Rank(A_{I})=c\) where \(A_{I}\) is \(\mathcal{G}_{I}\)'s adjacency matrix._
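Theorem 3.1 can be checked numerically: reading Definition 3.1 as including self-loops (every node trivially shares its own partition), \(A_{I}\) is block-diagonal with one all-ones block per partition. A quick NumPy sketch, with arbitrary partition sizes of our choosing:

```
import numpy as np
from scipy.linalg import block_diag

# c = 3 partitions of sizes 3, 4, 5: one all-ones block per partition
A_I = block_diag(*[np.ones((s, s)) for s in (3, 4, 5)])
print(np.linalg.matrix_rank(A_I))  # prints 3 = c, matching Theorem 3.1
```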
Our aim is to extract a new graph from a real-world dataset to mitigate the interference of graph randomness. Without ground-truth labels, finding the intrinsic connectivity graph itself is impractical. However, we observe that a GNN may learn the same embeddings from different input graphs:
**Definition 3.2** (GNN-equivalent): _Let \(\mathcal{G}_{0}=(\mathcal{V},\mathcal{E}_{0},X)\) and \(\mathcal{G}_{1}=(\mathcal{V},\mathcal{E}_{1},X)\) be two graphs with the same set of nodes and adjacency matrices \(A_{0}\) and \(A_{1}\), respectively. Suppose we run a GNN on each of these two graphs, and the following holds: for any feature matrix \(X\), in each layer \(\ell\), and for any \(W_{0}^{(\ell)}\), there exist weights \(W_{1}^{(\ell)}\) such that:_
\[\sigma(\text{agg}(A_{0}H^{(\ell-1)}W_{0}^{(\ell)}))=\sigma(\text{agg}(A_{1}H^{(\ell-1)}W_{1}^{(\ell)})).\]
_Then we call \(\mathcal{G}_{0}\) and \(\mathcal{G}_{1}\) GNN-equivalent._
By Def. 3.2, when \(\mathcal{G}_{1}\) is GNN-equivalent to \(\mathcal{G}_{0}\), a GNN with \(\mathcal{G}_{1}\) as input may learn the same embeddings as if \(\mathcal{G}_{0}\) were the input. Thus, using graphs GNN-equivalent to the intrinsic connectivity graph \(\mathcal{G}_{I}\) makes it possible for the GNN to learn the same embeddings as if \(\mathcal{G}_{I}\) were the input. We call such a graph _innocuous_.
**Definition 3.3** (innocuous graph): _Suppose \(\mathcal{G}_{I}\) is the intrinsic connectivity graph for a dataset. An innocuous graph \(\mathcal{G}^{\prime}\) is one that is GNN-equivalent to \(\mathcal{G}_{I}\)._
To search for such graphs, we introduce the necessary condition for being GNN-equivalent to a specific graph:
**Theorem 3.2** (necessary condition of GNN-equivalence): \(\mathcal{G}_{1}\) _is GNN-equivalent to \(\mathcal{G}_{0}\) only if \(\ Rank(A_{1})\geq Rank(A_{0})\)._
**Corollary 3.1** (necessary condition of innocuous graph): \(\mathcal{G}^{\prime}\) _is an innocuous graph only if \(Rank(A^{\prime})\geq Rank(A_{I})\)._
By Theorem 3.1 and Corollary 3.1, adjacency matrix \(A^{\prime}\) of innocuous graph \(\mathcal{G}^{\prime}\) satisfies \(Rank(A^{\prime})\geq c\).
Aside from the property above, we further remark on another commonly-used assumption [22, 23]: _In a graph over which a GNN may extract semantically-useful node embeddings, adjacent nodes are more likely to share similar features than non-adjacent nodes._ This formulation, however, only considers information aggregation of a GNN along a single edge. We now extend feature smoothness to the group level. Let \(f(\vec{x},\vec{y})\) be a function that evaluates the similarity between learned node embeddings, i.e., higher similarity between two embedding vectors \(\vec{x}\) and \(\vec{y}\) leads to a smaller \(f(\vec{x},\vec{y})\). We formulate the group-level feature smoothness of an innocuous graph as follows:
**Assumption 3.1** (group-level feature smoothness): _Suppose \(k\neq m\). Then for any three nodes \(v_{a},v_{b},v_{c}\) that satisfy \(v_{a}\in\mathcal{C}_{k}\), \(v_{b}\in\mathcal{C}_{k}\) and \(v_{c}\in\mathcal{C}_{m}\), we have \(f(X_{a},X_{b})\leq f(X_{a},X_{c})\)._
In the next section, we formulate an overall criterion for finding an innocuous graph, which incorporates a necessary condition (Corollary 3.1) and an auxiliary assumption (Assumption 3.1).
## 4 Structural Entropy-based Loss
As discussed above, our model needs to learn a graph that satisfies the necessary conditions (Corollary 3.1 and Assumption 3.1) for obtaining an innocuous graph. In this section, we interpret these conditions in the language of structural information theory and formulate an optimization problem. Following recent progress on structural information theory [11], we invoke the notion of _network partition structural information (NPSI)_, which has not been used in GNN models before.
To explain NPSI, we first introduce the following notation: \(P(\mathcal{G})=\{\mathcal{C}_{0},\mathcal{C}_{1},\ldots\mathcal{C}_{r-1}\}\) is a _partition_ of \(\mathcal{G}\). Then \(P(\mathcal{G})\) can be denoted by a matrix \(Y\in\{0,1\}^{n\times r}\), where \(Y_{ik}=1\) if \(v_{i}\in\mathcal{C}_{k}\) and \(Y_{ik}=0\) otherwise. We call \(Y\) the _indicator matrix_. Since \(\mathcal{C}_{k}\neq\varnothing\), \((Y^{T}Y)_{kk}>0\), and since \(\forall k\neq m,\mathcal{C}_{k}\cap\mathcal{C}_{m}=\varnothing\), if \(k\neq m\), then \((Y^{T}Y)_{km}=0\).
For a graph \(\mathcal{G}\) and partition \(P(\mathcal{G})\), let \(vol_{k}\) be the number of edges with at least one node in \(\mathcal{C}_{k}\) and \(g_{k}\) be the number of edges with only one node in \(\mathcal{C}_{k}\). Then by [11], NPSI is:
\[NPSI_{GP(\mathcal{G})}=\sum_{k<r}\left(\frac{vol_{k}-g_{k}}{2|\mathcal{E}|} \log_{2}\frac{vol_{k}}{2|\mathcal{E}|}\right) \tag{2}\]
To utilize it in GNN models, we define a matrix form of NPSI. Note that \(vol_{k}-g_{k}\) is the number of edges with both nodes in \(\mathcal{C}_{k}\), which equals the \(k\)th diagonal element of \(Y^{T}AY\), while the \(k\)th column sum of \(AY\) equals \(vol_{k}\) and can be computed as the \(k\)th diagonal element of \(\{1\}^{r\times n}AY\). Then, letting \(trace(\cdot)\) denote the trace of the input matrix,
\[\begin{split} NPSI(A,Y)&=NPSI_{GP(\mathcal{G})}\\ &=\sum_{k<r}\left(\frac{vol_{k}-g_{k}}{2|\mathcal{E}|}\log_{2}\frac{vol_{k}}{2|\mathcal{E}|}\right)\\ &=\sum_{k<r}\left(\frac{(Y^{T}AY)_{kk}}{2sum(A)}\times\log_{2}\left(\frac{(\{1\}^{r\times n}AY)_{kk}}{2sum(A)}\right)\right)\\ &=\mathrm{trace}\left(\frac{Y^{T}AY}{2sum(A)}\otimes\log_{2}\left(\frac{\{1\}^{r\times n}AY}{2sum(A)}\right)\right)\end{split}\]
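For concreteness, the following NumPy sketch is a direct transcription of the trace form above (it evaluates only the diagonal terms that the trace needs; the function name and this realization are ours):

```
import numpy as np

def npsi(A, Y):
    # A: (n, n) adjacency matrix; Y: (n, r) 0/1 partition indicator matrix
    r, n = Y.shape[1], A.shape[0]
    denom = 2 * A.sum()  # the 2 * sum(A) normalizer in the formula
    left = np.diag(Y.T @ A @ Y) / denom                        # (vol_k - g_k) terms
    right = np.log2(np.diag(np.ones((r, n)) @ A @ Y) / denom)  # vol_k terms
    return float(np.sum(left * right))
```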
With the definition above, \(NPSI(A,Y)\) can be incorporated into a GNN. NPSI was designed to learn \(Y\) on a fixed \(\mathcal{G}\) [11]. However, if we fix a \(Y\in\{0,1\}^{n\times r}\) which satisfies \((Y^{T}Y)_{kk}>0\) and \((Y^{T}Y)_{km}=0\) for \(k\neq m\), we can learn a graph \(\mathcal{G}^{\prime}\) with corresponding adjacency matrix \(A^{\prime}\) satisfying \(Rank(A^{\prime})\geq r\):
**Theorem 4.1** (minimize \(NPSI\) with learnable \(A^{\prime}\)): \[\begin{split}\text{Suppose }A^{\prime}=&\arg\min_{A^{\prime}}\left(NPSI(A^{\prime},Y)\right),\\ &\text{s.t. }A^{\prime}_{ij}\geq 0\text{ and }A^{\prime}={A^{\prime}}^{T},\end{split}\tag{3}\]
_then \(A^{\prime}\) satisfies \(Rank(A^{\prime})\geq r\)._
Figure 2: The USER framework. To mitigate randomness interference in the observed graph, an innocuous graph is constructed. We optimize the structural entropy-based loss \(\mathcal{L}_{N}\) to learn the innocuous graph.
Therefore, based on NPSI, if we set \(r=c\), we can construct an objective function to learn an adjacency \(A^{\prime}\) which satisfies the necessary condition of Corollary 3.1. Besides this, [10] shows that by minimizing NPSI on a fixed \(\mathcal{G}^{\prime}\), we can divide the graph into partitions with high inner-connectivity and sparse inter-connectivity. Specifically, when the input \(\mathcal{G}^{\prime}\) is fixed, we can obtain such a partition by optimizing:
\[\mathcal{C}_{k}= \{v_{i}|Y_{ik}\neq 0\}\text{ where,}\] \[Y= \arg\min_{Y}\left(NPSI(A^{\prime},Y)\right)\] \[s.t. Y\in\{0,1\}^{n\times r},\;(Y^{T}Y)_{km}\begin{cases}>0&\text{if }k=m \text{,}\\ =0&\text{otherwise.}\end{cases}\]
With the partition indicator \(Y\), we utilize the well-known Davies-Bouldin index (DBI) to analyze the similarity of node features inside the same group [10]:
\[\begin{split} DBI(X,Y)&=\frac{1}{r}\sum_{k<r}DI_{k}\\ \text{where: }DI_{k}&=\max_{m\neq k}(R_{km}),\quad R_{km}=\frac{S_{k}+S_{m}}{M_{km}},\\ S_{k}&=\Big(\frac{1}{|\mathcal{C}_{k}|}\sum_{v_{i}\in\mathcal{C}_{k}}|X_{i}-\overline{X}_{k}|^{2}\Big)^{\frac{1}{2}},\\ M_{km}&=\big(|\overline{X}_{k}-\overline{X}_{m}|^{2}\big)^{\frac{1}{2}},\quad\overline{X}_{k}=\frac{\sum_{v_{i}\in\mathcal{C}_{k}}X_{i}}{|\mathcal{C}_{k}|}.\end{split} \tag{4}\]
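A NumPy transcription of Eq. (4) follows; for readability it assumes hard cluster assignments (an integer label per node), and all names are ours:

```
import numpy as np

def dbi(X, labels, r):
    # X: (n, d) features; labels: (n,) integer assignments in [0, r)
    means = np.stack([X[labels == k].mean(axis=0) for k in range(r)])
    S = np.array([np.sqrt(((X[labels == k] - means[k]) ** 2)
                          .sum(axis=1).mean()) for k in range(r)])
    DI = [max((S[k] + S[m]) / np.linalg.norm(means[k] - means[m])
              for m in range(r) if m != k) for k in range(r)]
    return sum(DI) / r  # average of the worst-case cluster similarities
```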
An adjacency matrix \(A^{\prime}\) satisfying Assumption 3.1 would make \(DBI(X,Y)\) small. Therefore, based on NPSI, we construct an objective function to learn an adjacency \(A^{\prime}\) which satisfies the necessary conditions (Corollary 3.1 and Assumption 3.1) simultaneously. Let \(\beta\) be a hyper-parameter. The objective function is:
\[\begin{split}&\mathcal{L}_{N}=NPSI(A^{\prime},Y)+\beta DBI(X,Y)\\ &\text{s.t. }A^{\prime}_{ij}\geq 0,\;A^{\prime}={A^{\prime}}^{T},\\ &Y\in\{0,1\}^{n\times c},\;(Y^{T}Y)_{km}\begin{cases}>0&\text{if }k=m,\\ =0&\text{otherwise.}\end{cases}\end{split} \tag{5}\]
Our overall criterion for finding an innocuous graph is thus formulated as an optimization problem of minimizing \(\mathcal{L}_{N}\) in (5), where \(Y\) and \(A^{\prime}\) are the elements to be optimized.
## 5 Unsupervised Structural Entropy-based Robust Graph Neural Network
In this section, we propose a new framework that enables GNN models to learn embeddings and an innocuous graph simultaneously. This framework accomplishes the robust learning task by optimizing the loss in (5). Here we take the classical GAE [11] as the supported GNN model. We introduce the framework from two aspects: structure and optimization.
**Structure.** Let \(A\) denote the adjacency matrix of the original input graph. To remove the effect of randomness, we construct an innocuous graph and use it as the input of the supported model instead of the original graph. We thus construct a learnable matrix \(A^{\prime}\in\mathbb{R}^{n\times n}\) and use it as the input of the supported GNN model:
\[H=\mathsf{GNN}\left(A^{\prime},X,\left\{W^{(1)},W^{(2)}\right\}\right) \tag{6}\]
\(H\) is the learned node embedding matrix. Besides the node embeddings, we add a softmax layer with a learnable parameter matrix \(W^{Y}\in\mathbb{R}^{d^{(2)}\times c}\) to obtain the group indicator matrix \(Y\):
\[Y=softmax(HW^{Y}) \tag{7}\]
**Optimization.** Let \(\mathcal{L}_{S}\) be the loss function of the supported model, e.g., for GAE:
\[\mathcal{L}_{S}= ||\hat{A}-A||_{F}^{2}, \tag{8}\]
where \(\hat{A}\) is reconstructed from the learned node embeddings by \(\hat{A}=sigmoid(HH^{T})\). Besides \(\mathcal{L}_{S}\), \(\mathcal{L}_{N}\) in (5) is employed to alleviate the interference of randomness. Thus, letting \(\alpha\) be a hyper-parameter, the model is trained by minimizing \(\mathcal{L}\):
\[\mathcal{L}=\mathcal{L}_{N}+\alpha\mathcal{L}_{S}. \tag{9}\]
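A PyTorch-style sketch of one optimization step is given below; it reuses differentiable analogues of the `npsi` and `dbi` sketches above applied to the soft indicator \(Y\), and the module and variable names are ours:

```
import torch

def user_step(X, A_obs, A_param, gnn, W_Y, alpha, beta, opt):
    A_prime = torch.relu((A_param + A_param.T) / 2)  # keep A' >= 0 and symmetric
    H = gnn(A_prime, X)                              # Eq. (6): node embeddings
    Y = torch.softmax(H @ W_Y, dim=1)                # Eq. (7): soft group indicator
    L_N = npsi(A_prime, Y) + beta * dbi(X, Y)        # Eq. (5)
    A_hat = torch.sigmoid(H @ H.T)                   # GAE decoder
    L_S = ((A_hat - A_obs) ** 2).sum()               # Eq. (8): squared Frobenius norm
    loss = L_N + alpha * L_S                         # Eq. (9)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```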
Although unsupervised, with the structural entropy-based \(\mathcal{L}_{N}\), this framework mitigates randomness interference, making the supported model more capable. We call it the Unsupervised Structural Entropy-based Robust Graph Neural Network (USER). The detailed structure is shown in Figure 2.
## 6 Experiments
In this section, we provide experiments comparing the performance of USER-supported GAE (denoted by USER) with other state-of-the-art baseline methods, along with a case study, an ablation study, and a parameter analysis.
\begin{table}
\begin{tabular}{l c c c c} \hline Dataset & \# Nodes & \# Edges & \# Features & \# Classes \\ \hline Cora & 2,708 & 5,429 & 1,433 & 7 \\ Citeseer & 3,327 & 4,732 & 3,703 & 6 \\ Wiki & 2,405 & 17,981 & 4,973 & 17 \\ \hline \end{tabular}
\end{table}
Table 1: Dataset statistics.
Figure 4: Parameter analysis on Citeseer and Wiki
Figure 3: Case study: the graph heat maps of Cora
### Experimental Settings
**Datasets**. We evaluate all models on three widely-used benchmark datasets: _Cora_, _Citeseer_, and _Wiki_ [12, 13]. _Cora_ and _Citeseer_ are citation networks where nodes represent publications and edges stand for citation links; their node features are bag-of-words vectors. _Wiki_ is a webpage network in which nodes are web pages and edges represent hyperlinks; its node features are tf-idf weighted word vectors. The statistics of these datasets are given in Table 1.
**Noises**. Besides the original graphs in the datasets, we inject noises into the graph to promote graph randomness. In particular, we consider two types of noises: _random noise_ and _meta-attack noise_ [22]. Random noise randomly flips the state of a chosen pair of nodes (i.e., if there is an edge between them, we remove it; otherwise we add an edge between them). The number of changed edges is specified as a ratio of the total number of edges in the original graph. Since random noises at low levels are not very effective, we create several poisoned graphs with noise ratios from \(0\%\) to \(50\%\) in steps of \(10\%\). Meta-attack noise can promote graph randomness significantly [22]. Even for supervised models, meta-attack is rarely applied with a perturbation rate higher than \(20\%\) [13]. Thus, we create several poisoned graphs with meta-attack noise ratios from \(0\%\) to \(20\%\) in steps of \(5\%\).
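A sketch of the random-noise injection described above (our transcription; it flips the state of randomly chosen node pairs until the requested ratio of the original edge count is reached):

```
import numpy as np

def inject_random_noise(A, ratio, seed=0):
    # A: (n, n) symmetric 0/1 adjacency; ratio: fraction of |E| pairs to flip
    rng = np.random.default_rng(seed)
    A = A.copy()
    n = A.shape[0]
    n_flips = int(ratio * (A.sum() / 2))  # A.sum() / 2 = |E| for symmetric 0/1 A
    for _ in range(n_flips):
        i, j = rng.choice(n, size=2, replace=False)
        A[i, j] = A[j, i] = 1 - A[i, j]  # add the edge if absent, else remove it
    return A
```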
**Baselines**. For **USER**, we use classical GAE [12] as its supported model. To evaluate the effectiveness, we compare it with \(10\) baselines retaining the default parameter settings in their original papers. [1] utilizes random walks
\begin{table}
\begin{tabular}{l|c c c c} \hline \hline Dataset & USER & w.o. NPSI & w.o. DBI & Fix \(A^{\prime}\) \\ \hline cora & 54.38 & 14.82 & 52.54 & 40.11 \\ citeseer & 37.04 & 28.95 & 12.82 & 30.94 \\ wiki & 49.7 & 48.44 & 39.77 \\ \hline \hline \end{tabular}
\end{table}
Table 6: NMI of USER’s variants with \(10\%\) random-noise
\begin{table}
\begin{tabular}{l|c c c c c c c c c c c c c} \hline \hline Dataset & PU Rate (\%) & \multicolumn{1}{c}{degewalk} & GAE & VGAE & ARGA & GAE & DGI & GC & GAGE & GAGE\_CG & ARGA & USER \\ \hline \multirow{9}{*}{com} & 0 & 36.9\(\pm\)2.1 & 41.3\(\pm\)3.4 & 43.06 \(\pm\)2.6 & 34.3\(\pm\)2.8 & 40.52 & 36.1\(\pm\)1.3 & 38.2\(\pm\)1.0 & 52.16 \(\pm\)0.94 & 38.2\(\pm\)4.36 & 43.4\(\pm\)3.44 & 34.0\(\pm\)3.15 & **50.041 \(\pm\)2.77** \\ & 10 & 37.6\(\pm\)2.8 & 34.1\(\pm\)4.4 & 33.66 \(\pm\)3.6 & 34.3\(\pm\)1.7 & 39.5\(\pm\)1.14 & 37.7\(\pm\)3.63 & 36.5\(\pm\)1.11 & 34.07 \(\pm\)2.37 & 35.7\(\pm\)2.37 & 35.7\(\pm\)3.29 & 35.9\(\pm\)3.51 & **47.17 \(\pm\)3.22** \\ & 15 & 29.9\(\pm\)4.8 & 18.99 & 44.11 & 35.62 \(\pm\)1.0 & 38.59 \(\pm\)1.31 & 33.39 & 23.19 \(\pm\)3.29 & 21.54 \(\pm\)4.61 & 22.5\(\pm\)3.69 & 36.9\(\pm\)3.33 & **29.77 \(\pm\)3.88** \\ & 20 & 73.1\(\pm\)5.8 & 72.56 \(\pm\)2.5 & 72.90 & 7.81 \(\pm\)2.90 & 56.85 \(\pm\)3.55 & 10.11 \(\pm\)2.76 & 10.91 \(\pm\)3.97 & 30.10 \(\pm\)1.34 & 10.3\(\pm\)2.96 & **18.82 \(\pm\)2.9** \\ \hline \multirow{9}{*}{citeseer} & 30 & 10.7\(\pm\)2.1 & 22.3\(\pm\)3.4 & 22.79 \(\pm\)2.66 & 20.55 \(\pm\)2.20 & 34.06 \(\pm\)1.29 & 40.22 \(\pm\)1.89 & 39.51 \(\pm\)1.30 & 38.79 \(\pm\)3.51 & 22.67 \(\pm\)2.54 & 21.0\(\pm\)1.07 & 37.28 \(\pm\)2.04 \\ & 10 & 12.5\(\pm\)1.09 & 22.3\(\pm\)1.2 & 22.62 \(\pm\)1.62 & 21.56 \(\pm\)2.51 & 22.51 \(\pm\)2.71 & 22.51 \(\pm\)2.71 & 37.21 \(\pm\)2.62 & 13.89 \(\pm\)1.21 & 15.62 \(\pm\)2.90 & **15.60 \(\pm\)2.51** & **22.77 \(\pm\)3.31** \\ & 15 & 17.3\(\pm\)2.7 & 17.3\(\pm\)1.8 & 13.66 \(\pm\)1.95 & 13.94 \(\pm\)1.25 & 15.71 \(\pm\)2.68 & 17.68 \(\pm\)2.77 & 17.81 \(\pm\)2.68 & 12.61 \(\pm\)1.99 & 15.65 \(\pm\)2.35 & 15.61 \(\pm\)2.15 & **22.77 \(\pm\)3.31** \\ & 20 & 8.3\(\pm\)2.45 & 16.81 \(\pm\)2.01 & 37.11 \(\pm\)1.82 & 36.53 \(\pm\)1.74 & 39.11 \(\pm\)1.85 & 39.08 \(\pm\)2.17 & 7.08 \(\pm\)2.20 & 7.61 \(\pm\)2.12 & **22.42 \(\pm\)2.67** \\ & 30 & 36.7\(\pm\)1.74 & 15.00\(\pm\)1.79 & 12.92 \(\pm\)1.89 & 41.31 \(\pm\)1.85 & 39.21 \(\pm\)2.61 & 33.38 \(\pm\)1.39 & 15.82 \(\pm\)2.77 & 17.25 \(\pm\)3.05 & **36.44 \(\pm\)1.71** \\ & 10 & 29.6\(\pm\)2.74 & 17.69 \(\pm\)2.62 & 11.41 \(\pm\)3.52 & 12.48 \(\pm\)4.46 & 38.22 \(\pm\)2.02 & 22.31 \(\pm\)3.84 & 22.86 \(\pm\)2.65 & 25.86 \(\pm\)1.81 & 13.34 \(\pm\)6.03 & **10.84 \(\pm\)4.41** \(\pm\)**1.71** \\ & 15 & 14.3\(\pm\)1.8 & 1.09\(\pm\)2.99 & 4.31 \(\pm\)3.81 & 6.82 \(\pm\)3.45 & 40.99 & 12.72 \(\pm\)2.43 & 15.19 \(\pm\)3.93 & 20.14 \(\pm\)3.66 & 2.62 \(\pm\)2.81 & 1.70 \(\pm\)3.97 & **47.54 \(\pm\)1.53** \\ & 20 & 9.3\(\pm\)1.2 & 2.29\(\pm\)3.4 & 1.52 \(\pm\)3.09 & 4.51 \(\pm\)1.85 & 42.71 \(\pm\)0.98 & 12.72 \(\pm\)2.33 & 15.39 \(\pm\)3.97 & 3.11 \(\pm\)3.93 & 2.40 \(\pm\)2.21 & 7.04 \(\pm\)**4.15** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Link prediction (AUC±Std) under random-noises
\begin{table}
\begin{tabular}{l|c c c c c c c c c c c c c} \hline \hline Dataset & PU Rate (\%) & degewalk & GAE & VGAE & ARGA & GAE & DGI & GC & GAGE\_CG & ARGA & O & USER \\ \hline \multirow{9}{*}{com} & 0 & 36.9\(\pm\)2.1 & 41.3\(\pm\)3.4 & 43.06 \(\pm\)2.6 & 43.4\(\pm\)2.5 & 41.2\(\pm\)3.4 & 26.4\(\pm\)3.1 & **37.2\(\pm\)1.0** & 52.16\(\pm\)0.94 & 38.2\(\pm\)3.4 & 43.4\(\pm\)2.69 & 43.18 \(\pm\)4.41 & 36.24 \(\pm\)1.88 \\ & 10 & 35.2\(\pm\)2.3 & 43.2\(\pm\)2
to learn embeddings. **GAE** and **VGAE** (Kipf and Welling 2016) first leverage GCN (Kipf and Welling 2017) for GRL. **ARGA** (Pan et al. 2018) is an adversarial GNN model. **AGE** (Cui et al. 2020) applies Laplacian smoothing to GNN. **DGI** (Velickovic et al. 2019) trains GNN with MI. **GIC** (Mavromatis and Karypis 2020) captures cluster-level information. **GCA** (Zhu et al. 2021) is a graph contrastive learning GNN. **GAE_CG** and **ARGA_CG** are Cross-Graph (Wang et al. 2020) models: GAE_CG is the GAE version, while ARGA_CG maintains ARGA encoders. Please note that GCA and Cross-Graph are also unsupervised robust models; however, we are the first to introduce the innocuous graph, which makes USER more effective.
**Parameter Settings** We train USER for \(400\) epochs using the Adam optimizer with a learning rate \(\eta\). The two hyperparameters, \(\alpha\) and \(\beta\), are selected through a grid search with respect to performance; a detailed analysis can be found in Subsection 6.5. The dimension \(d^{(1)}\), learning rate \(\eta\), \(\alpha\), and \(\beta\) are selected accordingly based on the parameter analysis.
**Evaluation Metrics** For node clustering, we employ the popular normalized mutual information (NMI) and clustering accuracy (ACC) (Aggarwal and Reddy 2014). For link prediction, we report the area under the ROC curve (AUC) (Bradley 1997) and average precision (AP) (Su et al. 2015).
### Performance
**Clustering.** We compare the performance of all models in Table 2 and Table 3. All experiments are conducted \(10\) times and the average NMI with standard deviation is reported. For each dataset, the best performance is in bold. From Table 2 and Table 3, we observe the following: **(1) Original graph.** When the input graph is the original graph, USER's improvement over GAE is significant. Different from classical GAE, the performance of USER is always close to the best. **(2) Random noises.** When graph randomness is promoted by random noises, USER outperforms the others (including GCA and Cross-Graph). Even under a large noise rate, e.g., \(50\%\), the performance of USER drops by only \(12\%\), \(3\%\), and \(0.6\%\) on Cora, Citeseer, and Wiki, respectively, compared with the original graphs. **(3) Meta-attack.** Meta-attack is more powerful, making the performance of most models drop rapidly. However, USER is still more effective than the others.
**Link prediction.** To compare performance on link prediction tasks, we follow the settings in Kipf and Welling (2016): we take out \(5\%\) of edges from the Citeseer and Wiki datasets to form the validation set and \(10\%\) of edges for the test set, respectively. Then we impose random noises on the rest of the network. Classical GRL models such as GAE and ARGA and the corresponding Cross-Graph supported versions are used as baselines. All experiments are run \(10\) times and we report the AUC and AP with standard deviation in Table 4 and Table 5. The best performance is in bold. From the results, we observe that USER also outperforms the other models on link prediction. Classical models are rather unstable under promoted randomness; even the robust Cross-Graph model's performance drops drastically under large-ratio noises (e.g., ARGA_CG dropped \(3.617\%\) and \(10.400\%\) on Citeseer and Wiki when the noise rate is \(50\%\)). USER demonstrates stability w.r.t. different noise levels (only \(0.881\%\) and \(2.085\%\) drops with \(50\%\) noise). This verifies that USER can accomplish different tasks in the face of graph randomness.
### Case Study
To show the graph learned by USER, we illustrate the normalized adjacency matrix of the Cora dataset, without noise and with rearranged vertices, in Figure 3(a). It is clearly observable that most edges fall into one of seven groups with few edges between them. On the other hand, the adjacency matrix of Cora with \(50\%\)-ratio random noises (as shown in Figure 3(b)) has more inter-group edges, and the boundaries between classes become visibly blurred. The graph structure learned by USER is shown in Figure 3(c), where we observe that the group boundaries are much clearer. This demonstrates that USER can capture an ideal innocuous graph.
### Ablation Study
To understand the importance of the different components of our model in denoising, we conduct ablation studies on the Cora, Citeseer, and Wiki datasets with \(10\%\) random noise. **NPSI:** From Table 6, USER without the NPSI component loses its effectiveness on all three datasets. **DBI:** The performance of USER after removing DBI drops slightly on Cora but is significantly affected on Wiki and Citeseer, implying that for these two datasets feature information is more important. **Learnable \(A^{\prime}\):** If we fix \(A^{\prime}\) to be the same as the original input, the model tends to be disturbed by graph randomness; the experimental results on all datasets show this effect. By incorporating all these components, USER can search for an innocuous graph and thus consistently outperforms the baselines.
### Parameter Analysis
We illustrate the mechanism of USER and explore the sensitivity of the two hyper-parameters. \(\alpha\) controls the influence of the objective function of the supported model and \(\beta\) adjusts the influence of Assumption 3.1. We vary \(\alpha\) from \(0.003125\) to \(0.1\) and from \(0.00625\) to \(0.2\), and \(\beta\) from \(0.025\) to \(0.8\) and from \(1.0\) to \(32.0\), on a \(\log\) scale of base \(2\), respectively. We report the experimental results on Wiki with \(10\%\) random noise in Figure 4, as similar observations are made in other settings. As we can see, USER's performance can be boosted by choosing appropriate values for all the hyper-parameters, while performance under values that are too large or too small drops slightly. This is consistent with our analysis.
## 7 Conclusion
We aim to alleviate the interference of graph randomness and learn appropriate node representations without label information. We propose USER, a novel unsupervised robust framework. In designing it, we discovered that there are multiple innocuous graphs with which a GNN can learn the appropriate embeddings, and showed that the rank of the adjacency matrix plays a crucial role in discovering such graphs. We also introduced structural entropy as a tool to construct an objective function for capturing an innocuous graph. In the future, we will further explore the intrinsic connectivity of graph data.
## Acknowledgements
This research was supported by NSFC (Grant No. 61932002) and Marsden Fund (21-UOA-219). The first author and third author are supported by a PhD scholarship from China Scholarship Council.
|
2307.07840 | RegExplainer: Generating Explanations for Graph Neural Networks in
Regression Task | Graph regression is a fundamental task and has received increasing attention
in a wide range of graph learning tasks. However, the inference process is
often not interpretable. Most existing explanation techniques are limited to
understanding GNN behaviors in classification tasks. In this work, we seek an
explanation to interpret the graph regression models (XAIG-R). We show that
existing methods overlook the distribution shifting and continuously ordered
decision boundary, which hinders them away from being applied in the regression
tasks. To address these challenges, we propose a novel objective based on the
information bottleneck theory and introduce a new mix-up framework, which could
support various GNNs in a model-agnostic manner. We further present a
contrastive learning strategy to tackle the continuously ordered labels in
regression task. To empirically verify the effectiveness of the proposed
method, we introduce three benchmark datasets and a real-life dataset for
evaluation. Extensive experiments show the effectiveness of the proposed method
in interpreting GNN models in regression tasks. | Jiaxing Zhang, Zhuomin Chen, Hao Mei, Dongsheng Luo, Hua Wei | 2023-07-15T16:16:22Z | http://arxiv.org/abs/2307.07840v2 | # RegExplainer: Generating Explanations for Graph Neural Networks in Regression Task
###### Abstract.
Graph regression is a fundamental task that has gained significant attention in various graph learning tasks. However, the inference process is often not easily interpretable. Current explanation techniques are limited to understanding GNN behaviors in classification tasks, leaving an explanation gap for graph regression models. In this work, we propose a novel explanation method to interpret the graph regression models (XAIG-R). Our method addresses the distribution shifting problem and continuously ordered decision boundary issues that hinder existing methods from being applied to regression tasks. We introduce a novel objective based on the information bottleneck theory and a new mix-up framework, which can support various GNNs in a model-agnostic manner. Additionally, we present a contrastive learning strategy to tackle the continuously ordered labels in regression tasks. We evaluate our proposed method on three benchmark datasets and a real-life dataset introduced by us, and extensive experiments demonstrate its effectiveness in interpreting GNN models in regression tasks.
graph neural network, explainability, data augmentation
exist widely in today's applications, such as predicting molecular properties (Beng et al., 2017) or traffic flow volume (Kang et al., 2018). Therefore, it is crucial to provide high-quality explanations for the graph regression task.
Explaining the instance-level results of graph regression in a post-hoc manner is challenging due to two main obstacles: (1) the GIB objective in previous work is not applicable to the regression task; (2) the distribution shifting problem (Han et al., 2017; Wang et al., 2018; Wang et al., 2019; Wang et al., 2019).
\(\bullet\) One challenge is the mutual information estimation in the GIB objective. In previous works (Kang et al., 2018; Wang et al., 2019; Wang et al., 2019) for graph classification, the mutual information \(I(G^{*};Y)\) is estimated with the cross-entropy between the predictions \(f(G^{*})\) from the GNN model \(f\) and the prediction label \(Y\) of the original graph \(G\). However, in the regression task, the label is a continuous value instead of a categorical class, making it difficult to estimate with the cross-entropy loss. Therefore, we adapt the GIB objective to employ the InfoNCE objective and contrastive loss to address this challenge and estimate the mutual information.
\(\bullet\) Another challenge is the distribution shifting problem, which means the explanation sub-graphs are out-of-distribution (OOD) with respect to the original training graphs. As shown in Figure 1, a GNN model is trained on the original graph training set for a graph regression task. Previous works ideally assume that the explanation sub-graph contains the same mutual information as the original graph. However, as seen in the figure, even when \(G\) and \(G^{*}\) both contain two motifs carrying the label information, \(f(G^{*})\) differs from \(f(G)\) due to a different distribution. Since the explanation sub-graph usually has different topology and feature information compared to the original graph, a GNN model trained on the original graph set cannot accurately predict on the explanation sub-graph, which means we cannot directly estimate the mutual information between the original graph and the explanation sub-graph due to the distribution shifting problem.
In this paper, for the first time, we propose RegExplainer to generate post-hoc instance-level explanations for graph regression tasks. Specifically, to address the distribution shifting issue, RegExplainer develops a mix-up approach that embeds the explanation sub-graph into a mix-up graph without altering the label-preserving information. To capture the continuous targets in the regression task, RegExplainer also adapts the GIB objective to utilize a contrastive loss that learns the relationships within triplets \([G,G^{+},G^{-}]\) of graphs, where \(G\) is the target to-be-explained graph and \(G^{+}\) and \(G^{-}\) are positive and negative instances, respectively. Our experiments show that RegExplainer provides consistent and concise explanations of GNN predictions on regression tasks. We achieved up to an \(86.3\%\) improvement compared to the alternative baselines in our experiments. Our contributions can be summarized as follows:
* To the best of our knowledge, we are the first to explain graph regression tasks. We address two challenges associated with the explainability of the graph regression task: the mutual information estimation in the GIB objective and the distribution shifting problem.
* We propose a novel model with a new mix-up approach and contrastive learning, which more effectively addresses the two challenges and better explains graph models on regression tasks compared to other baselines.
* We designed three synthetic datasets, namely BA-regression, BA-counting, and Triangles, as well as a real-world dataset called Crippen, which can also be used in future works, to evaluate the effectiveness of regression-task explanations. Comprehensive empirical studies on both synthetic and real-world datasets demonstrate that our method provides consistent and concise explanations for graph regression tasks.
## 2. Preliminary
### Notation and Problem Formulation
We use \(G=(\mathcal{V},\mathcal{E};\mathbf{X},\mathbf{A})\) to represent a graph, where \(\mathcal{V}=\{v_{1},v_{2},...,v_{n}\}\) represents a set of \(n\) nodes and \(\mathcal{E}\subseteq\mathcal{V}\times\mathcal{V}\) represents the edge set. Each graph has a feature matrix \(\mathbf{X}\in\mathbb{R}^{n\times d}\) for the nodes, in which \(X_{i}\in\mathbb{R}^{1\times d}\) is the \(d\)-dimensional node feature of node \(v_{i}\). \(\mathcal{E}\) is described by an adjacency matrix \(\mathbf{A}\in\{0,1\}^{n\times n}\), where \(A_{ij}=1\) means that there is an edge between nodes \(v_{i}\) and \(v_{j}\); otherwise, \(A_{ij}=0\). For the graph prediction task, each graph \(G_{i}\) has a label \(Y_{i}\in\mathcal{C}\), where \(\mathcal{C}\) is the set of classification categories or regression values, and a GNN model \(f\) is trained to make the prediction, i.e., \(f:(\mathbf{X},\mathbf{A})\mapsto\mathcal{C}\).
Problem 1 (Post-hoc Instance-level GNN Explanation).: _Given a trained GNN model \(f\), for an arbitrary input graph \(G=(\mathcal{V},\mathcal{E};\mathbf{X},\mathbf{A})\), the goal of post-hoc instance-level GNN explanation is to find a sub-graph \(G^{*}\) that can explain the prediction of \(f\) on \(G\)._
In non-graph structured data, informative feature selection has been well studied (Kang et al., 2018), as have traditional methods such as the concrete auto-encoder (Chen et al., 2018), which can be directly extended to explain features in GNNs. In this paper, we focus on discovering the important sub-graph topologies, following previous work (Kang et al., 2018; Wang et al., 2019). Formally, the obtained explanation \(G^{*}\) is depicted by a binary mask \(\mathbf{M}^{*}\in\{0,1\}^{n\times n}\) on the adjacency matrix, e.g., \(G^{*}=(\mathcal{V},\mathcal{E};\mathbf{X},\mathbf{A}\odot\mathbf{M}^{*})\), where \(\odot\) means element-wise multiplication.
Figure 1. Intuitive illustration of the distribution shifting problem. The 3-dimensional map represents a trained GNN model \(f\), where \((h_{1},h_{2})\) represents the distribution of the graph in two dimensions, and \(Y\) represents the prediction value of the graph through \(f\). The red and blue lines represent the distribution of the original training set and corresponding explanation sub-graph set respectively.
The mask highlights components of \(G\) which are essential for \(f\) to make the prediction.
### GIB Objective
The Information Bottleneck (IB) (Srivastava et al., 2017; Wang et al., 2018) provides an intuitive principle for learning dense representations: an optimal representation should contain _minimal_ and _sufficient_ information for the downstream prediction task. Based on IB, a recent work unifies most existing post-hoc explanation methods for GNNs, such as GNNExplainer (Srivastava et al., 2017) and PGExplainer (Peters et al., 2019), under the graph information bottleneck (GIB) principle (Wang et al., 2018; Wang et al., 2018; Wang et al., 2018). Formally, the objective of explaining the prediction of \(f\) on \(G\) can be represented by
\[\operatorname*{arg\,min}_{G^{*}}I(G;G^{*})-\alpha I(G^{*};Y), \tag{1}\]
where \(G\) is the to-be-explained original graph, \(G^{*}\) is the explanation sub-graph of \(G\), \(Y\) is the original ground-truth label of \(G\), and \(\alpha\) is a hyper-parameter balancing the trade-off between the minimal and sufficient constraints. GIB minimizes the mutual information \(I(G;G^{*})\) to select a minimal explanation that inherits only the most indicative information from \(G\), while maximizing \(I(G^{*};Y)\) ensures this information suffices to predict the label \(Y\); minimizing \(I(G;G^{*})\) also avoids imposing potentially biased constraints, such as the size or the connectivity of the selected sub-graphs (Wang et al., 2018). Through this optimization, the sub-graph \(G^{*}\) provides the model interpretation.
In the graph classification task, a widely-adopted approximation to Eq. (1) in previous methods is:
\[\operatorname*{arg\,min}_{G^{*}}I(G;G^{*})+\alpha H(Y|G^{*})\approx\operatorname*{arg\,min}_{G^{*}}I(G;G^{*})+\alpha\mathrm{CE}(Y,Y^{*}), \tag{2}\]
where \(Y^{*}=f(G^{*})\) is the predicted label of \(G^{*}\) made by the to-be-explained model \(f\), and the cross-entropy \(\mathrm{CE}(Y,Y^{*})\) between the ground-truth label \(Y\) and \(Y^{*}\) is used to approximate \(H(Y|G^{*})\). The approximation is based on the definition of mutual information, \(I(G^{*};Y)=H(Y)-H(Y|G^{*})\): with the entropy \(H(Y)\) being static and independent of the explanation process, maximizing the mutual information between the explanation sub-graph \(G^{*}\) and \(Y\) can be reformulated as minimizing the conditional entropy of \(Y\) given \(G^{*}\), which can be approximated by the cross-entropy between \(Y\) and \(Y^{*}\).
## 3. Methodology
In this section, we first introduce a new objective based on GIB for explaining graph regression learning. Then we showcase the distribution shifting problem in the GIB objective in graph regression tasks and propose a novel framework through a mix-up approach to solve the shifting problem.
### GIB for Explaining Graph Regression
As introduced in Section 2.2, in the classification task, \(I(G^{*};Y)\) in Eq. (1) is commonly approximated by the cross-entropy \(\mathrm{CE}(Y^{*},Y)\) (Srivastava et al., 2017). However, it is non-trivial to extend existing objectives to regression tasks because \(Y\) is a continuous variable, which makes it intractable to compute the cross-entropy \(\mathrm{CE}(Y^{*},Y)\) or the mutual information \(I(G^{*};Y)\), where \(G^{*}\) is a graph variable with a continuous variable \(Y^{*}\) as its label.
#### 3.1.1. Optimizing the lower bound of \(I(G^{*};Y)\)
To address the challenge of computing the mutual information \(I(G^{*};Y)\) with a continuous \(Y\), we propose a novel objective for explaining graph regression. Instead of maximizing \(I(G^{*};Y)\) directly, we propose to maximize a lower bound of the mutual information by including the label of \(G^{*}\), denoted by \(Y^{*}\), and approximating \(I(G^{*};Y)\) with \(I(Y^{*};Y)\), where \(Y^{*}\) is the prediction label of \(G^{*}\):
\[\operatorname*{arg\,min}_{G^{*}}I(G;G^{*})-\alpha I(Y^{*};Y). \tag{3}\]
As shown below, \(I(Y^{*};Y)\) has the following property:
**Property 1**.: \(I(Y^{*};Y)\) _is a lower bound of \(I(G^{*};Y)\)._
Proof.: From the definition of \(Y^{*}\), we can make a safe assumption that there is a many-to-one map (function), denoted by \(h\), from \(G^{*}\) to \(Y^{*}\) as \(Y^{*}\) is the prediction label for \(G^{*}\). For simplicity, we assume a finite number of explanation instances for each label \(y^{*}\), and each explanation instance, denoted by \(g^{*}\), is generated independently. Then, we have \(p(y^{*})=\sum_{g^{*}\in\mathrm{G}(y^{*})}p(g^{*})\), where \(\mathrm{G}(y^{*})=\{g|h(g)=y^{*}\}\) is the set of explanations whose labels are \(y^{*}\).
Based on the definition of mutual information, we have:

\[I(G^{*};Y)=\int_{y}\int_{g^{*}}p_{(G^{*},Y)}(g^{*},y)\log\frac{p_{(G^{*},Y)}(g^{*},y)}{p_{G^{*}}(g^{*})p_{Y}(y)}d_{g^{*}}d_{y}\]

Since \(Y^{*}=h(G^{*})\) is a deterministic function of \(G^{*}\), we can rewrite:

\[I(G^{*};Y) =\int_{y}\int_{g^{*}}p_{(G^{*},Y^{*},Y)}(g^{*},h(g^{*}),y)\log\frac{p_{(G^{*},Y^{*},Y)}(g^{*},h(g^{*}),y)}{p_{G^{*}}(g^{*})p_{Y}(y)}d_{g^{*}}d_{y}\] \[=\int_{y}\int_{y^{*}}\sum_{g^{*}\in\mathrm{G}(y^{*})}p_{(G^{*},Y^{*},Y)}(g^{*},y^{*},y)\log\frac{p_{(G^{*},Y^{*},Y)}(g^{*},y^{*},y)}{p_{(G^{*},Y^{*})}(g^{*},y^{*})p_{Y}(y)}d_{y^{*}}d_{y}\]

Based on our many-to-one assumption, while each \(g^{*}\) is generated independently, we know that if \(g^{*}\notin\mathrm{G}(y^{*})\), then \(p_{(G^{*},Y^{*},Y)}(g^{*},y^{*},y)=0\). Thus, we have:

\[I(G^{*};Y) =I(G^{*};Y)\] \[+\int_{y}\int_{y^{*}}\sum_{g^{*}\notin\mathrm{G}(y^{*})}p_{(G^{*},Y^{*},Y)}(g^{*},y^{*},y)\log\frac{p_{(G^{*},Y^{*},Y)}(g^{*},y^{*},y)}{p_{(G^{*},Y^{*})}(g^{*},y^{*})p_{Y}(y)}d_{y^{*}}d_{y}\] \[=\int_{y}\int_{y^{*}}\int_{g^{*}}p_{(G^{*},Y^{*},Y)}(g^{*},y^{*},y)\log\frac{p_{(G^{*},Y^{*},Y)}(g^{*},y^{*},y)}{p_{(G^{*},Y^{*})}(g^{*},y^{*})p_{Y}(y)}d_{g^{*}}d_{y^{*}}d_{y}\] \[=I(G^{*},Y^{*};Y).\]
With the chain rule for mutual information, we have \(I(G^{*},Y^{*};Y)=I(Y^{*};Y)+I(G^{*};Y|Y^{*})\). Then, due to the non-negativity of mutual information, we have \(I(G^{*},Y^{*};Y)\geq I(Y^{*};Y)\), which completes the proof.
Figure 2. Intuitive illustration of why \(I(G^{*};Y)\geq I(Y^{*};Y)\): \(G^{*}\) contains more mutual information with \(Y\), shown as a larger overlapping area with \(Y\) than that between \(Y^{*}\) and \(Y\).
Intuitively, the property of \(I(Y^{*};Y)\) is guaranteed by the chain rule for mutual information and the independence between each explanation instance \(g^{*}\). The intuitive demonstration is shown in Figure 2. With the property, we can approximate Eq. (1) with Eq. (3).
#### 3.1.2. Estimating \(I(Y^{*};Y)\) with InfoNCE
Now the challenge becomes the estimation of the mutual information \(I(Y^{*};Y)\). Inspired by Contrastive Predictive Coding (Yang et al., 2017), in which the InfoNCE loss is interpreted as a mutual information estimator, we extend the method so that the InfoNCE loss can be applied to explaining graph regression.
In our graph explanation scenario, \(I(Y^{*};Y)\) has the following property:
**Property 2**.: InfoNCE Loss is a lower bound of the \(I(Y^{*};Y)\) as shown in Eq. (4).
\[I(Y^{*};Y)\geq\operatorname*{\mathbb{E}}_{\mathbb{Y}}\left[\log\frac{f_{k}(Y^{*},Y)}{\frac{1}{|\mathbb{Y}|}\sum_{Y_{j}\in\mathbb{Y}}f_{k}(Y^{*},Y_{j})}\right] \tag{4}\]
In Eq. (4), the \(Y_{j}\) are the labels of randomly sampled graph neighbors and \(\mathbb{Y}\) is the set of these neighbors. We prove this property as follows:
Proof.: As in the InfoNCE method, the mutual information between \(Y^{*}\) and \(Y\) is defined as:
\[I(Y^{*};Y)=\sum_{Y^{*},Y}p(Y^{*},Y)\log\frac{p(Y|Y^{*})}{p(Y)} \tag{5}\]
However, the ground-truth joint distribution \(p(Y^{*},Y)\) is intractable, so we instead model the density ratio
\[f_{k}\;(Y^{*},Y)\propto\frac{p(Y|Y^{*})}{p(Y)} \tag{6}\]
We then insert this density ratio into the NCE loss,
\[\mathcal{L}_{N}=-\operatorname*{\mathbb{E}}_{\mathbb{Y}}\log\left[\frac{f_{k}(Y^{*},Y)}{\sum_{Y^{\prime}\in\mathbb{Y}}f_{k}(Y^{*},Y^{\prime})}\right], \tag{7}\]
where \(\mathcal{L}_{N}\) denotes the NCE loss and by inserting the optimal \(f_{k}\;(Y^{*},Y)\) into Eq. (7), we could get:
\[\mathcal{L}_{\text{contr}} =-\operatorname*{\mathbb{E}}_{\mathbb{Y}}\log\left[\frac{\frac{p(Y|Y^{*})}{p(Y)}}{\frac{p(Y|Y^{*})}{p(Y)}+\sum_{Y^{\prime}\in\mathbb{Y}_{\text{neg}}}\frac{p(Y^{\prime}|Y^{*})}{p(Y^{\prime})}}\right]\] \[=\operatorname*{\mathbb{E}}_{\mathbb{Y}}\log\left[1+\frac{p(Y)}{p(Y|Y^{*})}\sum_{Y^{\prime}\in\mathbb{Y}_{\text{neg}}}\frac{p(Y^{\prime}|Y^{*})}{p(Y^{\prime})}\right]\] \[\approx\operatorname*{\mathbb{E}}_{\mathbb{Y}}\log\left[1+\frac{p(Y)}{p(Y|Y^{*})}(N-1)\operatorname*{\mathbb{E}}_{Y^{\prime}}\frac{p(Y^{\prime}|Y^{*})}{p(Y^{\prime})}\right]\] \[=\operatorname*{\mathbb{E}}_{\mathbb{Y}}\log\left[1+\frac{p(Y)}{p(Y|Y^{*})}(N-1)\right]\] \[\geq\operatorname*{\mathbb{E}}_{\mathbb{Y}}\log\left[\frac{p(Y)}{p(Y|Y^{*})}N\right]\] \[=-I(Y^{*};Y)+\log(N) \tag{8}\]
To employ the contrastive loss, we use representation embeddings to approximate the labels, where \(\mathbf{h}^{*}\) represents the embedding of \(G^{*}\) and \(\mathbf{h}\) represents the embedding of \(G\). We use \(\mathbb{H}\) to represent the neighbor set \(\mathbb{Y}\) accordingly. Thus, we approximate Eq. (3) as:
\[\operatorname*{arg\,min}_{G^{*}}I(G;G^{*})-\alpha\operatorname*{\mathbb{E}}_{\mathbb{H}}\left[\log\frac{f_{k}(\mathbf{h}^{*},\mathbf{h})}{\frac{1}{|\mathbb{H}|}\sum_{\mathbf{h}_{j}\in\mathbb{H}}f_{k}(\mathbf{h}^{*},\mathbf{h}_{j})}\right] \tag{9}\]
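A PyTorch sketch of the InfoNCE term in Eq. (9), with the density ratio \(f_{k}\) instantiated as the exponential of a dot product as in the implementation section below (the function and variable names are ours):

```
import torch

def infonce_term(h_star, h, h_neighbors):
    # h_star: (d,) embedding of G*; h: (d,) embedding of G;
    # h_neighbors: (N, d) embeddings of the sampled neighbor graphs
    pos = torch.exp(h_star @ h)                      # f_k(h*, h)
    denom = torch.exp(h_neighbors @ h_star).mean()   # (1/|H|) * sum_j f_k(h*, h_j)
    return torch.log(pos / denom)                    # lower-bound estimate of I(Y*; Y)
```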
### Mixup for Distribution Shifts
In the above section, we include the label of explanation sub-graph, \(Y^{*}\), in our GIB objective for explaining regression. However, we argue that \(Y^{*}\) cannot be safely obtained due to the distribution shift problem (Garon et al., 2017; Garon et al., 2017).
#### 3.2.1. Distribution Shifts in GIB
Suppose that the model to be explained, \(f\), is trained on a dataset \(\{(G_{i},Y_{i})\}_{i=1}^{N}\). Usually in supervised learning (without domain adaptation), we suppose that the examples \((G_{i},Y_{i})\) are drawn i.i.d. from a distribution \(\mathcal{D}_{\text{train}}\) with support \(G\times Y\) (unknown and fixed). The objective is then to learn \(f\) such that it commits the least error possible when labeling new examples coming from the distribution \(\mathcal{D}_{\text{train}}\). However, as pointed out in previous studies, there is a shift between the distribution of explanation sub-graphs, denoted by \(\mathcal{D}_{\text{exp}}\), and \(\mathcal{D}_{\text{train}}\), as explanation sub-graphs tend to be small and dense. The distribution shift problem is severe in regression problems due to the continuous decision boundary (Yaron et al., 2017).
Figure 4 shows the existence of distribution shifts between \(f(G^{*})\) and \(Y\) in graph regression tasks. For each dataset, we sort the indices of the data samples according to the value of their labels, and visualize the label \(Y\), the prediction \(f(G)\) of the original graph from the trained GNN model \(f\), and the prediction \(f(G^{*})\) of the explanation sub-graph \(G^{*}\) from \(f\). As we can see in Figure 4, in all four graph regression datasets, the red points are well distributed around the ground-truth blue points, indicating that \(f(G)\) is close to \(Y\). In comparison, the green points shift away from the red points, indicating the shift between \(f(G^{*})\) and \(f(G)\). Intuitively, this phenomenon indicates that the GNN model \(f\) makes correct predictions only on the original graph \(G\) and cannot predict correctly on the explanation sub-graph \(G^{*}\). This is because the GNN model \(f\) is trained on the original graph set, whereas the explanation \(G^{*}\), being a sub-graph, is out of the distribution of the original graph set. With the shift between \(f(G)\) and \(f(G^{*})\), the optimal solution to Eq. (3) is unlikely to be the optimal solution to Eq. (1).
#### 3.2.2. Graph Mix-up Approach
To address the distribution shifting issue between \(f(G)\) and \(f(G^{*})\) in the GIB objective, we introduce a mix-up approach to reconstruct a within-distribution graph, \(G^{(\text{mix})}\), from the explanation graph \(G^{*}\). We follow (Yaron et al., 2017) in making the widely-accepted assumption that a graph can be divided as \(G=G^{*}+G^{\Delta}\), where \(G^{*}\) represents the underlying sub-graph that makes important contributions to the GNN's predictions, i.e., the expected explanatory graph, and \(G^{\Delta}\) consists of the remaining edges that are label-independent for the predictions made by the GNN. Both \(G^{*}\) and \(G^{\Delta}\) influence the distribution of \(G\). Therefore, we need a graph \(G^{(\text{mix})}\) that contains both \(G^{*}\) and a \(G^{\Delta}\), upon which we use the prediction of \(G^{(\text{mix})}\) made by \(f\) to approximate \(Y^{*}\) and \(\mathbf{h}^{*}\).
Specifically, for a target graph \(G_{a}\) in the original graph set to be explained, we generate the explanation sub-graph \(G_{a}^{*}=G_{a}-G_{a}^{\Delta}\) with the explainer. To generate a graph in the same distribution as the original \(G_{a}\), we randomly sample a graph \(G_{b}\) from the original set, generate the explanation sub-graph \(G_{b}^{*}\) with the same explainer, and retrieve its label-irrelevant graph \(G_{b}^{\Delta}=G_{b}-G_{b}^{*}\). Then we merge \(G_{a}^{*}\) with \(G_{b}^{\Delta}\) to produce the mix-up explanation \(G_{a}^{(\text{mix})}\). Formally, \(G_{a}^{(\text{mix})}=G_{a}^{*}+(G_{b}-G_{b}^{*})\).
Since we are using the edge weights mask to describe the explanation, the formulation could be further written as:
\[\mathbf{M}_{a}^{(\text{mix})}=\mathbf{M}_{a}^{*}+(\mathbf{I}_{b}-\mathbf{M}_{b}^{*}), \tag{10}\]
where \(\mathbf{M}\) denotes the edge weights on the adjacency matrix and \(\mathbf{I}_{b}\) denotes the 0-1 matrix of weights for all edges in the adjacency matrix of \(G_{b}\), where \(1\) represents an existing edge and \(0\) represents no edge between the node pair.
We denote \(G_{a}\) and \(G_{b}\) by the adjacency matrices \(A_{a}\) and \(A_{b}\) and their edge weight mask matrices by \(M_{a}\) and \(M_{b}\). If \(G_{a}\) and \(G_{b}\) are aligned graphs with the same number of nodes, we can simply mix them up with Eq. (10). However, in real-life applications, a well-aligned dataset is rare, so we use a connection adjacency matrix \(A_{\text{conn}}\) and a connection mask matrix \(M_{\text{conn}}\) to merge two graphs with different numbers of nodes. Specifically, the mix-up adjacency matrix is formed as:
\[\mathbf{A}_{a}^{(\text{mix})}=\left[\begin{array}{cc}\mathbf{A}_{a}&\mathbf{A}_{\text{ conn}}\\ \mathbf{A}_{\text{conn}}^{T}&\mathbf{A}_{b}\end{array}\right]. \tag{11}\]
And the mix-up mask matrix could be formed as:
\[\mathbf{M}_{a}^{(\text{mix})}=\left[\begin{array}{cc}\mathbf{M}_{a}^{*}&\mathbf{M}_{ \text{conn}}\\ \mathbf{M}_{\text{conn}}^{T}&\mathbf{M}_{b}^{\Delta}\end{array}\right] \tag{12}\]
Finally, we can form \(G_{a}^{(\text{mix})}\) as \((\mathbf{X}^{(\text{mix})},\mathbf{A}_{a}^{(\text{mix})}\odot\mathbf{M}_{a}^{(\text{mix})})\), where \(\mathbf{X}^{(\text{mix})}=[\mathbf{X}_{a};\mathbf{X}_{b}]\). Algorithm 1 describes the whole process. We can then feed \(G_{a}^{(\text{mix})}\) into the GIB objective and use it to train the parameterized explainer.
```
Input: Target to-be-explained graph \(G_{a}=(\mathbf{X}_{a},\mathbf{A}_{a})\), \(G_{b}\) sampled from a set of graphs \(\mathbb{G}\), the number of random connections \(\eta\), explainer model \(E\).
Output: Graph \(G^{(\text{mix})}\).
1: Generate mask matrix \(\mathbf{M}_{a}=E(G_{a})\)
2: Generate mask matrix \(\mathbf{M}_{b}=E(G_{b})\)
3: Sample \(\eta\) random connections between \(G_{a}\) and \(G_{b}\) as \(\mathbf{A}_{\text{conn}}\)
4: Mix up adjacency matrix \(\mathbf{A}_{a}^{(\text{mix})}\) with Eq. (11)
5: Mix up edge mask \(\mathbf{M}_{a}^{(\text{mix})}\) with Eq. (12)
6: Mix up node features \(\mathbf{X}^{(\text{mix})}=[\mathbf{X}_{a};\mathbf{X}_{b}]\)
7: return \(G^{(\text{mix})}=(\mathbf{X}^{(\text{mix})},\mathbf{A}_{a}^{(\text{mix})}\odot\mathbf{M}_{a}^{(\text{mix})})\)
```
**Algorithm 1** Graph Mix-up Algorithm
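A dense-matrix NumPy sketch of Algorithm 1 follows. We assume `explainer` returns an edge-weight mask shaped like its adjacency input, and we give the \(\eta\) sampled cross-edges unit mask weight so the merged graph stays connected; all names and these conventions are ours:

```
import numpy as np

def mixup_graphs(Xa, Aa, Xb, Ab, explainer, eta=1, seed=0):
    rng = np.random.default_rng(seed)
    Ma = explainer(Xa, Aa)                       # step 1: mask for G_a
    Mb = explainer(Xb, Ab)                       # step 2: mask for G_b
    na, nb = Aa.shape[0], Ab.shape[0]
    A_conn = np.zeros((na, nb))
    for _ in range(eta):                         # step 3: eta random cross edges
        A_conn[rng.integers(na), rng.integers(nb)] = 1.0
    M_conn = A_conn.copy()                       # unit weight on sampled cross edges
    A_mix = np.block([[Aa, A_conn], [A_conn.T, Ab]])       # Eq. (11)
    M_mix = np.block([[Ma, M_conn], [M_conn.T, Ab - Mb]])  # Eq. (12): M_b^Delta = I_b - M_b
    X_mix = np.vstack([Xa, Xb])                  # step 6: stacked node features
    return X_mix, A_mix * M_mix                  # step 7: masked mix-up adjacency
```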
We show that our mix-up approach has the following property:

**Property 3**. \(G^{(\text{mix})}\) _is within the distribution of \(\mathcal{D}_{\text{train}}\)._
Following the previous work, we write a graph as \(G=G^{*}+G^{\Delta}\), where \(G^{*}\) is the sub-graph explanation and \(G^{\Delta}\) is the label-irrelevant graph. A common acknowledgment is that for a graph \(G\) with label \(Y\), the explanation \(G^{*}\) holds the label-preserving information, i.e., the important sub-graph, while \(G^{\Delta}\) holds the remaining information which ensures that connecting it with \(G^{*}\) maintains the distribution of the graph and does not lead to another label. We denote the distributions of the graphs as \(G\sim\mathcal{D}_{\text{train}}=\mathbb{P}_{\mathcal{G}}\), \(G^{*}\sim\mathbb{P}_{\mathcal{G}^{*}}\), and \(G^{\Delta}\sim\mathbb{P}_{\mathcal{G}^{\Delta}}\), where \(\mathcal{D}_{\text{train}}\) is the distribution of the training
Figure 3. Illustration of RegExplainer. \(G\) is the to-be-explained graph, \(G^{+}\) is the randomly sampled positive graph, and \(G^{-}\) is the randomly sampled negative graph. The explanation of the graph is produced by the explainer model. Then graph \(G\) is mixed with \(G^{+}\) and \(G^{-}\) respectively to produce \(G^{(\text{mix})+}\) and \(G^{(\text{mix})-}\). The graphs are fed into the trained GNN model to retrieve the embedding vectors \(E(G^{+})\), \(E(G^{-})\), \(E(G^{(\text{mix})+})\) and \(E(G^{(\text{mix})-})\). We use a contrastive loss to minimize the distance between \(G^{(\text{mix})+}\) and the positive sample and maximize the distance between \(G^{(\text{mix})-}\) and the negative sample. The explainer is trained with the GIB objective and the contrastive loss.
dataset. When we produce \(G^{\text{(mix)}}\), we independently sample a label-irrelevant graph \(G^{\Delta}\) and mix it up with the target explanation \(G^{*}\). So we can write the distribution of \(G^{\text{(mix)}}\) as:
\[G^{\text{(mix)}}\sim\mathbb{P}_{\mathcal{G^{*}}}*\mathbb{P}_{\mathcal{G^{\Delta }}}=\mathbb{P}_{(\mathcal{G^{*}}+\mathcal{G^{\Delta}})}=\mathbb{P}_{\mathcal{G }}=\mathcal{D}_{\text{train}} \tag{13}\]
Thus, we prove that \(G^{\text{(mix)}}\) is within the distribution of \(\mathcal{D}_{\text{train}}\).
### Implementation
#### 3.3.1. Implementation of InfoNCE Loss
After generating the mix-up explanation \(G^{\text{(mix)}}\), we specify the contrastive loss used to further train the parameterized explainer with a triplet of graphs \((G,G^{+},G^{-})\) as the implementation of the InfoNCE loss in Eq. (9). Intuitively, for each target graph \(G\) with label \(Y\) to be explained, we can designate two randomly sampled graphs as the positive graph \(G^{+}\) and the negative instance \(G^{-}\), where \(G^{+}\)'s label \(Y^{+}\) is closer to \(Y\) than \(G^{-}\)'s label \(Y^{-}\), i.e., \(|Y^{+}-Y|<|Y^{-}-Y|\). Therefore, the distance between the distributions of the positive pair \((G,G^{+})\) should be smaller than the distance between the distributions of the negative pair \((G,G^{-})\).
In practice, \(G^{+}\) and \(G^{-}\) are randomly sampled from the graph dataset, after which we calculate their similarity scores with the target graph \(G\). The sample with the higher score becomes the positive sample and the other one the negative sample. Specifically, we use \(\text{sim}(\mathbf{h},\mathbf{h}_{j})=\mathbf{h}^{T}\mathbf{h}_{j}\) to compute the similarity score, where \(G_{j}\) can be \(G^{+}\) or \(G^{-}\). \(\mathbf{h}\) is generated by feeding \(G\) into the GNN model \(f\) and directly retrieving the embedding vector before the dense layers; \(\mathbf{h}^{+}\) and \(\mathbf{h}^{-}\) denote the embedding vectors of \(G^{+}\) and \(G^{-}\), respectively.
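The role assignment takes only a couple of lines (a sketch; `h`, `h_b`, and `h_c` are the embeddings of the target graph and the two sampled neighbors from \(f\)):

```
def pick_pos_neg(h, h_b, h_c):
    # the neighbor more similar to the target (larger dot product) is positive
    return (h_b, h_c) if h @ h_b >= h @ h_c else (h_c, h_b)
```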
Learning through the triplet instances can effectively reinforce the ability of the explainer to learn the explanation in a self-supervised manner. A similar idea applies to the mix-up graphs which we propose to address the distribution shifts in GIB. After we mix up the explanation of the target graph \(G\) with the neighbors \(G^{+}\) and \(G^{-}\) to obtain \(G^{\text{(mix)+}}\) and \(G^{\text{(mix)-}}\) respectively, the distance between \(G^{\text{(mix)+}}\) and \(G^{+}\) should be smaller than the distance between \(G^{\text{(mix)-}}\) and \(G^{-}\). For example, given a graph \(G\) with label \(2.0\), \(G^{+}\) with label \(1.0\), and \(G^{-}\) with label \(100.0\), after we mix the explanation of \(G\), which is \(G^{*}\), with \(G^{+}\) and \(G^{-}\) respectively following Eq. (10), the prediction labels of \(G^{\text{(mix)+}}\) and \(G^{\text{(mix)-}}\) should both be close to \(2.0\) because they contain the label-preserving sub-graph \(G^{*}\); this can be expressed as \(|f(G^{\text{(mix)-}})-f(G^{-})|>|f(G^{\text{(mix)+}})-f(G^{+})|\), where \(f(G)\) represents the prediction label of graph \(G\).
Formally, given a target graph \(G\), the sampled positive graph \(G^{+}\) and negative graph \(G^{-}\), we formulate the contrastive loss in Eq. (9) as the following:
\[\begin{split}\mathcal{L}_{\text{contr}}(G,G^{+},G^{-})& =\log(1+\exp(\mathbf{h}^{T}\mathbf{h}^{-}-\mathbf{h}^{T}\mathbf{h}^{+}))\\ &=-\log\frac{\exp(\mathbf{h}^{T}\mathbf{h}^{+})}{\exp(\mathbf{h}^{T}\mathbf{h}^{ +})+\exp(\mathbf{h}^{T}\mathbf{h}^{-})}\end{split} \tag{14}\]
where the \(\exp(\cdot)\) function is used to instantiate the density ratio function \(f_{k}\), and the denominator is a sum over the ratios of both the positive and negative samples in the triplet.
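A minimal PyTorch sketch of Eq. (14) could read as follows; the cross-entropy form used here is mathematically equivalent to the softplus expression above:

```python
import torch
import torch.nn.functional as F

def contrastive_loss(h, h_pos, h_neg):
    """Triplet InfoNCE loss of Eq. (14). Cross-entropy with the positive
    logit at index 0 equals -log(exp(h^T h+) / (exp(h^T h+) + exp(h^T h-))),
    i.e., the softplus form log(1 + exp(h^T h- - h^T h+))."""
    logits = torch.stack([torch.dot(h, h_pos), torch.dot(h, h_neg)])
    return F.cross_entropy(logits.unsqueeze(0), torch.zeros(1, dtype=torch.long))
```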
#### 3.3.2. Size Constraints
We optimize \(I(G;G^{*})\) in Eq. (9) to constrain the size of the explanation sub-graph \(G^{*}\). The upper bound of \(I(G;G^{*})\) is optimized as an estimate of the KL-divergence between the probabilistic distributions of \(G^{*}\) and \(G\), where the KL-divergence term can be divided into two parts: the entropy loss and the size loss (Koren et al., 2017). In practice, we follow previous work (Koren et al., 2017; Li et al., 2019; Li et al., 2019) to implement them. Specifically,
\[\mathcal{L}_{\text{size}}(G,G^{*})=\gamma\sum_{(i,j)\in\mathcal{E}}(M^{*}_{ij}) -\log\delta(\mathbf{h}\mathbf{h}^{T}). \tag{15}\]
\(\sum\limits_{(i,j)\in\mathcal{E}}(M^{*}_{ij})\) denotes the sum of the weights of the existing edges in the edge weight mask \(\mathbf{M}^{*}\) for the explanation \(G^{*}\); \(\mathbf{h}=f_{\text{bd}}(G^{*})\), i.e., we extract the embedding of the graph \(G^{*}\) before the GNN model \(f\) transforms it into the prediction \(Y^{*}\); and \(\gamma\) is the weight for the size of the masked graph.
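Assuming \(\delta\) denotes the sigmoid function (a common instantiation, not stated explicitly here) and taking \(\gamma=0.01\) as a placeholder weight, Eq. (15) could be sketched as:

```python
import torch

def size_loss(edge_mask, h_star, gamma=0.01):
    """Sketch of Eq. (15): gamma times the sum of edge-mask weights, minus
    log delta(h h^T), with h the pre-dense embedding of G*. Both the sigmoid
    instantiation of delta and gamma = 0.01 are assumptions."""
    return gamma * edge_mask.sum() - torch.log(torch.sigmoid(torch.dot(h_star, h_star)))
```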
#### 3.3.3. Overall Objective Function
In practice, the denominator in Eq. (9) works as a regularization to avoid trivial solutions. Since the label \(Y=f(G)\) is given and independent of the optimization process, we can additionally employ the MSE loss between \(Y^{*}\) and \(Y\), since the InfoNCE loss only estimates the mutual information between the embeddings. Formally, the overall loss function can be implemented as:
\[\mathcal{L}_{\text{GIB}}=\mathcal{L}_{\text{size}}(G,G^{*})-\alpha\mathcal{L}_{\text{contr}}(G,G^{+},G^{-}) \tag{16}\]
\[\mathcal{L}=\mathcal{L}_{\text{GIB}}+\beta\mathcal{L}_{\text{MSE}}(f(G),f(G^{\text{(mix)+}})), \tag{17}\]
where \(G^{\text{(mix)+}}\) denotes the mixture of \(G^{*}\) with the positive sample \(G^{+}\), and \(\alpha\) and \(\beta\) are hyper-parameters.
#### 3.3.4. Detailed Description of Algorithm 2
Algorithm 2 shows the training phase for the explainer model \(E\). For each epoch and each to-be-explained graph, we first randomly sample two neighbors \(G_{b}\) and \(G_{c}\), then we decide the positive sample \(G^{+}\) and negative sample \(G^{-}\) according to the similarity between \((G,G_{b})\) and \((G,G_{c})\). We generate the explanation for the graph and mix \(G\) with \(G^{+}\) and \(G^{-}\) respectively with Algorithm 1. We calculate the contrastive loss for the triplet \((G,G^{+},G^{-})\) with Eq. (14) and the GIB loss, which contains the size loss and the contrastive loss. We also calculate the MSE loss between \(f(G^{\text{(mix)+}})\) and \(f(G)\). The overall loss is the sum of the GIB loss and the MSE loss. We update the trainable parameters of the explainer with the overall loss, as sketched below.
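A compact sketch of one such training step, reusing the loss sketches above; `gnn`, `embed`, `explainer`, and `mixup` are hypothetical callables rather than the authors' released API:

```python
import torch
import torch.nn.functional as F

def train_explainer_step(gnn, embed, explainer, mixup,
                         G, G_b, G_c, optimizer, alpha=1.0, beta=1.0):
    """One step of Algorithm 2. `gnn` maps a graph to its prediction, `embed`
    to its pre-dense embedding, `explainer` returns the explanation with an
    `edge_mask` attribute, and `mixup` implements Algorithm 1."""
    h, h_b, h_c = embed(G), embed(G_b), embed(G_c)
    (G_pos, h_pos), (G_neg, h_neg) = assign_pos_neg(h, h_b, h_c, G_b, G_c)
    G_star = explainer(G)
    G_mix_pos = mixup(G_star, G_pos)
    loss_gib = size_loss(G_star.edge_mask, embed(G_star)) \
        - alpha * contrastive_loss(h, h_pos, h_neg)
    loss = loss_gib + beta * F.mse_loss(gnn(G_mix_pos), gnn(G))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```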
#### 3.3.5. Computational Complexity Analysis
In the implementation, we transform the structure of the graph data from the sparse adjacency matrix representation into the dense edge-list representation. We analyze the computational complexity of our mix-up approach here. According to Algorithm 1, given a graph \(G_{a}\) and a randomly sampled graph \(G_{b}\), assuming \(G_{a}\) contains \(M_{a}\) edges and \(G_{b}\) contains \(M_{b}\) edges, the complexity of the graph extension operation on edge indices and masks, which extends their sizes from \(M_{a}\) and \(M_{b}\) to \(M_{a}+M_{b}\), is \(\mathcal{O}(2(M_{a}+M_{b}))\), where \(M_{a}>0\) and \(M_{b}>0\). To generate \(\eta\) cross-graph edges, the computational complexity is \(\mathcal{O}(\eta)\). For the mix-up operation, the complexity is therefore \(\mathcal{O}(2(M_{a}+M_{b})+\eta)\). Since \(\eta\) is usually a small constant, the time complexity of our mix-up approach is \(\mathcal{O}(2M_{a}+2M_{b})\). We use \(M\) to denote the largest number of edges for a graph in the dataset, so the time complexity of mix-up simplifies to \(\mathcal{O}(M)\).
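A sketch of this edge-list manipulation makes the linear cost visible; tensor shapes are our assumption (each edge list is `(2, M)`):

```python
import torch

def extend_and_mix(edges_a, edges_b, n_a, n_b, eta=5):
    """Sketch of the dense edge-list mix-up. Concatenating the two lists costs
    O(M_a + M_b); sampling eta cross-graph edges costs O(eta), so the whole
    step is O(M_a + M_b) since eta is a small constant."""
    shifted_b = edges_b + n_a                        # make node ids disjoint
    src = torch.randint(0, n_a, (eta,))              # endpoints in G_a
    dst = torch.randint(n_a, n_a + n_b, (eta,))      # endpoints in shifted G_b
    cross = torch.stack([src, dst])
    return torch.cat([edges_a, shifted_b, cross], dim=1)
```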
## 4. Experiments
In this section, we introduce our datasets, experimental settings, and results. Our experiments show that RegExplainer provides consistent and concise explanations of GNN predictions on regression tasks. On three synthetic datasets and the real-life Crippen dataset, we show that RegExplainer accurately identifies the important sub-graphs/motifs that determine the graph label and outperforms alternative baselines by up to 86.3% in explanation accuracy (AUC). We aim to answer the following research questions:
* RQ1: How does the RegExplainer perform compared to other baselines on the four datasets?
* RQ2: How does each component of the proposed approach affect the performance of RegExplainer?
* RQ3: How does the proposed approach perform under different hyper-parameters?
### Datasets and Setups
In this section, we introduce how we formulate our datasets and their specific configurations. (1) _BA-Motif-Volume_: This dataset is based on BA-shapes (Zhu et al., 2017), modified by adding random float values from \([0.00,100.00]\) as node features. We then sum the node values on the motif as the regression label of the whole graph, which means the GNNs should recognize the "house" motif and then sum its node features to make the prediction. (2) _BA-Motif-Counting_: Different from BA-Motif-Volume, where node features are summed, in this dataset we attach varying numbers of motifs to the base BA random graph and pad all graphs to equal size. The number of motifs is the regression label. Padding graphs to the same size prevents the GNNs from making trivial predictions based on the total number of nodes. (3) _Triangles_: We follow the previous work (Beng et al., 2019) to construct this dataset. The dataset is a set of 5000 Erdos-Renyi random graphs denoted as \(ER(m,p)\), where \(m=30\) is the number of nodes in each graph and \(p=0.2\) is the probability for an edge to exist. The size of 5000 was chosen to match the previous work. The regression label for this dataset is the number of triangles in a graph, and GNNs are trained to count the triangles. (4) _Crippen_: The Crippen dataset is a real-life dataset that was initially used to evaluate the graph regression task. The dataset has 1127 graphs reported in the Delaney solubility dataset.
(2) \(\text{RegE}^{-\text{contr}}\): We directly remove the contrastive loss term but still maintain the mix-up processing and the MSE loss. (3) \(\text{RegE}^{-\text{mse}}\): We directly remove the MSE loss term from the objective function.
Additionally, we set all variants with the same configurations as the original RegExplainer, including the learning rate, training epochs, and hyper-parameters \(\eta\), \(\alpha\), and \(\beta\). We trained them on all four datasets and report the results in Figure 5. We observed that the proposed RegExplainer outperforms its variants on all datasets, which indicates that each component is necessary and that their combination is effective.
### Hyper-parameter Sensitivity Study (RQ3)
In this section, we investigate the hyper-parameters of our approach, \(\alpha\) and \(\beta\), across all four datasets. The hyper-parameter \(\alpha\) controls the weight of the contrastive loss in the GIB objective, while \(\beta\) controls the weight of the MSE loss. We determined the optimal values of \(\alpha\) and \(\beta\) by tuning each within the \([0.001,1000]\) range, using roughly 3\(\times\) increments per step (e.g., 0.1, 0.3, 1, 3, ...). When we tune one hyper-parameter, the other is fixed at 1.0. The experimental results can be found in Figure 6. Our findings indicate that our approach, RegExplainer, is stable and
\begin{table}
\begin{tabular}{c|c c c c} \hline \hline Dataset & BA-Motif-Volume & BA-Motif-Counting & Triangles & Crippen \\ \hline Original Graph \(G\) & & & & \\ Explanation \(G^{*}\) & & & & \\ Node Feature & Random Float Vector & Fixed Ones Vector & Fixed Ones Vector & One-hop Vector \\ Regression Label & Sum of Motif Value & Count Motifs & Count Triangles & Chemical Property Value \\ Explanation Type & Fix Size Sub-Graph & Dynamic Size Sub-graph & Dynamic Size Sub-graph & Dynamic Size Sub-graph \\ \hline Explanation AUC & & & & \\ GRAD & \(0.418\pm 0.000\) & \(0.516\pm 0.000\) & \(0.479\pm 0.000\) & \(0.426\pm 0.000\) \\ ATT & \(0.512\pm 0.005\) & \(0.517\pm 0.003\) & \(0.441\pm 0.004\) & \(0.502\pm 0.006\) \\ GNNExplainer & \(0.501\pm 0.009\) & \(0.496\pm 0.003\) & \(0.500\pm 0.002\) & \(0.497\pm 0.005\) \\ PGExplainer & \(0.470\pm 0.057\) & \(0.000\pm 1.56\) & \(0.511\pm 0.028\) & \(0.448\pm 0.005\) \\ \hline
**RegExplainer** & \(0.758\pm 0.177\) & \(0.963\pm 0.011\) & \(0.739\pm 0.008\) & \(0.553\pm 0.013\) \\ \hline \hline \end{tabular}
\end{table}
Table 1. Illustration of the graph regression datasets together with explanation faithfulness in terms of AUC-ROC on edges, for RegExplainer and the baselines, under four datasets. The original graph row visualizes the structure of the complete graph, and the explanation row highlights the explanation sub-graph of the corresponding original graph. In the Crippen dataset, different node colors represent different kinds of atoms, and the node feature is a one-hot vector encoding the atom type.
Figure 4. Visualization of the distribution shifting problem on four graph regression datasets. The points represent regression values: the blue points denote the label \(y\), the red points denote the prediction \(f(G)\) of the original graph, and the green points denote the prediction \(f(G^{*})\) of the explanations on the four datasets. The x-axis is the index of the graph, sorted by the value of the label \(Y\).
robust when using different hyper-parameter settings, as evidenced by consistent performance across the tested range.
### Study the Decision Boundary and the Distributions Numerically
In this section, we visualize the regression values of the graphs and calculate the prediction shifting distance for each dataset and analyze their correlations to the distance of the decision boundaries. We put our results into Figure 7 and Table 2.
We observed that in Figure 7 the red points are distributed around the blue points while the green points are shifted away, which indicates that the explanation sub-graph cannot help the GNN make correct predictions. As shown in Table 2, we calculate the RMSE between \(f(G)\) and \(Y\), \(f(G^{*})\) and \(Y\), and \(f(G)\) and \(f(G^{*})\), respectively, where \(f(G)\) is the prediction of the original graph, \(f(G^{*})\) is the prediction of the explanation sub-graph, and \(Y\) is the regression label. We observe that \(f(G^{*})\) shows a significant prediction shift from both \(f(G)\) and \(Y\), indicating that the mutual information calculated with the original GIB objective in Eq. (2) would be biased.
We further explore the relationship between the prediction shift and the label value on the BA-Motif-Volume dataset, which represents the semantic decision boundary. In Figure 7, each point represents a graph instance, where \(Y\) denotes the ground-truth label and \(\Delta\) denotes the absolute value difference. It is clear that both \(\Delta(f(G^{*}),Y)\) and \(\Delta(f(G),f(G^{*}))\) are strongly correlated with \(Y\) with statistical significance, indicating that the prediction shifting problem is related to the continuous, ordered decision boundary present in regression tasks.
## 5. Related Works
**GNN explainability:** The explanation methods for GNN models could be categorized into two types based on their granularity: instance-level (Song et al., 2016; Wang et al., 2017; Wang et al., 2018; Wang et al., 2019) and model-level (Wang et al., 2019), where the former methods explain the prediction for each instance by identifying important sub-graphs, and the latter method aims to understand the global decision rules captured by the GNN. These methods
\begin{table}
\begin{tabular}{c|c c c} \hline \hline Dataset & \((f(G),Y)\) & \((f(G^{*}),Y)\) & \((f(G),f(G^{*}))\) \\ \hline BA-Motif-Volume & 131.42 & 1432.07 & 1427.07 \\ BA-Motif-Counting & 0.11 & 14.30 & 14.28 \\ Triangles & 5.28 & 12.38 & 12.40 \\ Crippen & 1.13 & 1.54 & 1.17 \\ \hline \hline \end{tabular}
\end{table}
Table 2. Prediction shifting study: RMSE of \((f(G),Y)\), \((f(G^{*}),Y)\), and \((f(G),f(G^{*}))\), respectively.
Figure 5. Ablation study of RegExplainer. We evaluate the performance of the original RegExplainer and its variants that exclude the mix-up approach, the contrastive loss, or the MSE loss, respectively. The x-axis indicates the dataset, the bars represent the AUC score, and the black solid line shows the standard deviation.
Figure 6. Hyper-parameters study of \(\alpha\) and \(\beta\) on four datasets with RegExplainer. In both figures, the x-axis is the value of different hyper-parameter settings and the y-axis is the value of the average AUC score over ten runs with different random seeds.
could also be classified into two categories based on their methodology: self-explainable GNNs (Bahdan et al., 2019; Chen et al., 2020) and post-hoc explanation methods (Zhu et al., 2019; Zhang et al., 2020; Zhang et al., 2020), where the former provide both predictions and explanations, while the latter use an additional model or strategy to explain the target GNN. Additionally, CGE (Hua et al., 2019) (cooperative explanation) generates the sub-graph explanation together with the corresponding sub-network via cooperative learning. However, it has to treat the GNN model as a white box, which is usually unavailable in post-hoc explanation settings.
Existing methods haven't explored the explanation of the graph regression task and haven't considered two important challenges: the distribution shifting problem and the limitation of the GIB objective, which were addressed by our work.
**Mix-up approach:** We faced the challenge of the distribution shifting problem and adopted the mix-up (Zhu et al., 2019) approach in our work. Mix-up is a data augmentation technique that increases the diversity of the training data and improves the generalization performance of the model. Many related technologies exist, including GraphMix (Wang et al., 2019), MixupGraph (Zhu et al., 2019), G-Mixup (Zhu et al., 2019), and ifMixup (Zhu et al., 2019). However, existing methods cannot address the distribution shifting problem in graph regression tasks or improve the explainability of GNNs, due to their inability to generate graphs within the original distribution, which highlights the need for a new mix-up method. Thus, we develop a new mix-up approach in our work.
## 6. Conclusion
We addressed the challenges in the explainability of graph regression tasks and proposed RegExplainer, a novel method for explaining the predictions of GNNs with a post-hoc explanation sub-graph on the graph regression task, without requiring modification or re-training of the underlying GNN architecture. We showed how RegExplainer addresses the distribution shifting problem and leverages knowledge from the continuous decision boundary through the mix-up approach and the adapted GIB objective with contrastive loss, problems that seriously affect the performance of other explainers. We formulated four new datasets (BA-Motif-Volume, BA-Motif-Counting, Triangles, and Crippen) for evaluating explainers on the graph regression task; they are developed from previous datasets and follow similar settings, and could also benefit future studies on XAIG-R.
|
2307.13821 | Fitting Auditory Filterbanks with Multiresolution Neural Networks | Waveform-based deep learning faces a dilemma between nonparametric and
parametric approaches. On one hand, convolutional neural networks (convnets)
may approximate any linear time-invariant system; yet, in practice, their
frequency responses become more irregular as their receptive fields grow. On
the other hand, a parametric model such as LEAF is guaranteed to yield Gabor
filters, hence an optimal time-frequency localization; yet, this strong
inductive bias comes at the detriment of representational capacity. In this
paper, we aim to overcome this dilemma by introducing a neural audio model,
named multiresolution neural network (MuReNN). The key idea behind MuReNN is to
train separate convolutional operators over the octave subbands of a discrete
wavelet transform (DWT). Since the scale of DWT atoms grows exponentially
between octaves, the receptive fields of the subsequent learnable convolutions
in MuReNN are dilated accordingly. For a given real-world dataset, we fit the
magnitude response of MuReNN to that of a well-established auditory filterbank:
Gammatone for speech, CQT for music, and third-octave for urban sounds,
respectively. This is a form of knowledge distillation (KD), in which the
filterbank ''teacher'' is engineered by domain knowledge while the neural
network ''student'' is optimized from data. We compare MuReNN to the state of
the art in terms of goodness of fit after KD on a hold-out set and in terms of
Heisenberg time-frequency localization. Compared to convnets and Gabor
convolutions, we find that MuReNN reaches state-of-the-art performance on all
three optimization problems. | Vincent Lostanlen, Daniel Haider, Han Han, Mathieu Lagrange, Peter Balazs, Martin Ehler | 2023-07-25T21:20:12Z | http://arxiv.org/abs/2307.13821v1 | # Fitting Auditory Filterbanks with Multiresolution Neural Networks
###### Abstract
Waveform-based deep learning faces a dilemma between nonparametric and parametric approaches. On one hand, convolutional neural networks (convnets) may approximate any linear time-invariant system; yet, in practice, their frequency responses become more irregular as their receptive fields grow. On the other hand, a parametric model such as LEAF is guaranteed to yield Gabor filters, hence an optimal time-frequency localization; yet, this strong inductive bias comes at the detriment of representational capacity. In this paper, we aim to overcome this dilemma by introducing a neural audio model, named multiresolution neural network (MuReNN). The key idea behind MuReNN is to train separate convolutional operators over the octave subbands of a discrete wavelet transform (DWT). Since the scale of DWT atoms grows exponentially between octaves, the receptive fields of the subsequent learnable convolutions in MuReNN are dilated accordingly. For a given real-world dataset, we fit the magnitude response of MuReNN to that of a well-established auditory filterbank: Gammatone for speech, CQT for music, and third-octave for urban sounds, respectively. This is a form of knowledge distillation (KD), in which the filterbank "teacher" is engineered by domain knowledge while the neural network "student" is optimized from data. We compare MuReNN to the state of the art in terms of goodness of fit after KD on a hold-out set and in terms of Heisenberg time-frequency localization. Compared to convnets and Gabor convolutions, we find that MuReNN reaches state-of-the-art performance on all three optimization problems.
Vincent Lostanlen\({}^{1}\), Daniel Haider\({}^{2,3}\), Han Han\({}^{1}\), Mathieu Lagrange\({}^{1}\), Peter Balazs\({}^{2}\), and Martin Ehler\({}^{3}\)\({}^{1}\) Nantes Universite, Ecole Centrale Nantes, CNRS, LS2N, UMR 6004, F-44000 Nantes, France.
\({}^{2}\) Acoustics Research Institute, Austrian Academy of Sciences, A-1040 Vienna, Austria.
\({}^{3}\) University of Vienna, Department of Mathematics, A-1090 Vienna, Austria.
Convolutional neural network, digital filters, filterbanks, multiresolution analysis, psychoacoustics.
## 1 Introduction
Auditory filterbanks are time-invariant systems whose design takes inspiration from domain-specific knowledge in hearing science [1]. For example, the critical bands of the human cochlea inspire frequency scales such as mel, bark, and ERB [2]. The phenomenon of temporal masking calls for asymmetric impulse responses, motivating the design of Gammatone filters [3]. Lastly, the constant-\(Q\) transform (CQT), in which the number of filters per octave is fixed, reflects the principle of octave equivalence in music [4].
In recent years, the growing interest for deep learning in signal processing has proposed to learn filterbanks from data rather than design them a priori [5]. Such a replacement of feature engineering by feature learning is motivated by the diverse application scope of audio content analysis: i.e., conservation biology [6], urban science [7], industry [8], and healthcare [9]. Since these applications differ greatly in terms of acoustical content, the domain knowledge which prevails in speech and music processing is likely to yield suboptimal performance. Instead, gradient-based optimization has the potential to reflect the spectrotemporal characteristics of the data at hand.
Enabling this potential is particularly important in applications where psychoacoustic knowledge is lacking; e.g., animals outside of the mammalian taxon [10, 11]. Beyond its perspectives in applied science, the study of learnable filterbanks has value for fundamental research on machine listening with AI. This is because it represents the last stage of progress towards general-purpose "end-to-end" learning, from the raw audio waveform to the latent space of interest.
Yet, success stories in waveform-based deep learning for audio classification have been, up to date, surprisingly few--and even fewer beyond the realm of speech and music [12]. The core hypothesis of our paper is that this shortcoming is due to an inadequate choice of neural network architecture. Specifically, we identify a dilemma between nonparametric and parametric approaches, where the former are represented by convolutional neural networks (convnets) and the latter by architectures used in SincNet [13] or LEAF [14]. In theory, convnets may approximate any finite impulse response (FIR), given a receptive field that is wide enough; but in practice, gradient-based optimization on nonconvex objectives yields suboptimal solutions [12]. On the other hand, the parametric approaches enforce good time-frequency localization, yet at the cost of imposing a rigid shape for the learned filters: cardinal sine (inverse-square envelope) for SincNet and Gabor (Gaussian envelope) for LEAF.
Our goal is to overcome this dilemma by developing a neural audio model which is capable of learning temporal envelopes from data while guaranteeing near-optimal time-frequency localization. In doing so, we aim to bypass the explicit incorporation of psychoacoustic knowledge as much as possible. This is unlike state-of-the-art convnets for filterbank learning such as SincNet or LEAF, whose parametric kernels are initialized according to a mel-frequency scale. Arguably, such careful initialization procedures defeat the purpose of deep learning; i.e., to spare the human effort of feature engineering.
Figure 1: Graphical outline of the proposed method. We train a neural network “student” \(\mathbf{\Phi_{\text{W}}}\) to regress the squared magnitudes \(\mathbf{Y}\) of an auditory filterbank “teacher” \(\mathbf{\Lambda}\) in terms of spectrogram-based cosine distance \(\mathcal{L}_{\mathbf{\kappa}}\), on average over a dataset of natural sounds \(\mathbf{x}\).
Furthermore, it contrasts with other domains of deep learning (e.g., image processing) in which all convnet layers are simply initialized with i.i.d. Gaussian weights [15].
Prior work on this problem has focused on advancing the state of the art on a given task, sometimes to no avail [16]. In this article, we take a step back and formulate a different question: before we try to outperform an auditory filterbank, can we replicate its responses with a neural audio model? To answer this question, we compare different "student" models in terms of their ability to learn from a black-box function or "teacher" by knowledge distillation (KD).
Given an auditory filterbank \(\mathbf{\Lambda}\) and a discrete-time signal \(\mathbf{x}\) of length \(T\), let us denote the squared magnitude of the filter response at frequency bin \(f\) by \(\mathbf{Y}[f,t]=|\mathbf{\Lambda}\mathbf{x}|^{2}[f,2^{J}t]\), where \(2^{J}\) is the chosen hop size or "stride". Then, given a model \(\mathbf{\Phi}_{\mathbf{W}}\) with weights \(\mathbf{W}\), we evaluate the dissimilarity between teacher \(\mathbf{\Lambda}\) and student \(\mathbf{\Phi}_{\mathbf{W}}\) as their spectrogram-based cosine distance \(\mathcal{L}_{\mathbf{x}}(\mathbf{W})\). This distance can be computed as the squared \(L^{2}\) distance after normalizing across frequency bins \(f\), independently for each time \(t\). Let \(\left|\widetilde{\mathbf{\Phi}}_{\mathbf{W}}\mathbf{x}\right|^{2}\) and \(\widetilde{\mathbf{Y}}\) denote these normalized versions of student and teacher; then
\[\mathcal{L}_{\mathbf{x}}(\mathbf{W})=\mathrm{cosdist}\big{(}|\mathbf{\Phi}_{\mathbf{W}}\mathbf{x}|^{2},\mathbf{Y}\big{)}=\frac{1}{2}\sum_{t=1}^{T/2^{J}}\sum_{f=1}^{F}\big{|}|\widetilde{\mathbf{\Phi}}_{\mathbf{W}}\mathbf{x}|^{2}[f,t]-\widetilde{\mathbf{Y}}[f,t]\big{|}^{2}, \tag{1}\]
where \(F\) is the number of filters. We seek to minimize the quantity above by gradient-based optimization on \(\mathbf{W}\), on a real-world dataset of audio signals \(\{\mathbf{x}_{1}\dots\mathbf{x}_{\mathbf{N}}\}\), and with no prior knowledge on \(\mathbf{\Lambda}\).
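As a sketch, this objective can be implemented as follows, assuming unbatched \(F\times T'\) squared-magnitude spectrograms:

```python
import torch

def cosine_distance_loss(student_sgram, teacher_sgram, eps=1e-8):
    """Spectrogram-based cosine distance of Eq. (1). Both arguments are
    squared-magnitude spectrograms of shape (F, T'); each time frame is
    L2-normalized across frequency before taking half the squared distance."""
    s = student_sgram / (student_sgram.norm(dim=0, keepdim=True) + eps)
    y = teacher_sgram / (teacher_sgram.norm(dim=0, keepdim=True) + eps)
    return 0.5 * ((s - y) ** 2).sum()
```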
## 2 Neural Audio Models
### Learnable time-domain filterbanks (Conv1D)
As a baseline, we train a 1-D convnet \(\mathbf{\Phi}_{\mathbf{W}}\) with \(F\) kernels of the same length \(2L\). With a constant stride of \(2^{J}\), \(\mathbf{\Phi}_{\mathbf{W}}\mathbf{x}\) writes as
\[\mathbf{\Phi}_{\mathbf{W}}\mathbf{x}[f,t]=(\mathbf{x}*\mathbf{\phi}_{\mathbf{f}})[2^{J}t] =\sum_{\tau=-L}^{L-1}\mathbf{x}\big{[}2^{J}t-\tau\big{]}\mathbf{\phi}_{\mathbf{f}}[\tau], \tag{2}\]
where \(\mathbf{x}\) is padded by \(L\) samples at both ends. Under this setting, the trainable weights \(\mathbf{W}\) are the finite impulse responses of \(\mathbf{\phi}_{\mathbf{f}}\) for all \(f\), thus amounting to \(2LF\) parameters. We initialize \(\mathbf{W}\) with Gaussian i.i.d. entries of zero mean and variance \(1/\sqrt{F}\).
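A minimal PyTorch sketch of this baseline, with illustrative values for \(F\), \(L\), and \(J\) (not the paper's exact settings), could be:

```python
import torch

F_filters, L, J = 64, 512, 8    # illustrative values, not the paper's settings

# One real-valued convolutional layer with stride 2^J, as in Eq. (2)
conv1d = torch.nn.Conv1d(in_channels=1, out_channels=F_filters,
                         kernel_size=2 * L, stride=2 ** J, padding=L, bias=False)
# i.i.d. Gaussian initialization with zero mean and variance 1/sqrt(F)
torch.nn.init.normal_(conv1d.weight, mean=0.0, std=F_filters ** -0.25)

x = torch.randn(1, 1, 2 ** 12)          # a 4096-sample waveform
sgram = conv1d(x).abs() ** 2            # squared magnitudes |Phi_W x|^2
```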
### Gabor 1-D convolutions (Gabor1D)
As a representative of the state of the art (i.e., LEAF [14]), we train a Gabor filtering layer or Gabor1D for short. For this purpose, we parametrize each FIR filter \(\mathbf{\phi}_{\mathbf{f}}\) as Gabor filter; i.e., an exponential sine wave of amplitude \(a_{f}\) and frequency \(\eta_{f}\) which is modulated by a Gaussian envelope of width \(\sigma_{f}\). Hence a new definition:
\[\mathbf{\phi}_{\mathbf{f}}[\tau]=\frac{a_{f}}{\sqrt{2\pi}\sigma_{f}}\exp\left(-\frac {\tau^{2}}{2\sigma_{f}^{2}}\right)\exp(2\pi\mathrm{i}\eta_{f}\tau). \tag{3}\]
Under this setting, the trainable weights \(\mathbf{W}\) amount to only \(3F\) parameters: \(\mathbf{W}=\{a_{1},\sigma_{1},\eta_{1},\dots,a_{F},\sigma_{F},\eta_{F}\}\). Following LEAF, we initialize center frequencies \(\eta_{f}\) and bandwidths \(\sigma_{f}\) so as to form a mel-frequency filterbank [17] and set amplitudes \(a_{f}\) to one. We use the implementation of Gabor1D from SpeechBrain v0.5.14 [18].
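A sketch of this kernel construction in PyTorch, with \(\eta_f\) expressed in cycles per sample, could be:

```python
import math
import torch

def gabor_kernel(a, sigma, eta, L):
    """Complex Gabor filter of Eq. (3) on the support [-L, L): a Gaussian
    envelope of width sigma and amplitude a modulating a complex exponential
    of center frequency eta (in cycles per sample)."""
    tau = torch.arange(-L, L, dtype=torch.float32)
    envelope = (a / (math.sqrt(2 * math.pi) * sigma)) * torch.exp(-tau ** 2 / (2 * sigma ** 2))
    return envelope * torch.exp(2j * math.pi * eta * tau)
```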
### Multiresolution neural network (MuReNN)
As our original contribution, we train a multiresolution neural network, or MuReNN for short. MuReNN comprises two stages, multiresolution approximation (MRA) and convnet; of which only the latter is learned from data. We implement the MRA with a dual-tree complex wavelet transform (DTCWT) [19]. The DTCWT relies on a multirate filterbank in which each wavelet \(\mathbf{\psi}_{j}\) has a null average and a bandwidth of one octave. Denoting by \(\xi\) the sampling rate of \(\mathbf{x}\), the wavelet \(\mathbf{\psi}_{\mathbf{j}}\) has a bandwidth with cutoff frequencies \(2^{-(j+1)}\pi\) and \(2^{-j}\pi\). Hence, we may subsample the result of the convolution \((\mathbf{x}*\mathbf{\psi}_{\mathbf{j}})\) by a factor of \(2^{j}\), yielding:
\[\forall j\in\{0,\dots,J-1\},\ \mathbf{x}_{\mathbf{j}}[t]=(\mathbf{x}*\mathbf{\psi}_{\mathbf{j}})[2^{j}t], \tag{4}\]
where \(J\) is the number of multiresolution levels. We take \(J=9\) in this paper, which roughly coincides with the number of octaves in the hearing range of humans. The second stage in MuReNN consists in defining convnet filters \(\mathbf{\phi}_{\mathbf{f}}\). Unlike in the Conv1D setting, those filters do not operate over the full-resolution input \(\mathbf{x}\) but over one of its MRA levels \(\mathbf{x}_{\mathbf{j}}\). More precisely, let us denote by \(j[f]\) the decomposition level assigned to filter \(f\), and by \(2L_{j}\) the kernel size for that decomposition level. We convolve \(\mathbf{x}_{j[f]}\) with \(\mathbf{\phi}_{\mathbf{f}}\) and apply a subsampling factor of \(2^{J-j[f]}\), hence:
\[\mathbf{\Phi}_{\mathbf{W}}\mathbf{x}[f,t] =(\mathbf{x}_{\mathbf{j}[\mathbf{f}]}*\mathbf{\phi}_{\mathbf{f}})[2^{J-j[f]}t]\] \[=\sum_{\tau=-L_{j}}^{L_{j}-1}\mathbf{x}_{\mathbf{j}[\mathbf{f}]}[2^{J-j[f]}t- \tau]\mathbf{\phi}_{\mathbf{f}}[\tau] \tag{5}\]
The two stages of subsampling in Equations 4 and 5 result in a uniform downsampling factor of \(2^{J}\) for \(\mathbf{\Phi}_{\mathbf{W}}\mathbf{x}\). Each learned FIR filter \(\mathbf{\phi}_{\mathbf{f}}\) has an effective receptive field size of \(2^{j[f]+1}L_{j[f]}\), thanks to the subsampling operation in Equation 4. This resembles a dilated convolution [20] with a dilation factor of \(2^{j[f]}\), except that the DTCWT guarantees the absence of aliasing artifacts.
Besides this gain in frugality, as measured by parameter count per unit of time, the resort to an MRA offers the opportunity to introduce desirable mathematical properties in the non-learned part of the transform (namely, \(\mathbf{\psi}_{\mathbf{j}}\)) and have the MuReNN operator \(\mathbf{\Phi}_{\mathbf{W}}\) inherit them, without need for a non-random initialization nor regularization during training. In particular, \(\mathbf{\Phi}_{\mathbf{W}}\) has at least as many vanishing moments as \(\mathbf{\psi}_{\mathbf{j}}\). Furthermore, the DTCWT yields quasi-analytic coefficients: for each \(j\), \(\mathbf{x}_{j}=\mathbf{x}_{j}^{\mathrm{R}}+\mathrm{i}\,\mathbf{x}_{j}^{\mathrm{I}}\) with \(\mathbf{x}_{j}^{\mathrm{I}}\approx\mathcal{H}\left(\mathbf{x}_{j}^{\mathrm{R}}\right)\), where the superscript \(\mathrm{R}\) (resp. \(\mathrm{I}\)) denotes the real part (resp. imaginary part) and \(\mathcal{H}\) denotes the Hilbert transform. Since \(\mathbf{\phi}_{\mathbf{f}}\) is real-valued, the same property holds for MuReNN: \(\mathbf{\Phi}^{\mathrm{I}}\mathbf{x}=\mathcal{H}(\mathbf{\Phi}^{\mathrm{R}}\mathbf{x})\).
We implement MuReNN on GPU via a custom implementation of DTCWT in PyTorch1. Following [19], we use a biorthogonal wavelet for \(j=0\) and quarter-shift wavelets for \(j\geq 1\). We set \(L_{j}=8M_{j}\) where \(M_{j}\) is the number of filters \(f\) at resolution \(j\). We refer to [21] for an introduction to deep learning in the wavelet domain, with applications to image classification.
Footnote 1: [https://github.com/kymatio/murenn](https://github.com/kymatio/murenn)
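To summarize Section 2.3, here is a minimal sketch of the two-stage architecture; `dtcwt_forward` is assumed to be a callable returning the \(J\) real-valued octave subbands as `(batch, 1, T / 2^j)` tensors, which is a simplification of the actual complex-valued DTCWT output:

```python
import torch

class MuReNNSketch(torch.nn.Module):
    """Minimal sketch of Eqs. (4)-(5): M[j] filters of length 2*L_j = 16*M[j]
    are learned per octave, with stride 2^(J-j) so that every output has a
    uniform downsampling factor of 2^J."""

    def __init__(self, dtcwt_forward, M, J=9):
        super().__init__()
        self.dtcwt = dtcwt_forward
        self.convs = torch.nn.ModuleList([
            torch.nn.Conv1d(1, M[j], kernel_size=16 * M[j],
                            stride=2 ** (J - j), padding=8 * M[j], bias=False)
            for j in range(J)])

    def forward(self, x):
        subbands = self.dtcwt(x)                     # Eq. (4): x_j per octave
        outs = [conv(xj) for conv, xj in zip(self.convs, subbands)]  # Eq. (5)
        return torch.cat(outs, dim=1)                # stack filters across octaves
```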
* A constant-\(Q\) filterbank with \(Q=8\) filters per octave, covering eight octaves with Hann-modulated sine waves.
* A filterbank with 4-th order Gammatone filters tuned to the ERB scale, a frequency scale which is adapted to the equivalent rectangular bandwidths of the human cochlea [22]. In psychoacoustics, Gammatone filters provide a good approximation to measured responses of the filters of the human basilar membrane [3]. Unlike Gabor filters, Gammatone filters are asymmetric, both in the time domain and frequency domain. We refer to [23] for implementation details.
* A variable-\(Q\) transform (VQT) with \(M_{j}=12\) frequency bins per octave at every level. The VQT is a variant of the constant-\(Q\) transform (CQT) in which \(Q\) is decreased gradually towards lower frequencies [24], hence an improved temporal resolution at the expense of frequency resolution.
* A third-octave filterbank inspired by the ANSI S1.11-2004 standard for environmental noise monitoring [25]. In this filterbank, center frequencies are not exactly in a geometric progression. Rather, they are aligned with integer Hertz values: 40, 50, 60; 80, 100, 120; 160, 200, 240; and so forth.
We construct the Synth teacher via nnAudio [26], a PyTorch port of librosa [27]; and Speech, Music, and Urban using the Large Time-Frequency Analysis Toolbox (LTFAT) for MATLAB [28].
### Gradient-based optimization
For all four "student" models, we initialize the vector \(\mathbf{W}\) at random and update it iteratively by empirical risk minimization over the training set. We rely on the Adam algorithm for stochastic optimization with default momentum parameters. Given the definition of spectrogram-based cosine distance in Equation 1, we perform reverse-mode automatic differentiation in PyTorch to obtain
\[\boldsymbol{\nabla}\mathcal{L}_{\boldsymbol{x}}(\mathbf{W})[i]= \sum_{f=1}^{F}\sum_{t=1}^{T/2^{J}}\frac{\partial|\widetilde{\boldsymbol{ \Phi}}_{\mathbf{W}}\boldsymbol{x}|^{2}[f,t]}{\partial\mathbf{W}[i]}(\mathbf{W})\] \[\times\big{(}|\widetilde{\boldsymbol{\Phi}}_{\mathbf{W}} \boldsymbol{x}|^{2}[f,t]-\widetilde{\mathbf{Y}}[f,t]\big{)} \tag{6}\]
for each entry \(\mathbf{W}[i]\). Note that the gradient above does not involve the phases of the teacher filterbank \(\mathbf{\Lambda}\), only its normalized magnitude response \(\widetilde{\mathbf{Y}}\) given the input \(\boldsymbol{x}\). Consequently, even though our models \(\boldsymbol{\Phi}_{\mathbf{W}}\) contain a single linear layer, the associated knowledge distillation procedure is nonconvex, and thus resembles the training of a deep neural network.
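Since the loss is differentiable, knowledge distillation reduces to a standard optimization loop; a sketch reusing the names introduced above, where `dataloader` is a placeholder yielding (waveform, teacher spectrogram) pairs:

```python
import torch

# One knowledge-distillation run: only the student receives gradients, while
# the teacher spectrograms Y are fixed targets.
optimizer = torch.optim.Adam(student.parameters())   # default momentum parameters
for x, Y in dataloader:
    sgram = student(x).abs() ** 2                     # |Phi_W x|^2
    loss = cosine_distance_loss(sgram.squeeze(0), Y)
    optimizer.zero_grad()
    loss.backward()                                   # autodiff realizes Eq. (6)
    optimizer.step()
```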
## 4 Results and discussion
### Datasets
* As a proof of concept, we construct sine waves in a geometric progression over the frequency range of the target filterbank.
* The North Texas vowel database (NTVOW) [29] contains utterances of 12 English vowels from 50 American speakers, including children aged three to seven as well as male and female adults. In total, it consists of 3190 recordings, each lasting between one and three seconds.
* The TinySOL dataset [30] contains isolated musical notes played by eight instruments: accordion, alto saxophone, bassoon, flute, harp, trumpet in C, and cello. For each of these instruments, we take all available pitches in the tessitura (min = \(B_{0}\), median = \(E_{4}\), max = \(C\sharp_{8}\)) in three levels of intensity dynamics: _pp_, _mf_, and _ff_. This results in a total of 1212 audio recordings.
* The SONYC Urban Sound Tagging dataset (SONYC-UST) [31] contains 2803 acoustic scenes from a network of autonomous sensors in New York City. Each of these ten-second scenes contains one or several sources of urban noise pollution, such as: engines, machinery and non-machinery impacts, powered saws, alert signals, and dog barks.
\begin{table}
\begin{tabular}{l l l||c c c} Domain & Dataset & Teacher & Conv1D & Gabor1D & MuReNN \\ \hline Speech & NTVOW & Gammatone & \(2.12\pm 0.05\) & \(10.14\pm 0.09\) & \(\mathbf{2.00\pm 0.02}\) \\ Music & TinySOL & VQT & \(8.76\pm 0.2\) & \(16.87\pm 0.06\) & \(\mathbf{5.28\pm 0.03}\) \\ Urban & SONYC-UST & ANSI S1.11 & \(3.26\pm 0.1\) & \(13.51\pm 0.2\) & \(\mathbf{2.57\pm 0.2}\) \\ Synth & Sine waves & CQT & \(11.54\pm 0.5\) & \(22.26\pm 0.9\) & \(\mathbf{9.75\pm 0.4}\) \\ \end{tabular}
\end{table}
Table 1: Mean and standard deviation of test loss after knowledge distillation over five independent trials. Each column corresponds to a different neural audio model \(\boldsymbol{\Phi}_{\mathbf{W}}\) while each row corresponds to a different auditory filterbank and audio domain. See Section 4.2 for details.
Figure 2: Left to right: evolution of validation losses on different domains with Conv1D (green), Gabor1D (blue), and MuReNN (orange), as a function of training epochs. The shaded area denotes the standard deviation across five independent trials. See Section 4.2 for details.
### Benchmarks
For each audio domain, we randomly split its corresponding dataset into training, testing, and validation subsets with an 8:1:1 ratio. During training, we select \(2^{12}\) time samples from the middle part of each signal, i.e., a duration equal to the FIR length of the filters in the teacher filterbank. We train each model for 100 epochs with an epoch size of 8000.
Table 1 summarizes our findings. On all three benchmarks, we observe that MuReNN reaches state-of-the-art performance, as measured in terms of cosine distance with respect to the teacher filterbank after 100 epochs. The improvement with respect to Conv1D is most noticeable in the Synth benchmark and least noticeable in the Speech benchmark. Furthermore, Figure 2 indicates that Gabor1D barely trains at all: this observation is consistent with the sensitivity of LEAF with respect to initialization, as reported in [32]. We also notice that MuReNN trains faster than Conv1D on all benchmarks except for Urban, a phenomenon deserving further inquiry.
### Error analysis
The mel-scale initialization of Gabor1D filters and the inductive bias of MuReNN enabled by octave localization give a starting advantage when learning filterbanks on logarithmic frequency scales, as used for the Gammatone and VQT filterbanks. Expectedly, this advantage is absent with a teacher filterbank whose center frequencies do not follow a geometric progression, as is the case for the ANSI scale. Figure 2 reflects these observations.
To examine the individual filters of each model, we take the speech domain as an example and inspect the learned impulse responses. Figure 3 visualizes chosen examples at different frequencies learned by each model, together with the corresponding teacher Gammatone filters. In general, all models are able to fit the filter responses well. However, it is noticeable that the prescribed envelope of Gabor1D impedes it from learning the asymmetric target Gammatone filters, which becomes prominent especially at high frequencies. From the strong envelope mismatches at coinciding frequencies, we may deduce that center frequencies and bandwidths did not play well together during training. On the contrary, MuReNN and Conv1D are flexible enough to learn asymmetric temporal envelopes without compromising their regularity in time. Although the learned filters of Conv1D are capable of fitting the frequencies well, they suffer from noisy artifacts, especially outside their essential supports. Indeed, by limiting the scale and support of the learned filters, MuReNN restrains the high-frequency noise that a longer learned filter could otherwise introduce. The phase misalignment at low frequencies is a natural consequence of the fact that the gradients are computed from the magnitudes of the filterbank responses.
Finally, we measure the time-frequency localization of all filters by computing the associated Heisenberg time-frequency ratios [33]. From theory we know that Gaussian windows are optimal in this sense [34]. Therefore, it is not surprising that Gabor1D yields the best localized filters, even outperforming the teacher, see Figure 4. Expectedly, the localization of the filters from Conv1D is poor and appears independent of the teacher. MuReNN roughly resembles the localization of the teachers but has some poorly localized outliers in higher frequencies, deserving further inquiry.
## 5 Conclusion
Multiresolution neural networks (MuReNN) have the potential to advance waveform-based deep learning. They offer a flexible and data-driven procedure for learning filters which are "wavelet-like": i.e., narrowband with compact support, vanishing moments, and quasi-Hilbert analyticity. Those experiments based on knowledge distillation from three domains (speech, music, and urban sounds) illustrate the suitability of MuReNN for real-world applications. The main limitation of MuReNN lies in the need to specify a number of filters per octave \(M_{j}\), together with a kernel size \(L_{j}\). Still, a promising finding of our paper is that prior knowledge on \(M_{j}\) and \(L_{j}\) suffices to finely approximate non-Gabor auditory filterbanks, such as Gammatones on an ERB scale, from a random i.i.d. Gaussian initialization. Future work will evaluate MuReNN in conjunction with a deep neural network for sample-efficient audio classification.
## 6 Acknowledgment
V.L. thanks Fergal Cotter and Nick Kingsbury for maintaining the dtcwt and pytorch_wavelets libraries; LS2N and OAW staff for arranging research visits; and Neil Zeghidour for helpful discussions. D.H. thanks Clara Holloney for helping with the implementation of the filterbanks. V.L. and M.L. are supported by ANR MuReNN; D.H., by a DOC Fellowship of the Austrian Academy of Sciences (A 26355); P.B., by FWF projects LoFT (P 34624) and NoMASP (P 34922); and M.E., by WWTF project CHARMED (VRG12-009).
Figure 4: Distribution of Heisenberg time–frequency ratios for each teacher–student pair (lower is better). See Section 4.3 for details.
Figure 3: Compared impulse responses of Conv1D (left), Gabor1D (center), and MuReNN (right) with different center frequencies after convergence, with a Gammatone filterbank as target. Solid blue (resp. dashed red) lines denote the real part of the impulse responses of the learned filters (resp. target). See Section 4.3 for details. |
2302.13741 | Hulk: Graph Neural Networks for Optimizing Regionally Distributed
Computing Systems | Large deep learning models have shown great potential for delivering
exceptional results in various applications. However, the training process can
be incredibly challenging due to the models' vast parameter sizes, often
consisting of hundreds of billions of parameters. Common distributed training
methods, such as data parallelism, tensor parallelism, and pipeline
parallelism, demand significant data communication throughout the process,
leading to prolonged wait times for some machines in physically distant
distributed systems. To address this issue, we propose a novel solution called
Hulk, which utilizes a modified graph neural network to optimize distributed
computing systems. Hulk not only optimizes data communication efficiency
between different countries or even different regions within the same city, but
also provides optimal distributed deployment of models in parallel. For
example, it can place certain layers on a machine in a specific region or pass
specific parameters of a model to a machine in a particular location. By using
Hulk in experiments, we were able to improve the time efficiency of training
large deep learning models on distributed systems by more than 20\%. Our open
source collection of unlabeled data:https://github.com/DLYuanGod/Hulk. | Zhengqing Yuan, Huiwen Xue, Chao Zhang, Yongming Liu | 2023-02-27T13:06:58Z | http://arxiv.org/abs/2302.13741v2 | # Hulk: Graph Neural Networks for Optimizing Regionally Distributed Computing Systems
###### Abstract
Large deep learning models have shown great potential for delivering exceptional results in various applications. However, the training process can be incredibly challenging due to the models' vast parameter sizes, often consisting of hundreds of billions of parameters. Common distributed training methods, such as data parallelism, tensor parallelism, and pipeline parallelism, demand significant data communication throughout the process, leading to prolonged wait times for some machines in physically distant distributed systems. To address this issue, we propose a novel solution called **Hulk**, which utilizes a modified graph neural network to optimize distributed computing systems. Hulk not only optimizes data communication efficiency between different countries or even different regions within the same city, but also provides optimal distributed deployment of models in parallel. For example, it can place certain layers on a machine in a specific region or pass specific parameters of a model to a machine in a particular location. By using Hulk in experiments, we were able to improve the time efficiency of training large deep learning models on distributed systems by more than 20%. Our open source collection of unlabeled data:[https://github.com/DLYuanGod/Hulk](https://github.com/DLYuanGod/Hulk).
Keywords:optimize communication efficiency, distributed training, parallel deployment, time efficiency
## 1 Introduction
In recent years, there has been a trend of scaling up deep learning models, resulting in a more robust performance in specific domains. For instance, in the field of natural language processing, large-scale text data has been used to train deep learning models such as GPT-3 (175B) [2], T5 (11B) [19], and Megatron-LM (8.3B) [22], which have demonstrated impressive performance. However, training these models can be quite challenging. To solve the challenges posed
by large-scale deep learning models, optimization of distributed computing is crucial.
Model parallelism (MP) is a technique used to solve the problem of a model being too large to fit into the memory of a single GPU or TPU by distributing the model across multiple GPUs or TPUs. However, this approach may introduce communication challenges between GPUs or TPUs during training. On the other hand, data parallelism (DP) can improve time utilization by addressing the batch-size issue during training, but it cannot resolve the problem of a model being too large for a single GPU or TPU's memory capacity.
While DP and MP have been effective in mitigating communication-volume issues in recent years, as in large-minibatch SGD [9], Megatron-LM [22], Gpipe [12], and Pathway [1], the challenge of scheduling distributed training across machines in different regions remains unsolved. If a model like GPT-3 with hundreds of billions of parameters exceeds the memory capacity of GPUs in the current region during training, it becomes necessary to schedule machines from other regions to complete the training. This poses several challenges:
* Communication latency can be very high when training is distributed across machines in different regions.
* How can tasks be effectively allocated to different machines, such as assigning specific machines to maintain certain layers of the model's parameters (e.g., Machine 0 is responsible for Layer X) or designating machines to process specific data (e.g., Machine 2 handles Data Set Y)?
* How can we address the issue of disaster recovery in training, such as handling scenarios where a machine fails during the process?
* If you need to train not only a single task but also multiple tasks simultaneously, such as training both a GPT-3 and a GPT-2 model, how can you provide for these tasks?
To elaborate on the first point, we collected all communication logs between the three machines and the eight servers over a three-month period. Our statistics reveal the communication time per 64 bytes, as presented in Table 1. As the table shows, the communication latency between certain nodes is high or even infeasible. Without optimization, the communication-time problem is difficult to solve in such a distributed system.
\begin{table}
\begin{tabular}{l r r r r r r r r} \hline
**Regions** & \multicolumn{8}{c}{**Communication time to send 64 bytes (ms)**} \\ \hline & California & Tokyo & Berlin & London & New Delhi & Paris & Rome & Brasilia \\ Beijing, China & 89.1 & 74.3 & 250.5 & 229.8 & 341.9 & - & 296.0 & 341.8 \\ Nanjing, China & 97.9 & 173.8 & 213.7 & 176.7 & 236.3 & 265.1 & 741.3 & 351.3 \\ California, USA & 1 & 118.8 & 144.8 & 132.3 & 197.0 & 133.9 & 158.6 & 158.6 \\ \hline \end{tabular}
\end{table}
Table 1: We measured the time it takes for our machines in three different regions to send and receive 10 words, using eight servers, and calculated the average.
### Contributions
Graph data structures have been widely adopted since their introduction, as they can effectively represent interconnected structures such as social networks and knowledge graphs. Considering the tremendous success of graph neural networks [7, 14, 26] in recent years, we aim to bring this capability to real-world industrial systems. The representational power of graphs makes it easier to model the optimization problems described in this paper. Our design choices were influenced by the types of workloads observed in actual systems. Hulk has the following features:
**Efficient Inter-node Communication:** Our system minimizes the impact of communication latency between machines, ensuring that each machine is assigned the appropriate task.
**Global Optimality:** Our model is built upon graph convolutional networks (GCNs) [14, 25] to extract features from the entire graph, enabling the selection of a globally optimal solution.
**Disaster Recovery:** Since GCNs are utilized to assign tasks to different machines in the system, it is evident which tasks each machine is responsible for. Furthermore, in the event of a machine failure, the system can quickly recover the entire computation.
**Scalability:** If a particular machine or machines are no longer needed, the corresponding edge information can simply be removed from the graph structure.
The novelty of the proposed system lies in the utilization of graph neural networks for optimizing machine learning systems. Relying on the neural network's output values together with simple scheduling algorithms, the scheduling problem of the entire system can be solved efficiently.
### Engineering Challenges
Although graph neural networks are capable of addressing tasks such as node classification [14, 23, 24], link prediction [29, 15, 21], and graph classification [14, 28], none of these standard tasks can be directly applied to our system. Constructing a suitable loss function is a crucial problem that cannot be overlooked. Representing the optimization features, such as computation time and communication time, in the graph data structure also poses challenges that need to be addressed.
## 2 Background
This section provides a brief introduction to machine learning systems and graph neural networks.
### Machine Learning Systems
This subsection provides a brief overview of the evolution of machine learning systems.
#### 2.1.1 Data Parallelism
DP [5] is a commonly used technique in distributed training of deep neural networks, where the data is partitioned and distributed to different machines for computation. Each machine calculates the loss and gradients for its assigned data and sends these gradients to a parameter server, which aggregates them and updates the model parameters. This method enables multiple machines to process large datasets in parallel, resulting in faster training.
#### 2.1.2 Parameter Server
The parameter server is a distributed deep learning training method proposed by Mu Li et al. [16] that addresses the communication bottleneck problem in training large-scale deep learning models. It achieves this by placing the gradient aggregation and parameter updating process on the server side, and the computational nodes only need to send the locally computed gradient information to the server. This approach reduces communication overhead and improves training efficiency.
#### 2.1.3 Megatron-LM
Megatron-LM [22] combines model parallelism and data parallelism by dividing the model parameters into multiple parts, each trained on a different GPU. This allows for larger models to be used as each GPU only needs to focus on computing a part of the model using model parallelism. Data parallelism is used to assign different batches to different GPUs for processing, which improves training efficiency.
The training objective of Megatron-LM is to minimize the negative log-likelihood of the target sequence given the input sequence, which is expressed as:
\[L(\theta)=-\sum_{t=1}^{T}\log P(y_{t}|y_{<t},x;\theta)\]
where \(T\) is the length of the sequence, \(y_{t}\) is the target token at time step \(t\), \(y_{<t}\) are the tokens before time step \(t\), \(x\) is the input sequence, and \(\theta\) represents the model parameters.
#### 2.1.4 Gpipe
In Gpipe [12], the model is split into sub-models, each assigned to a different GPU. Micro-batches are passed along the pipeline to carry data and gradients between GPUs, enabling pipeline parallelism [4]. The training process in Gpipe can be expressed by the following equation:
\[\Delta W_{i,j}=\eta\sum_{k=1}^{K}(\nabla_{W_{i,j}}L(f^{i,j}(x_{k}^{i,j}),y_{k}^ {i,j})+\sum_{l=j+1}^{M}\nabla_{W_{i,l}}L(f^{i,l}(x_{k}^{i,l}),y_{k}^{i,l}))\]
where \(W_{i,j}\) denotes the weight parameter of the \(j\)th layer of the \(i\)th submodel, \(\Delta W_{i,j}\) denotes the corresponding parameter update, \(\eta\) denotes the learning rate, \(K\) denotes the number of Micro-batches, \(f^{i,j}\) denotes the forward propagation function of the \(j\)th layer of the \(i\)th submodel, \(x_{k}^{i,j}\) denotes the \(k\)th Micro-batch of the \(j\)th layer in the \(i\)th sub-model, \(y_{k}^{i,j}\) denotes the label of the \(k\)th Micro-batch.
### Graph Neural Networks
Graph Neural Networks (GNNs) [20, 31, 30, 3, 11] are a type of neural network designed to work on graph-structured data, where nodes represent entities and edges represent relationships between them. They have become popular in recent years due to their ability to capture complex relationships and patterns in data, making them useful for tasks such as node classification, link prediction, and graph classification.
### Graph Convolutional Networks
Graph Convolutional Networks (GCNs) [14] are a type of deep learning model designed to work on graph-structured data. They use convolutional operations to aggregate information from neighboring nodes and update node representations. The key formulas for GCNs include the graph convolution operation, which calculates the node representation updates, and the graph pooling operation, which aggregates information across multiple nodes.
\[\mathbf{v}^{(l+1)}=\sigma\left(\sum_{u\in\mathcal{N}(v)}\frac{1}{c_{u,v}}W^{( l)}\mathbf{u}^{(l)}\right) \tag{1}\]
where \(\mathbf{v}^{(l)}\) represents the feature representation of node \(v\) at layer \(l\), \(\mathcal{N}(v)\) denotes the set of neighbors of node \(v\), \(W^{(l)}\) is the weight matrix at layer \(l\), \(\sigma\) is the activation function, and \(c_{u,v}\) is a normalization factor that depends on the number of neighbors of node \(u\) and \(v\). This formula is used to iteratively compute the feature representations of nodes in a graph using neighborhood information.
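A dense-matrix sketch of this propagation rule could be written as:

```python
import torch

def gcn_layer(A, H, W, activation=torch.relu):
    """One GCN propagation step in matrix form (cf. Eq. (1)): features of
    neighboring nodes are aggregated with symmetric degree normalization
    (the 1/c_{u,v} factor), linearly transformed by W, and passed through
    the nonlinearity sigma."""
    deg = A.sum(dim=1).clamp(min=1)          # avoid division by zero
    d_inv_sqrt = deg.pow(-0.5)
    A_norm = d_inv_sqrt.unsqueeze(1) * A * d_inv_sqrt.unsqueeze(0)
    return activation(A_norm @ H @ W)
```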
## 3 Data Representation
To better address the issues raised in Section 1, it is important to select an appropriate data structure to represent the system parameters. We adopt a graph-based data structure, with each node (denoted as \(v\)) representing a machine in a different region. Each node has unique features that include its geographic location, computational capacity, and GPU memory. The edges (denoted as \(e\)) between nodes denote the possibility of communication between the two connected machines, with the weight of each edge representing the time in milliseconds required to transmit each 64-byte message.
As depicted in Figure 1, we randomly selected eight machines to construct a graph, where the edge weight represents the communication time, and the node features are embedded in the corresponding vector space.
For example, node 0 can be represented as \(v_{0}=\{^{\prime}Beijing^{\prime},8.6,152\}\). Then we embed the node information using the following formula:
\[\mathbf{v}^{(0)}=\mathbf{x}_{v} \tag{2}\]
where \(\mathbf{v}^{(0)}\) denotes the initial feature vector of node \(v\) and \(\mathbf{x}_{v}\) denotes the input feature vector of node \(v\).
We represent the node-to-node edges by an adjacency matrix, in which the weight of an edge equals the communication time between the two corresponding nodes. The entries for unconnected node pairs are set to 0, and the diagonal entries of this matrix are all 0. Similarly, we then perform the edge information embedding with the following equation:
\[e_{vu}=g\left(\mathbf{e}_{vu},\mathbf{u},\mathbf{v},\boldsymbol{\Theta}_{e}\right) \tag{3}\]
where \(e_{vu}\) denotes the edge feature between node \(v\) and node \(u\), \(\mathbf{e}_{vu}\) is the feature vector of edge \(vu\), \(\mathbf{u}\) and \(\mathbf{v}\) are the feature vectors of node \(u\) and node \(v\), respectively, \(g\) is a learnable function and \(\boldsymbol{\Theta}_{e}\) is its argument. We then sparsely label this subgraph to enable the neural network to learn the contents of the graph in a supervised manner.
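As an illustration, a hypothetical encoding of such a graph could look as follows; every numeric value below is a placeholder in the spirit of Figure 1, not measured data:

```python
import torch

# Node features embed (region id, compute capability, total GPU memory);
# the weighted adjacency stores the per-64-byte communication time in ms,
# with 0 for unconnected pairs and on the diagonal.
region_id = torch.arange(8, dtype=torch.float32)
compute = torch.tensor([8.6, 15.7, 9.7, 8.6, 15.7, 9.7, 8.6, 15.7])
memory = torch.tensor([152., 320., 24., 152., 320., 24., 152., 320.])
X = torch.stack([region_id, compute, memory], dim=1)   # v^(0) = x_v, Eq. (2)

A = torch.zeros(8, 8)
A[0, 1] = A[1, 0] = 89.1   # e.g., a Beijing <-> California latency as in Table 1
```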
## 4 Methods
Figure 1: The graph topology is visualized on the left, while the characteristics of each node are indicated on the right. Computing power is determined based on Nvidia's official website, and memory refers to the total memory across all GPUs on each machine.

The typical tasks of graph neural networks, such as node classification, do not utilize edge information and only leverage the graph topology. In real-world cases, however, the information carried by edges, such as edge weights and directed edges, is often crucial. To incorporate edge information into nodes, we aim to perform edge pooling, which involves aggregating or pooling the edges of neighboring nodes at each node to create a unified node representation that contains edge information. This is expressed in the following equation:
\[\mathbf{v}^{(l+1)}=\sigma\left(\sum_{u\in\mathcal{N}(v)}f(\mathbf{v}^{(l)}, \mathbf{u}^{(l)},e_{vu})\right) \tag{4}\]
where \(\mathbf{v}^{(l+1)}\) represents the feature vector of node \(v\) in layer \(l+1\), \(\sigma\) is the activation function, \(\mathcal{N}(v)\) denotes the set of neighboring nodes of node \(v\), \(\mathbf{u}^{(l)}\) represents the feature vector of node \(u\) in layer \(l\), and \(f\) is a learnable function used to merge features of nodes and edges into new features of node \(v\).
As depicted in Figure 2, this is the first layer of the constructed network structure (\(l=0\)), which enables nodes to encode edge information.
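A minimal PyTorch sketch of the edge pooling in Equation 4 is given below. Taking \(f\) to be a single linear layer over the concatenation \([\mathbf{v},\mathbf{u},e_{vu}]\) is our assumption, since the paper only requires \(f\) to be learnable, and the dense double loop is written for clarity rather than efficiency.

```python
import torch
import torch.nn as nn

class EdgePooling(nn.Module):
    """Aggregate neighbour and edge features into each node (Eq. 4)."""
    def __init__(self, node_dim, edge_dim, out_dim):
        super().__init__()
        self.f = nn.Linear(2 * node_dim + edge_dim, out_dim)

    def forward(self, X, A):
        # X: (n, node_dim) node features; A: (n, n) edge weights, 0 = no edge
        n = X.shape[0]
        out = torch.zeros(n, self.f.out_features)
        for v in range(n):
            msgs = [self.f(torch.cat([X[v], X[u], A[v, u].reshape(1)]))
                    for u in range(n) if A[v, u] != 0]
            if msgs:
                out[v] = torch.relu(torch.stack(msgs).sum(dim=0))
        return out

# toy usage: 3 machines, 2-dim node features, scalar edge weights (latency)
A = torch.tensor([[0., 210., 0.], [210., 0., 95.], [0., 95., 0.]])
X = torch.randn(3, 2)
H0 = EdgePooling(node_dim=2, edge_dim=1, out_dim=8)(X, A)   # layer l = 0
```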
Figure 2: The edge pooling operation applied to the graph of Figure 1, where \(U\) represents the information of the whole graph and \(f\) is the corresponding linear layer.

Figure 3: The transformed graph data are fed into the GCNs for forward propagation.
After the edge features are embedded into the node features, we can use the resulting transformed graph as input for a standard node classification task and train it using a graph convolutional network or a graph attention network, following Equation 1. To build \(N\)-layer GCNs on top of the edge pooling layer, the subsequent layers correspond to \(l=2,3,4,\cdots,N+1\).

As shown in Figure 3, \(Y\) represents the category of the classification, i.e., which tasks are appropriate for each node.
Then we calculate its loss using the cross-entropy loss function [8]:
\[\mathcal{L}=-\sum_{i=1}^{|\mathcal{Y}|}Y_{i}\log\hat{Y}_{i} \tag{5}\]
Here, \(\mathcal{Y}\) denotes the set of labelled nodes, \(Y_{i}\) denotes the true label of node \(i\), and \(\hat{Y}_{i}\) denotes the predicted probability of that label. Back-propagation is then performed to update the network parameters.
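The supervised training step can be sketched as follows, reusing the `EdgePooling` module defined above. The linear classification head, the labelled node indices, and the label values are illustrative placeholders; only the learning rate (0.01) and the 10-step schedule match the values reported in Figure 4.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyGNN(nn.Module):
    """Placeholder classifier: edge pooling (above) plus a linear head."""
    def __init__(self, node_dim, num_classes):
        super().__init__()
        self.pool = EdgePooling(node_dim, edge_dim=1, out_dim=8)
        self.head = nn.Linear(8, num_classes)

    def forward(self, X, A):
        return self.head(self.pool(X, A))

model = TinyGNN(node_dim=2, num_classes=2)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
labelled = torch.tensor([0, 1])   # indices of the sparsely labelled nodes
y_true = torch.tensor([1, 0])     # their task categories
X = torch.randn(3, 2)
A = torch.tensor([[0., 210., 0.], [210., 0., 95.], [0., 95., 0.]])

for step in range(10):
    optimizer.zero_grad()
    logits = model(X, A)                              # (n, num_classes)
    loss = F.cross_entropy(logits[labelled], y_true)  # Eq. 5 on labelled nodes
    loss.backward()
    optimizer.step()
```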
As depicted in Figure 4, we observed that the accuracy peaked at 99% during the sixth training step.
## 5 Structure
In this section, we build our system on the GCNs trained in Section 4 and solve the problem presented in Section 1.
### Efficiency
Figure 4: Loss and accuracy curves for 10 steps of training on these data. The GCNs have 188k parameters and the learning rate is 0.01.

We now have two tasks to perform. The first involves training the BERT-large model [6], while the second involves training the GPT-2 model [18]. As the largest GPT-2 model (1.5B parameters) is significantly larger than BERT-large (340M parameters), it is important to allocate tasks to each machine in a sensible manner. The ratio of the number of parameters in GPT-2's largest model (1.5B) to BERT-large (340M) is approximately 4.4:1. Based on this information, we instruct the graph neural network to split the machines into classes according to this ratio and to optimize the communication time within each class. We also need to consider the memory and computing power characteristics of each machine.
```
Require: Graph data \(G_{1}\); trained graph neural network \(F\); number of tasks \(N\); minimum memory threshold \(M_{n}\) for each task
Ensure: Task assignments for each subgraph
1: \(C\gets 0\)
2: if \(G_{1}\) does not meet the requirements of all tasks then
3:   Abort the algorithm and report an error.
4: end if
5: for \(i\) in range(1, \(N\)) do
6:   \(G_{i},G_{i+1}\gets F(G_{i})\)
7:   Assign the smaller graph \(G_{i}\) to a task with the appropriate minimum memory threshold \(M_{n}\)
8:   if \(G_{i}\) does not meet the requirements of any task then
9:     \(C\gets i\) and continue
10:    if \(C\geq 1\) then
11:      \(G_{i}\gets G_{i}+G_{C}\)
12:      Assign the smaller graph \(G_{i}\) to a task with the appropriate minimum memory threshold \(M_{n}\)
13:      \(C\gets 0\)
14:    end if
15:  end if
16:  if \(G_{i+1}\) does not meet the requirements of any task then
17:    Break, provide a prompt, and wait for other tasks to complete before proceeding with training.
18:  end if
19: end for
```
**Algorithm 1** Task Assignments
We use Algorithm 1 to schedule multiple tasks; it can also be used to judge whether an assignment is suitable when there is only one task. Since the computational power, memory, and communication efficiency features are already integrated into the node information, we only need to determine whether a candidate assignment is appropriate.
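A loose Python rendering of Algorithm 1 is sketched below, treating subgraphs as node sets. Here `F` (the trained splitting network) and `meets_requirements` are hypothetical stand-ins for the components described above, not the authors' implementation.

```python
def assign_tasks(G, F, tasks):
    """Greedy task assignment in the spirit of Algorithm 1 (node-set view)."""
    assignments, carry = {}, set()
    for task in tasks:
        sub, G = F(G)                 # split off a candidate subgraph
        sub |= carry                  # merge back a previously unfit piece
        carry = set()
        if meets_requirements(sub, task):
            assignments[task] = sub
        else:
            carry = sub               # retry after merging with the next split
        if not any(meets_requirements(G, t) for t in tasks):
            break                     # wait for running tasks, then resume
    return assignments
```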
Figure 5 demonstrates that the basic graph neural network is capable of carrying out classification tasks effectively and emulating human thought processes.
### Scalability
If we need to add one or more machines to this system, we can simply define their \(\{\text{City},\text{ComputeCapability},\text{Memory}\}\) attributes and connect them, via weighted edges, to the existing nodes that can communicate with them.

As shown in Figure 6, the machine with id 45, \(\{\text{'Rome'},7,384\}\) in the dataset, was added to the Hulk system, and the system still works correctly.
## 6 Experimentation and Evaluation
In this section, we test the Hulk system on multiple real-world industrial deep learning tasks, using 46 high-performance GPU servers.
### Experimental Setting
We have a total of 46 servers distributed across different countries and regions, with a combined total of 368 GPUs of various models such as NVIDIA A100, NVIDIA A40, NVIDIA V100, RTX A5000, GeForce GTX 1080Ti, GeForce RTX 3090, and NVIDIA TITAN Xp. We calculated the average of 10 communications between these machines over a 3-month period. Due to network policy restrictions in different countries, certain machines are unable to communicate with each other. We adopt the parameter settings provided in each model's original paper for the training process.

Figure 5: The data in Figure 1 are grouped using Algorithm 1. The left panel is the training group of GPT-2 and the right panel is the BERT-large training group.

Figure 6: The machine with id 45 joins the system and assignments are made.
### Data Building
We use the networkx [10] library to build our graph-structured data and visualize it, as shown in Figure 7. Additionally, we read the adjacency matrix of these data and construct the corresponding feature embedding representation.
### Task Assignment
The four tasks we aim to train in this system are OPT (175B) [13], T5 (11B), GPT-2 (1.5B), and BERT-large (350M).
We need to classify all nodes into four distinct classes based on their characteristics and then deploy distributed algorithms tailored to each class.
\begin{table}
\begin{tabular}{c c} \hline
**Model** & **Nodes** \\ \hline
OPT (175B) & 0, 1, 2, 3, 4, 20, 21, 22, 23, 24, 27, 28, 29, 30, 31 \\
T5 & 5, 6, 7, 8, 9, 10, 11, 12, 13, 14 \\
GPT-2 & 15, 16, 17, 18, 19, 25, 26, 32, 33, 34 \\
BERT-large & 35, 36, 37, 38 \\ \hline
\end{tabular}
\end{table}
Table 2: Model Node Allocation

Figure 7: The graph-structured data constructed from the 46 servers.
As presented in Table 2, we feed the graph data into the graph neural network, which was trained in Section 4 and employs Algorithm 1, to derive node classification information. To handle the nodes in each class with different computational performance and memory, we utilize Gpipe to train the model in parallel. Depending on the computational power and memory of each node, we determine which part of the model it will handle.
### Evaluation
To validate the performance of the Hulk system, we have chosen three commonly used distributed computing algorithms for evaluation.
**System A.** It utilizes all available machines for training, discarding any machine that does not have sufficient memory to accommodate the entire model. It uses data parallelism to distribute the batch size across multiple machines, thereby enabling simultaneous training of the model on each machine.

**System B.** It utilizes Gpipe for pipeline parallelism, assigning certain layers of the model to a particular machine until the entire model is distributed across all machines.

**System C.** It employs tensor parallelism with Megatron-LM across the entire system, requiring all machines to be utilized for model training.

**Result.** As shown in Figure 8, the Hulk system can greatly reduce communication time and thus the overall training time. This illustrates that Hulk is effective in dividing the nodes among the models for training.
If we need to train 6 models, the parameters of each model are as shown in Figure 9; among them, RoBERTa [17] has 355M parameters and XLNet [27] has 340M.
Figure 8: Communication time and computation time for the four models on the four systems.
**Result.** As illustrated in Figure 10, when the system needs to handle multiple tasks, the gap in communication time becomes even more apparent, and our Hulk system is able to reduce it effectively. (Because the GPT-3 (175B) model is not open source, we use OPT (175B), which has an equivalent number of parameters, instead.)
## 7 Conclusion
In this article, we introduce our novel solution, Hulk, which optimizes regionally distributed computer systems by tackling the challenges of scheduling distributed training across machines in different regions. Our real-world industrial solution, Hulk, utilizes graph neural networks with powerful representation capabilities to enhance communication efficiency between GPUs or TPUs across different countries or regions during training. With its efficient communication, global availability, fast recovery, and excellent scalability, Hulk stands out as a powerful tool for optimizing regionally distributed computer systems. The results demonstrate a significant increase in the efficiency of distributed training, crucial for the success of large-scale deep learning models. Overall, the use of Hulk can streamline the model deployment process and benefit researchers and practitioners seeking to optimize communication efficiency.

Figure 9: Language model parameters.

Figure 10: Communication time and computation time for the six models on the four systems.
## Acknowledgement
The authors gratefully acknowledge the support of the AIMTEEL 202201 Open Fund for Intelligent Mining Technology and Equipment Engineering Laboratory in Anhui Province and the Anhui Provincial Department of Education Scientific Research Key Project (Grant No. 2022AH050995). The financial assistance provided by these projects was instrumental in carrying out the research presented in this paper. We would like to thank all the members of the laboratory for their valuable support and assistance. Without their help, this research would not have been possible. Finally, we would like to express our gratitude to the Anhui Polytechnic University for providing the necessary facilities and resources for this study.
|
2304.01222 | NeuroDAVIS: A neural network model for data visualization | The task of dimensionality reduction and visualization of high-dimensional
datasets has long remained a challenging problem. Modern high-throughput
technologies produce newer high-dimensional datasets having multiple views with
relatively new data types. Visualization of these datasets requires proper
methodology that can uncover hidden patterns in the data without affecting the
local and global structures within the data. To this end, however, very few
such methodologies exist which can realise this task. In this work, we have
introduced a novel unsupervised deep neural network model, called NeuroDAVIS,
for data visualization. NeuroDAVIS is capable of extracting important features
from the data, without assuming any data distribution, and visualize
effectively in lower dimension. It has been shown theoretically that
neighbourhood relationship of the data in high dimension remains preserved in
lower dimension. The performance of NeuroDAVIS has been evaluated on a wide
variety of synthetic and real high-dimensional datasets including numeric,
textual, image and biological data. NeuroDAVIS has been highly competitive
against both t-Distributed Stochastic Neighbor Embedding (t-SNE) and Uniform
Manifold Approximation and Projection (UMAP) with respect to visualization
quality, and preservation of data size, shape, and both local and global
structure. It has outperformed Fast interpolation-based t-SNE (Fit-SNE), a
variant of t-SNE, for most of the high-dimensional datasets as well. For the
biological datasets, besides t-SNE, UMAP and Fit-SNE, NeuroDAVIS has also
performed well compared to other state-of-the-art algorithms, like Potential of
Heat-diffusion for Affinity-based Trajectory Embedding (PHATE) and the siamese
neural network-based method, called IVIS. Downstream classification and
clustering analyses have also revealed favourable results for
NeuroDAVIS-generated embeddings. | Chayan Maitra, Dibyendu B. Seal, Rajat K. De | 2023-04-01T21:20:34Z | http://arxiv.org/abs/2304.01222v1 | # NeuroDAVIS: A neural network model for data visualization
###### Abstract
The task of dimensionality reduction and visualization of high-dimensional datasets has long remained a challenging problem. Modern high-throughput technologies produce newer high-dimensional datasets having multiple views with relatively new data types. Visualization of these datasets requires proper methodology that can uncover hidden patterns in the data without affecting the local and global structures, and bring out the inherent non-linearity, within the data. To this end, however, very few such methodologies exist which can realise this task. In this work, we have introduced a novel unsupervised deep neural network model, called NeuroDAVIS, for data visualization. NeuroDAVIS is capable of extracting important features from the data, without assuming any data distribution, and visualizing them effectively in lower dimension. It has been shown theoretically that the neighbourhood relationships of the data in high dimension remain preserved in lower dimension. The performance of NeuroDAVIS has been evaluated on a wide variety of synthetic and real high-dimensional datasets including numeric, textual, image and biological data. NeuroDAVIS has been highly competitive against both t-Distributed Stochastic Neighbor Embedding (t-SNE) and Uniform Manifold Approximation and Projection (UMAP) with respect to visualization quality, and preservation of data size, shape, and both local and global structure. It has outperformed Fast interpolation-based t-SNE (Fit-SNE), a variant of t-SNE, for most of the high-dimensional datasets as well. For the biological datasets, besides t-SNE, UMAP and Fit-SNE, NeuroDAVIS has also performed well compared to other state-of-the-art algorithms, like Potential of Heat-diffusion for Affinity-based Trajectory Embedding (PHATE) and the siamese neural network-based method, called IVIS. Downstream classification and clustering analyses have also revealed favourable results for NeuroDAVIS-generated embeddings.
Deep learning, Unsupervised learning, Shape preservation, Global structure preservation, Single-cell transcriptomics
## 1 Introduction
Machine learning-based analyses of large real-world datasets are underpinned by dimensionality reduction (DR) methods, which form the basis for preprocessing and visualization of these datasets. Some commonly used DR techniques include Principal Component Analysis (PCA) [1], Independent Component Analysis (ICA) [2], Multi-dimensional Scaling (MDS) [3], Isomap [4], NMF [5], SVD [6], t-Distributed Stochastic Neighbor Embedding (t-SNE) [7] and Uniform Manifold Approximation and Projection (UMAP) [8]. PCA projects the data into a newer space spanned by the vectors representing the maximum variance, while ICA extracts subcomponents from a multivariate signal. MDS is a DR and visualization technique that can capture dissimilarities within structures in the data. Isomap is a non-linear DR method that combines the advantages of several methods. Both NMF and SVD are methods for matrix factorization that have significant usage in topic modeling and signal processing. t-SNE and UMAP, though
being techniques for DR, are mostly suited for visualization tasks. All the above methods fall into the category of unsupervised DR. Linear Discriminant Analysis (LDA) [9], on the contrary, is a supervised method used for DR and pattern recognition.
Any DR technique should strive to preserve both the local and the global structure of the data. However, we are more accustomed to seeing methods prefer one over the other. PCA or MDS tend to preserve pairwise distances among all observations, while t-SNE, Isomap and UMAP tend to preserve local distances over global distances. There is limited study on local and global shape preservation for the other DR techniques mentioned above.
Non-linear projection algorithms like t-SNE, however, have an edge over linear algorithms like PCA in extracting complex latent structures within the data [10]. Nevertheless, there are severe downsides to t-SNE. t-SNE is highly sensitive to noise, and even randomly distributed points may be transformed into spurious clusters [11]. Moreover, t-SNE is known for preserving local distances at the expense of global distances [12, 13], which hinders drawing realistic conclusions from t-SNE visualizations. Fast interpolation-based t-SNE (Fit-SNE) [14] is a variant of t-SNE, which provides accelerated performance on large datasets. However, tuning its parameters to obtain an optimal embedding requires considerable expertise. UMAP, on the other hand, uses a manifold learning technique to reduce data dimension and scales well to large datasets. However, UMAP assumes that there exists a manifold structure within the data, which is not always realistic. It also gives precedence to local relationships over global relationships [8], like t-SNE. Potential of Heat-diffusion for Affinity-based Trajectory Embedding (PHATE) [15] is another manifold learning-based method which has found significant usage in the field of biology in recent days. However, the effectiveness of PHATE on other real-world datasets is yet to be assessed.
Neural network (NN)-based methods have also been in use as non-linear DR tools in almost every field, more so during the last decade [16, 17, 18]. Unsupervised NNs are trained to learn a non-linear function, while features extracted from an intermediate hidden layer with relatively low cardinality serve as a low-dimensional representation of the data. The earliest usage of NN methods for projection can be traced back to the Self-Organizing Map (SOM), also known as the Kohonen map [19]. SOM is an unsupervised method that projects data into lower dimension while preserving topological structures. In recent years, Autoencoder (AE)-based methods have predominantly ruled the DR space [20, 21, 22], where multiple variations of AE have been developed to address the problem of the 'Curse of Dimensionality' in several application domains including computer vision and computational biology. Most recently, a Siamese neural network architecture with triplet loss function, called IVIS [23], has been developed for data visualization of high-dimensional biological datasets. All these NN-based methods have shown high potential with respect to DR and visualization tasks. However, not all methods can simultaneously preserve local and global structures of the data.
Fascinated by the rich potential of NN models to capture data non-linearity, we have introduced, in this work, a novel unsupervised deep learning model, called NeuroDAVIS, which serves the purpose of data visualization while addressing the issue of both local and global structure preservation. There are several major contributions of this work. NeuroDAVIS is a general-purpose NN model that can be used for dimension reduction and visualization of high-dimensional datasets. It can extract a meaningful embedding from the data, which captures significant features that are indicative of the inherent data non-linearity. NeuroDAVIS-extracted features can be used for data reconstruction as well. Moreover, NeuroDAVIS is free from assumptions about the data distribution. The performance of NeuroDAVIS has been evaluated on a wide variety of 2D synthetic and real high-dimensional datasets including numeric, textual, image and biological data. Despite all the limitations of t-SNE and UMAP discussed above, they are still the current state of the art among dimension reduction and visualization methods. NeuroDAVIS has been competitive against both t-SNE and UMAP, and arguably preserves data shape, size, and local and global relationships better than both of them. It has been proved mathematically that the corresponding embeddings of local neighbours in high dimension remain local in low dimension too. NeuroDAVIS is applicable to all kinds (modalities) of datasets. Furthermore, it is able to produce impressive and interpretable visualizations independently, i.e., without any kind of preprocessing.
The remaining part of this article is organized into the following sections. Section 2 discusses the motivation behind NeuroDAVIS and develops its architecture, followed by the proof of correctness (Section 3) where mathematical justifications for some properties of NeuroDAVIS have been provided. Section 4 describes the experimental results, along with comparisons on 2D datasets (Section 4.1), their embedding, visualization and structure preservation (Section 4.1.1), global structure preservation (Section 4.1.2), inter-cluster distance preservation (Section 4.1.3), cluster-size preservation (Section 4.1.4), and finally results on high-dimensional datasets (Section 4.2) including numeric (Section 4.2.1), textual (Section 4.2.2), image (Section 4.2.3) and biological datasets (Section 4.2.4). Section 5 discusses the advantages and disadvantages of NeuroDAVIS, and concludes the article.
## 2 Methodology
This section develops the proposed unsupervised neural network model, called NeuroDAVIS, for visualization of high-dimensional datasets. NeuroDAVIS is a non-recurrent, feed-forward neural network, which is capable of extracting a low-dimensional embedding that can capture relevant features and provide efficient visualization of high-dimensional datasets. It can preserve both local and global relationships between clusters within the data. The motivation behind the proposed model is described below, followed by its architecture.
### Motivation
Preservation of local and global shape constitutes the two main components of a visualization challenge. Several approaches have been put forth so far, but none of them can suitably address both these issues. The majority of them try to preserve local shape rather than global shape, while some techniques accomplish the opposite. Thus, our key goal is to develop a method that visualizes data in a lower dimension while successfully preserving both local and global distances.
The fundamental mathematical concept behind NeuroDAVIS is inspired by the well-studied regression problem. In a regression problem, a set of independent variables (regressors) is used to predict one or more dependent variables. Unlike regression, a visualization problem involves only one dataset. In order to visualize the data in hand, a set of random regressors can be learnt in an unsupervised manner. This relaxation may make the regression task equivalent to a visualization task.
#### Visualization through regression
Let \(\mathbf{X}_{1}=\{\mathbf{x}_{1,i}:\mathbf{x}_{1,i}\in\mathbb{R}^{d_{1}}\}_{i=1}^{n}\) and \(\mathbf{X}_{2}=\{\mathbf{x}_{2,i}:\mathbf{x}_{2,i}\in\mathbb{R}^{d_{2}}\}_{i=1}^{n}\) be two datasets having \(n\) samples characterized by \(d_{1}\) and \(d_{2}\) features respectively. The task of regression is to realize a continuous function \(f\) from \(\mathbb{R}^{d_{1}}\) to \(\mathbb{R}^{d_{2}}\) such that the reconstruction loss (usually \(\sum_{i=1}^{n}\|f(\mathbf{x}_{1,i})-\mathbf{x}_{2,i}\|^{2}\)) gets minimized. For this purpose, one can use a multi-layer neural network model. To ensure that the input data fit into the model appropriately, the input layer should have \(d_{1}\) neurons. The input will then be processed through a number of nonlinear activation layers before being output at the output layer, which has \(d_{2}\) neurons. A reconstruction loss is then determined using the expected output, and the weights and biases are updated using conventional back-propagation learning. The crucial fact about this kind of learning is that it does not allow the regressors to be updated. As a result, the multi-layer neural network learns a sufficiently complex function which is able to produce close predictions.
In the proposed methodology, random 2D or 3D data are prepared initially to regress the original data. Now, there are multiple objectives with some relaxations. Throughout the learning process, not only must an appropriate continuous function that can effectively reproduce the data be learnt, but the regressors as well. The continuity of the learned function and its low complexity ensure that local neighbours in high dimension are preserved on projection into low dimension (see Theorem 1 in Section 3).

Figure 1: Block diagrams of (A) a multi-layer neural network and (B) NeuroDAVIS. The dashed part in (B) represents the modified architecture that enables the regressors to get updated.
As shown in the block diagram of NeuroDAVIS (Figure 1B), the dashed part is used to control the regressors. Input to NeuroDAVIS is an identity matrix of size \(n\times n\), where \(n\) is the number of samples. During the forward pass, each column of the identity matrix is fed to the input layer, thus ensuring that only certain weight values (initialized randomly) are fed as input to the latent layer. These random points generated using the first two layers are used as regressors. Similar to a multi-layer neural network, NeuroDAVIS also tries to reconstruct the data at the reconstruction layer. A reconstruction loss is then calculated and the corresponding weights and biases are updated using the standard back-propagation algorithm. Thus, in every epoch, as the weight values in the first layer get updated, the regressors get modified too, making their mutual distances follow those of the corresponding predictions. The detailed learning process has been explained in Section 2.4.
### Architecture
The architecture of NeuroDAVIS represents a novel neural network model as shown in Figure 2. The NeuroDAVIS network consists of different types of layers, viz., an _Input layer_, a _Latent layer_, one or more _Hidden layer(s)_ and a _Reconstruction layer_. Let \(\mathbf{X}=\{\mathbf{x}_{i}:\mathbf{x}_{i}\in\mathbb{R}^{d}\}_{i=1}^{n}\) be a dataset comprising \(n\) samples characterized by \(d\) features. Then, the _Input layer_ in NeuroDAVIS consists of \(n\) neurons. The number of neurons in the _Latent layer_ is \(k\), where \(k\) represents the number of dimensions to be used for visualization. In other words, \(k\) is the number of dimensions at which the low-dimensional embedding is to be extracted. Usually, in real-life applications, a 2-dimensional or 3-dimensional visualization is possible. The _Input layer_ helps to create a random low-dimensional embedding of \(n\) samples (observations) at the _Latent layer_, which can be considered as an initial representative of the \(n\) original observations. The low-dimensional embedding at the _Latent layer_ is projected onto the _Reconstruction layer_ through one or more _Hidden layer(s)_. The _Reconstruction layer_ tries to reconstruct the original \(d\)-dimensional space for the \(n\) samples from their random low-dimensional embedding. The number of neurons in the _Reconstruction layer_ is thus \(d\). In this work, we have used only two hidden layers. A _Hidden layer_ tries to capture the non-linearity in the data and pass on the knowledge to the next layer in sequence. Thus, one can use multiple _Hidden layers_ based on the complexity of the data. The number of _Hidden layers_, however, should not be too large, in order to avoid overfitting.
### Forward propagation
Figure 2: NeuroDAVIS Architecture.

As mentioned before, we have considered a set of \(n\) samples \(\mathbf{X}=\{\mathbf{x}_{i}:\mathbf{x}_{i}\in\mathbb{R}^{d}\}_{i=1}^{n}\). The input to NeuroDAVIS is an identity matrix \(\mathbf{I}\) of order \(n\times n\). For an \(i^{th}\) sample \(\mathbf{x}_{i}\), we consider the \(i^{th}\) column vector \(\mathbf{e}_{i}\) of \(\mathbf{I}\). That is, on presenting \(\mathbf{e}_{i}\) to the _Input layer_, an approximate version \(\tilde{\mathbf{x}}_{i}\) of \(\mathbf{x}_{i}\) is reconstructed at the _Reconstruction layer_. Let \(\mathbf{a}_{ji}\) and \(\mathbf{h}_{ji}\) correspond to the input to and output from a \(j^{th}\) layer on presentation of an \(i^{th}\) sample; \(\mathbf{W}_{j}\) be the weight matrix between \((j-1)^{th}\) layer and \(j^{th}\) layer (\(j=1,2,\cdots,(l+2)\)); and \(\mathbf{b}_{j}\) be the bias term for nodes in \(j^{th}\) layer. A \(j^{th}\) layer may be any of the _Input layer_ (\(j=0\)), _Latent layer_ (\(j=1\)), _Hidden layer(s)_ (\(j=2,3,\ldots,(l+1)\)) or _Reconstruction layer_ (\(j=(l+2)\)). Thus, for the _Input layer_,
\[\begin{cases}\mathbf{a}_{0i}=\mathbf{e}_{i},\\ \mathbf{h}_{0i}=\mathbf{e}_{i},\end{cases}\qquad\forall i=1,2,\cdots,n \tag{1}\]
For the _Latent layer_, we have
\[\begin{cases}\mathbf{a}_{1i}=\mathbf{W}_{1}\mathbf{e}_{i}+\mathbf{b}_{1},\\ \mathbf{h}_{1i}=\mathbf{a}_{1i},\end{cases}\qquad\forall i=1,2,\cdots,n \tag{2}\]
Here, \(\mathbf{e}_{i}\) controls the weight parameters for the low-dimensional embedding of \(i^{th}\) sample obtained at the _Latent layer_. It ensures that only the links connected to the \(i^{th}\) neuron of the _Input layer_ will activate neurons in _Latent layer_ on presentation of \(i^{th}\) sample. For \(l\)_Hidden layer(s)_, we have
\[\begin{cases}\mathbf{a}_{ji}=\mathbf{W}_{j}\mathbf{h}_{(j-1)i}+\mathbf{b}_{j},\\ \mathbf{h}_{ji}=ReLU(\mathbf{a}_{ji}),\end{cases}\qquad\forall j=2,3,\cdots,(l+1),\forall i=1,2,\cdots,n \tag{3}\]
where \(ReLU(\mathbf{y})=max(\mathbf{0},\mathbf{y})\); \(max\) (maximum) being considered element wise.
A reconstruction of the original data is performed at the final layer, called _Reconstruction layer_. For the _Reconstruction layer_, we have
\[\begin{cases}\mathbf{a}_{(l+2)i}=\mathbf{W}_{l+2}\mathbf{h}_{(l+1)i}+\mathbf{ b}_{l+2},\\ \mathbf{h}_{(l+2)i}=\mathbf{a}_{(l+2)i},\end{cases}\qquad\forall i=1,2,\cdots,n \tag{4}\]
Thus, NeuroDAVIS projects the latent embedding for a sample obtained at the _Latent layer_ into a \(d\) dimensional space through the _Hidden layer(s)_.
### Learning
As stated before, NeuroDAVIS has been used for dimensionality reduction and visualization of high-dimensional data. For each sample \(\mathbf{x}\), NeuroDAVIS tries to find an optimal reconstruction \(\tilde{\mathbf{x}}\) of the input data by minimizing the reconstruction error \(\|\mathbf{x}-\tilde{\mathbf{x}}\|\). \(L2\) regularization has been used on the nodes' activities and edge weights to avoid overfitting. Usage of \(L2\) regularization ensures minimization of model complexity. It also helps in dragging output and weight values towards zero. Regularization of the nodes' activities and weights has been controlled using regularization parameters \(\alpha\) and \(\beta\) respectively. The objective function thus becomes
\[\mathcal{L}_{NeuroDAVIS}=\frac{1}{n}\sum_{i=1}^{n}\|\mathbf{x}_{i}-\tilde{ \mathbf{x}}_{i}\|^{2}+\alpha\sum_{j=1}^{l+1}\sum_{i=1}^{n}\|\mathbf{h}_{ji}\| _{2}+\beta\sum_{j=1}^{l+1}\|\mathbf{W}_{j}\|_{F} \tag{5}\]
NeuroDAVIS has been trained using Adam optimizer [24]. Values of \(\alpha\) and \(\beta\) have been set experimentally for each dataset. Additionally, the number of epochs at which \(\mathcal{L}_{NeuroDAVIS}\) saturates, has been observed carefully to fix the optimal number of epochs needed for convergence.
At the onset of the forward pass, the weight values between the _Input layer_ and _Latent layer_ are initialized randomly. The learning of the NeuroDAVIS network is controlled by an identity matrix \(\mathbf{I}\) fed as input to the _Input layer_. The usage of the identity matrix ensures updation of the weight values solely associated with the samples in the present batch. Thus, initially, a random low-dimensional embedding of \(n\) samples (observations) is created at the _Latent layer_. This low-dimensional representation at the _Latent layer_ is then projected on to the _Reconstruction layer_ by NeuroDAVIS. At the _Reconstruction layer_, the error value is calculated using Equation 5. This error is then propagated backwards, and the weight and bias values are updated for a better reconstruction of the samples of the present batch, in the next forward pass. This process is continued till convergence. On completion of the training phase, the transformed feature set extracted from the _Latent layer_ serves as the low-dimensional embedding of the data, and used for visualization for \(k=2\) or \(3\).
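To make the construction concrete, the sketch below implements the model of Equations 1-5 in PyTorch under a few stated assumptions: feeding the one-hot columns \(\mathbf{e}_i\) through the _Input layer_ is replaced by the equivalent embedding-table lookup (with the bias \(\mathbf{b}_1\) absorbed into the table), only the latent activity is regularized for the \(\alpha\) term, and the \(\beta\) term is approximated by the optimizer's weight decay. Layer widths and hyperparameter values are illustrative, not the authors' settings.

```python
import torch
import torch.nn as nn

class NeuroDAVISSketch(nn.Module):
    def __init__(self, n, d, k=2, hidden=(128, 128)):
        super().__init__()
        # Presenting e_i to a dense Input->Latent layer selects the i-th
        # column of W1, so an embedding table is an equivalent parametrization.
        self.latent = nn.Embedding(n, k)
        layers, prev = [], k
        for h in hidden:
            layers += [nn.Linear(prev, h), nn.ReLU()]
            prev = h
        self.hidden = nn.Sequential(*layers)
        self.recon = nn.Linear(prev, d)   # linear Reconstruction layer (Eq. 4)

    def forward(self, idx):
        z = self.latent(idx)              # low-dimensional embedding (Eq. 2)
        return self.recon(self.hidden(z)), z

X = torch.randn(500, 30)                  # stand-in for a real dataset
model = NeuroDAVISSketch(n=500, d=30)
# weight_decay plays the role of the beta term (L2 on weights) in Eq. (5)
opt = torch.optim.Adam(model.parameters(), lr=1e-2, weight_decay=1e-4)
alpha = 1e-4
idx = torch.arange(500)
for epoch in range(200):
    opt.zero_grad()
    x_hat, z = model(idx)
    loss = ((X - x_hat) ** 2).sum(dim=1).mean() + alpha * z.norm(dim=1).mean()
    loss.backward()
    opt.step()
embedding = model.latent.weight.detach()  # (n, 2) coordinates to visualize
```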
## 3 Proof of correctness
In this section, we have established mathematically that neighbours in high dimension remain preserved after projection into low dimension.
**Lemma 1.** For any real matrix \(\mathbf{W}\) having Frobenius norm less than or equal to \(1\) and \(\eta\leq 1\), \(\|\mathbf{I}-\eta\mathbf{W}\mathbf{W}^{T}\|_{2}\leq 1\).
Proof.: It is well known that
\[\|\mathbf{I}-\eta\mathbf{W}\mathbf{W}^{T}\|_{2}^{2}=\lambda_{max}(\mathbf{I}- \eta\mathbf{W}\mathbf{W}^{T})^{2},\]
where \(\lambda_{max}(A)\) represents the largest eigen value of \(A\).
Now, \(\mathbf{W}\mathbf{W}^{T}\) is a positive semi-definite matrix. Hence, all eigen values of \(\mathbf{W}\mathbf{W}^{T}\) are non-negative. Also,
\[\lambda_{max}(\mathbf{W}\mathbf{W}^{T})\leq\|\mathbf{W}\|_{F}^{2}\leq 1\]

Therefore, all eigen values of \(\mathbf{W}\mathbf{W}^{T}\) lie in \([0,1]\). Since \(\eta\leq 1\), every eigen value of \(\mathbf{I}-\eta\mathbf{W}\mathbf{W}^{T}\) also lies in \([0,1]\), which implies \(\lambda_{max}(\mathbf{I}-\eta\mathbf{W}\mathbf{W}^{T})^{2}\leq 1\).
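As a quick numerical illustration of Lemma 1 (not part of the proof), one can draw a random matrix, rescale it so that its Frobenius norm is at most one, and verify the spectral-norm bound:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((2, 30))
W /= max(np.linalg.norm(W, "fro"), 1.0)   # enforce ||W||_F <= 1
eta = 0.5
M = np.eye(2) - eta * W @ W.T
assert np.linalg.norm(M, 2) <= 1 + 1e-12  # spectral norm of I - eta W W^T
```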
**Theorem 1.** Let \(\mathbf{x}_{i}\) and \(\mathbf{x}_{j}\) be two points in \(d\)-dimensional space \((\mathbf{x}_{i},\mathbf{x}_{j}\in\mathbb{R}^{d})\) such that they belong to a \(\delta\)-ball, i.e., \(\|\mathbf{x}_{i}-\mathbf{x}_{j}\|<\delta\), a predefined small positive number. Their corresponding low-dimensional embeddings, generated by NeuroDAVIS, will come closer in each iteration during training if the weight matrix of the final layer has a Frobenius norm less than \(1\).
Proof.: Let us consider a simple NeuroDAVIS model with no hidden layer and no regularization. Let \(\mathbf{y}_{i}\) and \(\mathbf{y}_{j}\) be the corresponding initial low-dimensional embeddings of \(\mathbf{x}_{i}\) and \(\mathbf{x}_{j}\). Then
\[\mathbf{y}_{i}=\mathbf{W}_{1}\mathbf{e}_{i}+\mathbf{b}_{1}=\mathbf{W}_{1,i}+ \mathbf{b}_{1},\]
\(\mathbf{e}_{i}\) being the \(i^{th}\) column of \(\mathbf{I}\). During each forward pass, NeuroDAVIS tries to reconstruct the original data point. Let \(\tilde{\mathbf{x}}_{i}\) be a reconstruction of \(\mathbf{x}_{i}\), i.e., \(\tilde{\mathbf{x}}_{i}=(\mathbf{W}_{1,i}+\mathbf{b}_{1})\mathbf{W}_{2}+ \mathbf{b}_{2}\).
Therefore, the loss function can be written as
\[\mathcal{L}_{NeuroDAVIS}=\frac{1}{2n}\sum_{i=1}^{n}\|\mathbf{x}_{i}-\tilde{ \mathbf{x}}_{i}\|^{2},\]
The corresponding weight and bias values are updated using \(\mathbf{W}^{(t+1)}=\mathbf{W}^{(t)}-\eta\nabla\mathbf{W}^{(t)}\).
Now,
\[\frac{\partial\mathcal{L}}{\partial\mathbf{W}_{1,i}}=(\mathbf{x}_{i}-\tilde{ \mathbf{x}}_{i})(-\frac{\partial\tilde{\mathbf{x}}_{i}}{\partial\mathbf{W}_{1,i}})=-(\mathbf{x}_{i}-\tilde{\mathbf{x}}_{i})\frac{\partial}{\partial\mathbf{ W}_{1,i}}(\mathbf{W}_{1,i}\mathbf{W}_{2})=-(\mathbf{x}_{i}-\tilde{\mathbf{x}}_{i}) \mathbf{W}_{2}^{T}\]
At \(t^{th}\) iteration, the weight updation occurs as \(\mathbf{W}_{1,i}^{(t+1)}=\mathbf{W}_{1,i}^{(t)}+\eta(\mathbf{x}_{i}-\tilde{ \mathbf{x}}_{i})\mathbf{W}_{2}^{(t)T}\)
Thus,

\[\begin{split}\mathbf{y}_{i}^{(t+1)}-\mathbf{y}_{j}^{(t+1)}&=\mathbf{W}_{1,i}^{(t+1)}-\mathbf{W}_{1,j}^{(t+1)}\\ &=\big[\mathbf{W}_{1,i}^{(t)}+\eta(\mathbf{x}_{i}-\tilde{\mathbf{x}}_{i})\mathbf{W}_{2}^{(t)T}\big]-\big[\mathbf{W}_{1,j}^{(t)}+\eta(\mathbf{x}_{j}-\tilde{\mathbf{x}}_{j})\mathbf{W}_{2}^{(t)T}\big]\\ &=\big[\mathbf{W}_{1,i}^{(t)}-\mathbf{W}_{1,j}^{(t)}\big]+\eta\big[(\mathbf{x}_{i}-\mathbf{x}_{j})-(\tilde{\mathbf{x}}_{i}-\tilde{\mathbf{x}}_{j})\big]\mathbf{W}_{2}^{(t)T}\\ &\approx\big[\mathbf{W}_{1,i}^{(t)}-\mathbf{W}_{1,j}^{(t)}\big]-\eta(\tilde{\mathbf{x}}_{i}-\tilde{\mathbf{x}}_{j})\mathbf{W}_{2}^{(t)T}\qquad(\text{since }\|\mathbf{x}_{i}-\mathbf{x}_{j}\|<\delta)\\ &\approx\big[\mathbf{W}_{1,i}^{(t)}-\mathbf{W}_{1,j}^{(t)}\big]-\eta\big[\mathbf{W}_{1,i}^{(t)}-\mathbf{W}_{1,j}^{(t)}\big]\mathbf{W}_{2}^{(t)}\mathbf{W}_{2}^{(t)T}\\ &=\big[\mathbf{W}_{1,i}^{(t)}-\mathbf{W}_{1,j}^{(t)}\big]\big(\mathbf{I}-\eta\mathbf{W}_{2}^{(t)}\mathbf{W}_{2}^{(t)T}\big)\end{split}\]

Taking norms on both sides,

\[\|\mathbf{y}_{i}^{(t+1)}-\mathbf{y}_{j}^{(t+1)}\|\approx\big\|\big[\mathbf{W}_{1,i}^{(t)}-\mathbf{W}_{1,j}^{(t)}\big]\big(\mathbf{I}-\eta\mathbf{W}_{2}^{(t)}\mathbf{W}_{2}^{(t)T}\big)\big\|\leq\big\|\mathbf{W}_{1,i}^{(t)}-\mathbf{W}_{1,j}^{(t)}\big\|\,\big\|\mathbf{I}-\eta\mathbf{W}_{2}^{(t)}\mathbf{W}_{2}^{(t)T}\big\|_{2}\leq\big\|\mathbf{W}_{1,i}^{(t)}-\mathbf{W}_{1,j}^{(t)}\big\|\qquad(\text{by Lemma 1})\]

Thus,

\[\|\mathbf{y}_{i}^{(t+1)}-\mathbf{y}_{j}^{(t+1)}\|\leq\|\mathbf{y}_{i}^{(t)}-\mathbf{y}_{j}^{(t)}\|\]
## 4 Results
The performance of NeuroDAVIS has been evaluated on a wide variety of datasets including 2D synthetic datasets and high-dimensional (HD) datasets of different modalities, like numeric, text, image and biological data (single-cell RNA-sequencing (scRNA-seq) data). The datasets used for evaluation have been described in Table 1. This section explains the experiments carried out in this work to evaluate the performance of NeuroDAVIS, and analyze the results. Here we have focussed on the capability of NeuroDAVIS in low dimensional embedding, visualization, structure preservation, inter-cluster distance preservation and cluster size preservation.
### Performance evaluation on synthetic 2D datasets
In order to demonstrate the effectiveness of NeuroDAVIS, we have initially applied NeuroDAVIS on four synthetic 2D datasets, viz., Elliptic Ring, Olympic, Spiral and Shape. The results have been compared with those obtained using t-SNE and UMAP.
#### 4.1.1 Embedding, visualization and structure preservation
Elliptic Ring dataset (Figure 3(a)) consists of two small Gaussian balls within an outer Gaussian elliptic ring; Olympic dataset (Figure 3(e)) contains five circular rings representing the olympic logo; Spiral dataset (Figure 3(i)) contains three concentric spirals, while Shape dataset (Figure 3(m)) consists of points representing the characters 'S', 'H', 'A', 'P' and 'E'.

We have observed that the NeuroDAVIS-generated embedding of Elliptic Ring dataset (Figure 3(b)) contains two Gaussian balls inside the outer Gaussian ring, similar to the original data. Likewise, the distances between the rings in Olympic dataset and those between the concentric spirals in Spiral dataset, as well as their shapes, have been preserved in their NeuroDAVIS-generated embeddings (Figures 3(f) and 3(j)). Furthermore, for Shape dataset, the readability of the characters 'S', 'H', 'A', 'P' and 'E' has not been compromised by NeuroDAVIS (Figure 3(n)), whereas it is totally lost in the t-SNE and UMAP embeddings. Thus, we can say that NeuroDAVIS has been able to represent the clusters within the data similar to their original distributions, preserving both shape and size. Both t-SNE and UMAP, however, have failed to preserve the original distributions, and produced compact clusters instead, disrupting both local and global structures within the data.
We have further compared the pairwise distances in the original distribution to that in the embedding produced by t-SNE, UMAP and NeuroDAVIS, using a Spearman rank correlation. Figure 4 shows the results for ten different executions of the same. We have observed that the correlation coefficient values obtained by NeuroDAVIS have been better than that produced by t-SNE and UMAP for all the datasets.
We have then performed another experiment on these synthetic datasets, projecting the two-dimensional space into a nine-dimensional space using the transformation \((x+y,\,x-y,\,xy,\,x^{2},\,y^{2},\,x^{2}y,\,xy^{2},\,x^{3},\,y^{3})\), as performed in [10], and applying NeuroDAVIS to project the data back into a two-dimensional space. Results have been compared to those obtained by t-SNE and UMAP for the same transformation. Figure S1 (in Supplementary Material) shows that NeuroDAVIS has once again produced better embeddings, preserving both cluster shapes and sizes, as compared to t-SNE and UMAP.
\begin{table}
\begin{tabular}{||c|c|c|c|c|c|c||} \hline
**Category** & **Type** & **Name** & **\#Samples** & **\#Features** & **\#Classes** & **Source** \\ \hline
2D & Numeric & Elliptic Ring & 1100 & 2 & 3 & Synthetic \\
 & & Olympic & 2500 & 2 & 5 & Synthetic \\
 & & Spiral & 312 & 2 & 3 & Synthetic \\
 & & Shape & 2000 & 2 & 5 & Synthetic \\
 & & World Map & 2843 & 2 & 5 & Synthetic \\ \hline
HD & Numeric & Breast cancer & 569 & 30 & 2 & [25] \\
 & & Wine & 178 & 13 & 3 & [26] \\
 & Text & Spam & 5572 & 513 & 2 & [27] \\
 & Image & Coil20 & 1440 & 16385 & 20 & [28] \\
 & & Fashion MNIST & 60000 & 784 & 10 & [29] \\
 & Biological & Usoskin & 622 & 25334 & 13 & [30] \\
 & & Jurkat & 3388 & 32738 & 11 & [31] \\ \hline
\end{tabular}
\end{table}
Table 1: Description of datasets used for evaluation of NeuroDAVIS
Figure 3: The original distributions and the embeddings produced by NeuroDAVIS, t-SNE and UMAP for Elliptic Ring ((a)-(d)), Olympic ((e)-(h)), Spiral ((i)-(l)) and Shape ((m)-(p)) datasets.
#### 4.1.2 Global structure preservation
In order to assess the performance of a dimension reduction/visualization method, we need to analyze the reduced dimensional embedding in terms of its capability to represent clusters in a way similar to their distribution in the high dimension. For this purpose, it is essential to study the inter-cluster separations within the data. Hence, we have performed a few more experiments to ascertain how well NeuroDAVIS preserves the inter-cluster distances in the low dimensional space. Information on the ground truth is important for this kind of analyses. For this reason, we have created a synthetic 2D dataset representing the world map (a known structure) with five clusters / continents, viz., Eurasia, Australia, North America, South America and Africa (no Antarctica), following the tutorial ([https://towardsdatascience.com/tsne-vs-unmap-global-structure-4d8045acba17](https://towardsdatascience.com/tsne-vs-unmap-global-structure-4d8045acba17)) (Figure 5a). The NeuroDAVIS-generated embedding for World Map dataset has been compared with its original data distribution, and the corresponding embeddings generated by t-SNE and UMAP (Figure 5). As evident from Figure 5, unlike t-SNE and UMAP, the shapes and sizes of the clusters in the NeuroDAVIS-generated embedding are quite similar to that in the original data.
#### 4.1.3 Inter-cluster distance preservation
In order to check if the inter-cluster distances between the five continents have been preserved, we have calculated the Spearman rank correlation coefficients between the distances among the five cluster centroids obtained by NeuroDAVIS, and compared the result with those obtained using t-SNE and UMAP. As shown in Figure 6, repeated runs of the same experiment have confirmed that NeuroDAVIS has performed much better than t-SNE and UMAP, with respect to distance preservation.
We have further evaluated the NeuroDAVIS-embedding for World Map dataset for its ability to preserve pairwise euclidean distances between points in one cluster and points in the remaining clusters. Figure 7 containing results from repeated runs of this experiment, shows that for each of the clusters Eurasia, Australia, North America, South America and Africa, NeuroDAVIS has been able to preserve the original pairwise distances better than t-SNE and UMAP.
Figure 4: Spearman rank correlation between pairwise distances in the original distribution, and pairwise distances in t-SNE, UMAP and NeuroDAVIS-produced embeddings of the 2D synthetic datasets Elliptic Ring, Olympic, Spiral and Shape. For Elliptic Ring dataset, median correlation coefficient values obtained are 0.98 (NeuroDAVIS), 0.27 (t-SNE) and 0.32 (UMAP). For Olympic dataset, median correlation coefficient values obtained are 0.96 (NeuroDAVIS), 0.61 (t-SNE) and 0.73 (UMAP). For Spiral dataset, median correlation coefficient values obtained are 0.98 (NeuroDAVIS), 0.96 (t-SNE) and \(-\)0.09 (UMAP). For Shape dataset, median correlation coefficient values obtained are 0.97 (NeuroDAVIS), 0.57 (t-SNE) and 0.40 (UMAP). The corresponding p-values obtained using Mann-Whitney U test are 0.0001 (both NeuroDAVIS-t-SNE and NeuroDAVIS-UMAP) for Elliptic Ring, Olympic and Shape datasets. For Spiral dataset, p-values obtained are 0.009 (NeuroDAVIS-t-SNE) and 0.001 (NeuroDAVIS-UMAP).
Figure 5: The original distribution and the embeddings produced by NeuroDAVIS, t-SNE and UMAP for World Map dataset.
Figure 6: Spearman rank correlation between original distances among cluster centroids in the original dataset and distances between cluster centroids in the NeuroDAVIS, t-SNE and UMAP-generated embedding. The median correlation coefficient values are 0.96 (NeuroDAVIS), 0.59 (t-SNE) and 0.55 (UMAP) for World Map dataset. The corresponding p-value obtained using Mann-Whitney U test is 0.0001 (both NeuroDAVIS-t-SNE and NeuroDAVIS-UMAP)).
Figure 7: Preservation of pairwise distances between clusters (continents) in World Map dataset by NeuroDAVIS, t-SNE and UMAP.
#### 4.1.4 Cluster size preservation
We wondered whether the sizes of the clusters in the original data have been preserved in its NeuroDAVIS-generated embedding. Therefore, in order to ensure preservation of cluster sizes, we have carried out a few additional experiments on this World Map dataset. For each of the clusters in the dataset, we have estimated the area of the minimal rectangles bordering the clusters. This has been realized by considering the amount of spread of each cluster in either directions, as shown in Figure 5a. We have then computed the Pearson correlation coefficients between the original area of the clusters to their reconstructed counterparts. Repeating this experiment multiple times, we have arrived at the conclusion that NeuroDAVIS has been able to preserve original sizes of the clusters better than t-SNE and UMAP, as shown in Figure S2 (in Supplementary Material). All these experiments discussed above establish the superiority of NeuroDAVIS over t-SNE and UMAP in terms of global shape preservation.
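This size check can be sketched as follows, assuming `X_orig` and `X_emb` hold the original and embedded 2D coordinates and `labels` marks the five continents; the toy data generated below are placeholders, not the World Map dataset.

```python
import numpy as np
from scipy.stats import pearsonr

def bbox_areas(X2d, labels):
    """Area of the minimal axis-aligned rectangle bordering each cluster."""
    areas = []
    for c in np.unique(labels):
        P = X2d[labels == c]
        spread = P.max(axis=0) - P.min(axis=0)   # spread in either direction
        areas.append(spread[0] * spread[1])
    return np.array(areas)

# toy stand-ins for the original map and an embedding of it
rng = np.random.default_rng(0)
labels = np.repeat(np.arange(5), 100)            # five "continents"
X_orig = rng.normal(size=(500, 2)) * (labels[:, None] + 1)
X_emb = X_orig * 0.9 + rng.normal(scale=0.05, size=(500, 2))
r, _ = pearsonr(bbox_areas(X_orig, labels), bbox_areas(X_emb, labels))
```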
### Performance evaluation on high-dimensional datasets
Subsequently the effectiveness of NeuroDAVIS has been demonstrated on several categories of high-dimensional datasets including numeric, textual, image and biological data. The general steps used for evaluation are as follows.
We have first reduced each dataset to two NeuroDAVIS dimensions. For each dataset, the quality of the NeuroDAVIS embedding has then been compared with that produced by each of t-SNE, UMAP and Fit-SNE, in two different aspects. First, we have calculated the Spearman correlation coefficient between the pairwise distances of the data points in original space and that in the latent space, produced by NeuroDAVIS, t-SNE, UMAP and Fit-SNE. Finally, for all the high dimensional datasets excluding the biological one, the embedding produced by NeuroDAVIS has been assessed for its classification performance. A train:test split in \(80:20\) ratio has been used to generate training and test datasets from the NeuroDAVIS-generated embedding. Two classifiers, viz., k-nn and random forest (RF), have been trained and the held out test dataset has been used to evaluate the performance of the classifiers. The results have been compared with those obtained using the same classifiers on t-SNE, UMAP and Fit-SNE-embeddings. For the scRNA-seq (biological) dataset, we have performed cell-type clustering using k-means and hierarchical agglomerative clustering on the NeuroDAVIS-embedding, and compared the results with those obtained on t-SNE, UMAP and Fit-SNE embeddings. We have further compared the results obtained with two recent benchmarked methods, viz., PHATE [15] and IVIS [23], specially developed for scRNA-seq datasets. In order to measure the performance of NeuroDAVIS and the other methods, we have used some measures used in [32]. Classification performance for the non-biological datasets has been measured using Accuracy and F1-scores, while two external indices Adjusted Rand Index (ARI) and Fowlkes Mallows Index (FMI) have been used to measure the quality of clusters produced in the case of scRNA-seq datasets.
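The two embedding-quality checks above can be sketched compactly with scipy and scikit-learn; `X_high` stands for the original data, `X_low` for a 2D embedding, and `y` for class labels (all placeholder names, with random stand-in data in the usage example).

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score, f1_score

def distance_correlation(X_high, X_low):
    """Spearman correlation between pairwise distances in the two spaces."""
    rho, _ = spearmanr(pdist(X_high), pdist(X_low))
    return rho

def classification_scores(X_low, y):
    """k-nn accuracy and F1 on an 80:20 train/test split of the embedding."""
    Xtr, Xte, ytr, yte = train_test_split(X_low, y, test_size=0.2,
                                          random_state=0)
    pred = KNeighborsClassifier().fit(Xtr, ytr).predict(Xte)
    return accuracy_score(yte, pred), f1_score(yte, pred, average="weighted")

# toy usage with random stand-ins
X_high = np.random.randn(200, 30)
X_low = X_high[:, :2]
y = (X_high[:, 0] > 0).astype(int)
print(distance_correlation(X_high, X_low), classification_scores(X_low, y))
```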
#### 4.2.1 Numeric datasets
Two numeric datasets, Breast cancer and Wine, originally available at [https://archive.ics.uci.edu/ml/datasets.php](https://archive.ics.uci.edu/ml/datasets.php), have been downloaded from [33] in order to evaluate the performance of NeuroDAVIS. The NeuroDAVIS-generated embedding, as compared to those obtained using t-SNE, UMAP and Fit-SNE, for Breast cancer dataset is shown in Figures 8a, 8b, 8c and 8d, while Figures 8e, 8f, 8g and 8h show similar embeddings for Wine dataset. The correlation coefficient values, as shown in Figure 9, reveal that pairwise distances in high dimensions are more correlated with those in NeuroDAVIS embedding (median correlation coefficient = 0.94 (Breast cancer), 0.92 (Wine)) than those in t-SNE (median correlation coefficient = 0.76 (Breast cancer), 0.91 (Wine)), UMAP (median correlation coefficient = 0.77 (Breast cancer), 0.82 (Wine)) and Fit-SNE embedding (median correlation coefficient = 0.77 (Breast cancer), 0.92 (Wine)). The corresponding p-values obtained using Mann-Whitney U test are 0.0001 (for all pairs, viz., NeuroDAVIS-t-SNE, NeuroDAVIS-UMAP and NeuroDAVIS-Fit-SNE) for Breast cancer dataset, and 0.0001 (NeuroDAVIS-UMAP), 0.733 (NeuroDAVIS-t-SNE) and 1.000 (NeuroDAVIS-Fit-SNE) for Wine dataset.
The results for classification (accuracy and F1-score) using k-nn and RF on the NeuroDAVIS embedding, as shown in Figure S3 (in Supplementary Material), also reflect superior performance of NeuroDAVIS over t-SNE, UMAP and Fit-SNE for Wine dataset, while for Breast cancer dataset, the NeuroDAVIS embedding has displayed classification performance comparable to the t-SNE, UMAP and Fit-SNE embeddings. Results have been recorded by repeating each experiment multiple times.
Figure 8: Embeddings produced by NeuroDAVIS, t-SNE, UMAP and Fit-SNE for Breast cancer ((a)-(d)), Wine ((e)-(h)), Spam ((i)-(l)) and Coil20 ((m)-(p)) datasets
#### 4.2.2 Textual datasets

Spam dataset [27] contains 5572 text messages, a subset of which are classified as spam messages. This dataset has undergone preprocessing using standard pipelines used for textual datasets, which include the following major steps: 1. Removal of URLs, 2. Conversion into lower case, 3. Removal of punctuations, 4. Removal of extra whitespaces, 5. Removal of stopwords, 6. Lemmatization, 7. Tokenization, and 8. min-max scaling.
The two-dimensional embedding obtained by NeuroDAVIS on the preprocessed Spam dataset has been shown in Figure 8i. The corresponding t-SNE, UMAP and Fit-SNE embeddings on the same dataset have been shown in Figures 8j, 8k and 8l respectively. Although the embeddings look quite similar to each other, on close observation it can be seen that, quite contrary to NeuroDAVIS and UMAP, the ham (not spam) cluster in both the t-SNE and Fit-SNE embeddings portrays multiple sub-clusters, which is unrealistic. Figure 9 also reveals that the correlation coefficient value for the NeuroDAVIS embedding (median correlation coefficient = 0.77) is far better than those obtained for the t-SNE (median correlation coefficient = 0.59), UMAP (median correlation coefficient = 0.57) and Fit-SNE (median correlation coefficient = 0.61) embeddings. The corresponding p-value obtained using Mann-Whitney U test is 0.0001 (for NeuroDAVIS-t-SNE, NeuroDAVIS-UMAP and NeuroDAVIS-Fit-SNE). The classification performance on NeuroDAVIS embedding of Spam dataset is, however, not as good as that on the t-SNE, UMAP or Fit-SNE embeddings, as shown in Figure S3 (in Supplementary Material).
#### 4.2.3 Image datasets
Subsequently, we have evaluated NeuroDAVIS on an image dataset, Coil20. Coil20 dataset [28] contains images of 20 different objects with backgrounds discarded. Figures 8m, 8n, 8o and 8p show the NeuroDAVIS, t-SNE, UMAP and Fit-SNE embeddings obtained on this dataset respectively. We have observed that, similar to the numeric and textual datasets, NeuroDAVIS has been able to preserve the distances between objects in high dimension better than t-SNE and UMAP. The median correlation coefficient value reported by NeuroDAVIS has been 0.61, while those for the t-SNE, UMAP and Fit-SNE embeddings have been 0.53, 0.15 and 0.56 respectively. Figure 9 shows the distribution of correlation coefficient values obtained from multiple executions of this experiment. The corresponding p-values obtained using Mann-Whitney U test are 0.0007 (NeuroDAVIS-t-SNE), 0.0001 (NeuroDAVIS-UMAP) and 0.002 (NeuroDAVIS-Fit-SNE).

Figure 9: Spearman rank correlation coefficient between pairwise distances in original distribution, and pairwise distances in NeuroDAVIS, t-SNE, UMAP and Fit-SNE embeddings of Breast Cancer, Wine, Spam and Coil20 datasets. For Breast Cancer dataset, median correlation coefficient values obtained are 0.94 (NeuroDAVIS), 0.76 (t-SNE), 0.77 (UMAP) and 0.77 (Fit-SNE). For Wine dataset, median correlation coefficient values obtained are 0.92 (NeuroDAVIS), 0.91 (t-SNE), 0.82 (UMAP) and 0.92 (Fit-SNE). For Spam dataset, median correlation coefficient values obtained are 0.77 (NeuroDAVIS), 0.59 (t-SNE), 0.57 (UMAP) and 0.61 (Fit-SNE), while for Coil20 dataset, these values are 0.61 (NeuroDAVIS), 0.53 (t-SNE), 0.15 (UMAP) and 0.56 (Fit-SNE). The corresponding p-values obtained using Mann-Whitney U test are 0.0001 (both NeuroDAVIS-t-SNE and NeuroDAVIS-UMAP) for both Breast Cancer and Spam datasets. For Wine dataset, p-values obtained using Mann-Whitney U test are 0.73 (NeuroDAVIS-t-SNE), 0.002 (NeuroDAVIS-UMAP) and 1.00 (NeuroDAVIS-Fit-SNE), while for Coil20 dataset, p-values obtained using Mann-Whitney U test are 0.0007 (NeuroDAVIS-t-SNE), 0.0001 (NeuroDAVIS-UMAP) and 0.002 (NeuroDAVIS-Fit-SNE).

Figure 10: (a) NeuroDAVIS, (c) t-SNE, (e) UMAP and (g) Fit-SNE embeddings obtained on Fashion MNIST dataset; (b) NeuroDAVIS, (d) t-SNE, (f) UMAP and (h) Fit-SNE embeddings obtained on the first 50 PCA components of the original Fashion MNIST dataset.
The classification accuracy and F1-scores achieved by k-nn and RF classifiers on the NeuroDAVIS embedding for Coil20 image dataset have been better than that achieved by UMAP. However, t-SNE and Fit-SNE have produced better classification results on this dataset, as depicted in Figure S3 (in Supplementary Material).
In order to demonstrate the effectiveness of NeuroDAVIS on large datasets, we have additionally evaluated NeuroDAVIS on another image dataset, called Fashion MNIST [29], containing 60k training images of clothings of \(28\times 28\) pixels each. Figures 10a, 10c, 10e and 10g show NeuroDAVIS, t-SNE, UMAP and Fit-SNE embeddings of Fashion MNIST dataset respectively. To obtain high-quality embeddings from high-dimensional datasets, researchers often use PCA as a preprocessing step [23]. For this reason, we have performed a further investigation applying NeuroDAVIS, t-SNE, UMAP and Fit-SNE on the first 50 principal components obtained from Fashion MNIST data, and comparing the results with those produced by the sole usage of NeuroDAVIS, t-SNE, UMAP and Fit-SNE. Figures 10b, 10d, 10f and 10h show the effect of using PCA as a preprocessing step before applying NeuroDAVIS, t-SNE, UMAP or Fit-SNE.
It is often difficult to assess the quality of clusters visually. Hence, in order to quantify the quality of clusters, we have obtained the Spearman rank correlation coefficients between the pairwise distances of the cluster centroids in high dimension and the pairwise distances of the cluster centroids in the low-dimensional embedding. A high correlation coefficient signifies better preservation of inter-cluster distances. Here, we have observed that NeuroDAVIS has produced a higher correlation coefficient of 0.93, compared to that produced by t-SNE (correlation coefficient = 0.70), UMAP (correlation coefficient = 0.91) and Fit-SNE (correlation coefficient = 0.89), as shown in Figure 11. Interestingly, applying PCA for preprocessing has produced a lower correlation coefficient for NeuroDAVIS (correlation coefficient = 0.86). It has improved the result for t-SNE (correlation coefficient = 0.72) and UMAP (correlation coefficient = 0.93), while Fit-SNE has not been affected by the usage of PCA as a preprocessing step (correlation coefficient = 0.89), as shown in Figure 11. These results have led us to infer that t-SNE and UMAP perform better when PCA is used as a preprocessing tool. NeuroDAVIS, on the other hand, is able to produce high-quality embeddings independently; PCA may have hampered the embedding quality due to loss of information during PCA-based preprocessing prior to applying NeuroDAVIS.
#### 4.2.4 Biological datasets
Finally, we have explored the effectiveness of NeuroDAVIS on some biological datasets. We have used two scRNA-seq datasets for this purpose. Usoskin dataset [30] contains gene expression values for 622 cells across 25334 genes, while Jurkat dataset [31] contains gene expression values for 3388 cells across 32738 genes. Both these datasets have been preprocessed using Scanpy [34] following standard procedures as recommended in the tutorial
Figure 11: Spearman rank correlation coefficients between pairwise distances of the cluster centroids in high-dimension and the pairwise distances of the cluster centroids in the low-dimensional embedding of Fashion MNIST dataset produced by NeuroDAVIS, t-SNE, UMAP and Fit-SNE with and without PCA-based preprocessing.
[https://scanpy-tutorials.readthedocs.io/en/latest/pbmc3k.html](https://scanpy-tutorials.readthedocs.io/en/latest/pbmc3k.html). Apart from t-SNE, UMAP and Fit-SNE, we have considered PHATE and IVIS, which are among the current state-of-the-art algorithms for scRNA-seq data besides t-SNE and UMAP, for performance comparison on these datasets. It may be mentioned here that in both these datasets, the number of samples is much lower than the number of dimensions, which poses an additional challenge in learning.
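For reference, one plausible version of the standard Scanpy preprocessing mentioned above, following the linked pbmc3k tutorial, is sketched below; the exact parameter values are assumptions, and `adata` denotes the raw AnnData object of either dataset.

```
# Sketch of standard Scanpy preprocessing for scRNA-seq data (parameter
# values are assumed; adata is the raw AnnData object).
import scanpy as sc

sc.pp.filter_cells(adata, min_genes=200)      # drop low-quality cells
sc.pp.filter_genes(adata, min_cells=3)        # drop rarely expressed genes
sc.pp.normalize_total(adata, target_sum=1e4)  # library-size normalization
sc.pp.log1p(adata)                            # log-transform
sc.pp.highly_variable_genes(adata, n_top_genes=2000)
adata = adata[:, adata.var.highly_variable]   # keep informative genes
```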
The two-dimensional embeddings obtained by NeuroDAVIS, t-SNE, UMAP, Fit-SNE, PHATE and IVIS for Usoskin and Jurkat datasets, as shown in Figures 12 and 13 respectively, have revealed that both t-SNE and UMAP have been unable to represent the clusters in the data clearly. For Usoskin dataset, NeuroDAVIS has been able to represent some of the clusters well, while for Jurkat dataset, clusters in the NeuroDAVIS projection have been much denser and better separated, as compared to the other methods. Overall, it can be said that NeuroDAVIS projections are better than t-SNE, UMAP and Fit-SNE projections, and are comparable only to the PHATE and IVIS projections.
The correlation coefficient values measured between the pairwise distances in the original data and NeuroDAVIS-, t-SNE-, UMAP-, Fit-SNE-, PHATE- and IVIS-generated embeddings have been reported in Figure 14. We have once again observed that NeuroDAVIS has shown better correlation (median correlation coefficient = 0.41) than all other methods for Jurkat dataset, while for Usoskin dataset, NeuroDAVIS has reported a median correlation coefficient of 0.23, better than all other methods except t-SNE (median correlation coefficient = 0.27) and Fit-SNE (median correlation coefficient = 0.29). The corresponding p-values obtained using Mann-Whitney U test are 0.0001 (NeuroDAVIS-t-SNE, NeuroDAVIS-UMAP, NeuroDAVIS-Fit-SNE, NeuroDAVIS-PHATE and NeuroDAVIS-IVIS) for Jurkat dataset, while for Usoskin dataset, p-values obtained using Mann-Whitney U test are 0.677 (NeuroDAVIS-t-SNE), 0.472 (NeuroDAVIS-Fit-SNE) and 0.0001 (NeuroDAVIS-UMAP, NeuroDAVIS-PHATE and NeuroDAVIS-IVIS).
As demonstrated in Figure S4 (in Supplementary Material), we have further observed that the clustering performance of k-means and agglomerative hierarchical clustering methods on the NeuroDAVIS embedding of Usoskin data has been better than that on the t-SNE and IVIS embeddings, but poorer than that on the PHATE and Fit-SNE embeddings, while being closely
Figure 12: 2-dimensional embeddings of Usoskin data generated by NeuroDAVIS, t-SNE, UMAP, Fit-SNE, PHATE and IVIS respectively.
comparable to that of the UMAP embedding, in terms of ARI and FMI scores. For Jurkat dataset, the NeuroDAVIS embedding has, however, resulted in the best clustering performance in terms of both ARI and FMI scores, which can only be challenged by the IVIS embedding.
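The clustering evaluation summarized above can be reproduced with a short scikit-learn sketch such as the following, where `Z` is a two-dimensional embedding and `y` the ground-truth labels (the function name is ours).

```
# ARI and FMI scores of k-means and agglomerative clustering on an embedding.
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.metrics import adjusted_rand_score, fowlkes_mallows_score

def evaluate_embedding(Z, y, n_clusters):
    scores = {}
    for name, algo in [("kmeans", KMeans(n_clusters=n_clusters, n_init=10)),
                       ("hierarchical", AgglomerativeClustering(n_clusters=n_clusters))]:
        pred = algo.fit_predict(Z)
        scores[name] = {"ARI": adjusted_rand_score(y, pred),
                        "FMI": fowlkes_mallows_score(y, pred)}
    return scores
```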
## 5 Discussion and Conclusion
In this work, we have developed a novel unsupervised deep learning model, called NeuroDAVIS, for data visualization. NeuroDAVIS can produce a low-dimensional embedding from the data, extracting relevant features useful for subsequent analysis pipelines. The effectiveness of NeuroDAVIS has been demonstrated on a wide variety of datasets including 2D synthetic data, and high dimensional numeric, textual, image and biological data. It has shown excellent visualization capability supported by outstanding downstream analysis (clustering/classification), thus making it competitive against state-of-the-art methods, like t-SNE, UMAP and Fit-SNE. Results obtained are also statistically significant in favour of NeuroDAVIS. The strength of NeuroDAVIS lies in the fact that it is able to preserve cluster shape, size and structure within the data well. As demonstrated mathematically, the low-dimensional embedding produced by NeuroDAVIS has also been observed to preserve local relationships among observations in high dimension. Furthermore, it is capable of extracting the inherent non-linearity within the data. In addition, NeuroDAVIS does not presume data distributions, which makes the method more practical. The model can be termed as a general purpose dimension reduction and visualization system having no restrictions on embedding dimension. It is capable of producing interpretable visualizations independently without any preprocessing.
NeuroDAVIS, being a dimension reduction and visualization method, does not come without its weaknesses. It takes as input an identity matrix of order equal to the number of observations in the input data. Thus, it consumes more memory than other existing methods. Moreover, the hyper-parameters in NeuroDAVIS are sensitive to small modifications, and network tuning requires a substantial amount of time and effort from a non-expert user.
Figure 13: 2-dimensional embeddings of Jurkat data generated by NeuroDAVIS, t-SNE, UMAP, Fit-SNE, PHATE and IVIS respectively.
Nevertheless, NeuroDAVIS serves as a competing method for visualization of high-dimensional datasets, producing a robust latent embedding better than some of the existing benchmarked methods, like t-SNE, UMAP and Fit-SNE, while also resolving the local-global shape preservation challenge. Thus, in this work, we have introduced a single solution to the long-standing problem of visualization of datasets belonging to different categories/domains/modalities. Adding a feature selection module to the existing architecture could be a future extension to NeuroDAVIS. It might also be extended to visualize high-dimensional multi-modal datasets, by producing low-dimensional embeddings that can capture significant features from the data.
## Data and Code availability
NeuroDAVIS has been implemented in Python 3. The codes to reproduce the results are available at [https://github.com/shallowlearner93/NeuroDAVIS](https://github.com/shallowlearner93/NeuroDAVIS). The preprocessed datasets used in this work can be downloaded from [https://doi.org/10.5281/zenodo.7315674](https://doi.org/10.5281/zenodo.7315674).
## Supplementary information
The supplementary figures have been incorporated in the Supplementary file.
## Authors' contributions
Conceptualization of Methodology: CM, DBS, RKD. Data Curation, Data analysis, Formal analysis, Visualization, Investigation, Implementation, Validation, Original draft preparation: CM, DBS. Validation, Reviewing, Editing, Overall Supervision: DBS, RKD.
Figure 14: Spearman rank correlation between pairwise distances in original distribution and pairwise distances in NeuroDAVIS, t-SNE, UMAP, Fit-SNE, PHATE and IVIS-produced embeddings of Usoskin and Jurkat datasets. For Usoskin dataset, median correlation coefficient values obtained are 0.24 (NeuroDAVIS), 0.27 (t-SNE), 0.02 (UMAP), 0.29 (Fit-SNE), \(-0.12\) (PHATE) and 0.002 (IVIS). For Jurkat dataset, median correlation coefficient values obtained are 0.42 (NeuroDAVIS), 0.29 (t-SNE), 0.26 (UMAP), 0.29 (Fit-SNE), 0.23 (PHATE) and 0.25 (IVIS). The corresponding p-values obtained using Mann-Whitney U test for Usoskin dataset are 0.677 (NeuroDAVIS-t-SNE), 0.472 (NeuroDAVIS-Fit-SNE) 0.0001 (NeuroDAVIS-UMAP, NeuroDAVIS-PHATE and NeuroDAVIS-IVIS), and 0.0001 (NeuroDAVIS-t-SNE, NeuroDAVIS-UMAP, NeuroDAVIS-Fit-SNE, NeuroDAVIS-PHATE and NeuroDAVIS-IVIS) for Jurkat dataset.
## Acknowledgments
This work is supported by a DST-NSF grant provided to RKD through IDEAS-TIH, ISI Kolkata.
## Declarations
DBS works as an Associate Data Scientist at Tatras Data Services Pvt. Ltd. He has received no funds for this work.
|
2301.07099 | Adaptive Deep Neural Network Inference Optimization with EENet | Well-trained deep neural networks (DNNs) treat all test samples equally
during prediction. Adaptive DNN inference with early exiting leverages the
observation that some test examples can be easier to predict than others. This
paper presents EENet, a novel early-exiting scheduling framework for multi-exit
DNN models. Instead of having every sample go through all DNN layers during
prediction, EENet learns an early exit scheduler, which can intelligently
terminate the inference earlier for certain predictions, which the model has
high confidence of early exit. As opposed to previous early-exiting solutions
with heuristics-based methods, our EENet framework optimizes an early-exiting
policy to maximize model accuracy while satisfying the given per-sample average
inference budget. Extensive experiments are conducted on four computer vision
datasets (CIFAR-10, CIFAR-100, ImageNet, Cityscapes) and two NLP datasets
(SST-2, AgNews). The results demonstrate that the adaptive inference by EENet
can outperform the representative existing early exit techniques. We also
perform a detailed visualization analysis of the comparison results to
interpret the benefits of EENet. | Fatih Ilhan, Ka-Ho Chow, Sihao Hu, Tiansheng Huang, Selim Tekin, Wenqi Wei, Yanzhao Wu, Myungjin Lee, Ramana Kompella, Hugo Latapie, Gaowen Liu, Ling Liu | 2023-01-15T04:37:51Z | http://arxiv.org/abs/2301.07099v2 | # EENet: Learning to Early Exit for Budgeted Adaptive Inference
###### Abstract
Budgeted adaptive inference with early exits is an emerging technique to improve the computational efficiency of deep neural networks (DNNs) for edge AI applications with limited resources at test time. This method leverages the fact that different test data samples may not require the same amount of computation for a correct prediction. By allowing early exiting from full layers of DNN inference for some test examples, we can reduce latency and improve throughput of edge inference while preserving performance. Although there have been numerous studies on designing specialized DNN architectures for training early-exit enabled DNN models, most of the existing work employs hand-tuned or manual rule-based early exit policies. In this study, we introduce a novel multi-exit DNN inference framework, coined as EENet, which leverages multi-objective learning to optimize the early exit policy for a trained multi-exit DNN under a given inference budget. This paper makes two novel contributions. First, we introduce the concept of early exit utility scores by combining diverse confidence measures with class-wise prediction scores to better estimate the correctness of test-time predictions at a given exit. Second, we train a lightweight, budget-driven, multi-objective neural network over validation predictions to learn the exit assignment scheduling for query examples at test time. The EENet early exit scheduler optimizes both the distribution of test samples to different exits and the selection of the exit utility thresholds such that the given inference budget is satisfied while the performance metric is maximized. Extensive experiments are conducted on five benchmarks, including three image datasets (CIFAR-10, CIFAR-100, ImageNet) and two NLP datasets (SST-2, AgNews). The results demonstrate the performance improvements of EENet compared to existing representative early exit techniques. We also perform an ablation study and visual analysis to interpret the results.
## 1 Introduction
Deep neural networks (DNNs) have shown unprecedented success in various fields such as computer vision and NLP, thanks to the advances in computation technologies (GPUs, TPUs) and the increasing amount of available data to train large and complicated DNNs. However, these models usually have very high computational cost, which leads to many practical challenges in deployment on edge computing applications, especially for edge clients with limited resources such as smartphones, IoT devices, and embedded devices (Goodfellow et al., 2016; Laskaridis et al., 2021; Teerapittayanon et al., 2017). With this motivation, there has been a significant research focus on improving computational efficiency of DNN models, especially at inference time. To this end, several efficient techniques, such as model quantization (Gholami et al., 2022), neural network pruning (Ghosh et al., 2022), knowledge distillation (Hinton et al., 2015) and early exiting (Teerapittayanon et al., 2016), have been introduced
in the literature. Among these, early exiting has emerged as an efficient and customizable approach for deploying complex DNNs on edge devices, thanks to its modular implementation and its flexibility in supporting multi-fidelity application scenarios.
Early exiting employs the idea of injecting early exit classifiers into some intermediate layers of a deep learning model and gaining the capability to adaptively stop inference at one of these early exits at runtime (Laskaridis et al., 2021). In particular, this technique enables several different application scenarios for efficient inference, such as subnet-based inference, where a single-exit submodel is selected and deployed based on the constraints of the edge device. Another use case is budget-constrained adaptive inference, where the multi-exit DNN model is deployed under a given inference budget. At the model training phase, we need to train the DNN model with the additional multiple exit branches through joint loss optimization. During inference, the early-exit enabled DNN model can elastically adjust how much time to spend on each sample based on an early exit scheduling policy to maximize the overall performance metric under the given inference budget. In this setting, through learning an early exit policy, early exiting has the potential to leverage the fact that not all input samples require the same amount of computation for a correct prediction. This approach provides efficient utilization of the available inference resources on heterogeneous edge devices by exiting earlier for easier and later for more challenging data samples, as shown for some example images in Figure 1. Even though there is a significant line of research on improving the performance of early-exit neural networks through designing specialized DNN architectures (Huang et al., 2018; Veniat and Denoyer, 2018; Yang et al., 2020; Elbayad et al., 2020) and DNN training algorithms (Li et al., 2019; Phuong and Lampert, 2019), work on optimizing early exit policies is very limited. In the literature, most methods still consider hand-tuned or heuristics-based approaches to early exiting.
With this motivation in mind, we introduce EENet, the first lightweight and budget-driven early exit policy optimization framework, to learn the optimal early-exit policy for adaptive inference given a trained multi-exit model and inference budget. In particular, our approach employs a two-branch neural network optimized on validation predictions to estimate the correctness of a prediction and exit assignment of a sample. The design of EENet makes two original contributions. First, EENet introduces the concept of exit utility scores, and computes the exit utility score for each test input by jointly evaluating and combining two complimentary statistics: (i) the multiple confidence scores that quantify the correctness of the early exit prediction output and (ii) the class-wise prediction scores. This enables EENet to handle the cases with statistical differences among prediction scores for different classes. Second, based on exit utility scoring and inference budget constraint, the EENet early exit scheduler optimizes the distribution of test samples to different exits and auto-selects the exit utility threshold for each early exit such that the test performance is maximized while the inference budget is satisfied. The design of EENet is model-agnostic and hence, applicable to all pre-trained multi-exit DNN models. In addition, EENet enables flexible splitting of multi-exit DNN models for edge clients with heterogeneous computational resources by running only a partial model until a certain early exit.
We conduct extensive experiments to evaluate EENet with multiple DNN architectures (ResNet (He et al., 2016), DenseNet (Huang et al., 2017), MSDNet (Huang et al., 2018), BERT (Devlin et al., 2019)) on three image classification benchmarks (CIFAR10, CIFAR100, ImageNet) and two NLP benchmarks (SST-2, AgNews). We demonstrate the improvements of EENet in terms of test accuracy under a given inference budget (average latency), compared to existing representative approaches, such as BranchyNet (Teerapittayanon et al., 2016), MSDNet (Huang et al., 2018) and PABEE (Zhou et al., 2020), which has specifically been introduced for NLP tasks. We consider average latency as the budget definition as it usually reflects the inference constraints in real-life applications, compared to most of the existing studies that only analyze #FLOPs. We also provide an ablation study and visual analysis to interpret the behavior of our approach. Lastly, we report the computational cost statistics in space and time in terms of number of parameters and #FLOPs.
Figure 1: Example easy/difficult images from ImageNet on four different classes.
Related Work
BranchyNet (Teerapittayanon et al., 2016) is the first to explore the idea of early exits. It considers the entropy of prediction scores as the measure of confidence and sets the early exit thresholds heuristically. MSDNet (Huang et al., 2018) is the most representative architecture-specific early exit DNN training solution, which uses the maximum prediction score instead of entropy as the exit confidence measure. These manually defined confidence measures may be suboptimal, especially in the presence of statistical differences among prediction scores for different classes. Furthermore, these studies only consider the image classification task. Recent studies apply early exiting to NLP tasks. For example, PABEE (Zhou et al., 2020) shows that the manual prediction score-based exit confidence measuring approaches may cause a substantial performance drop for NLP tasks. Hence, PABEE proposes an early exit criterion based on the agreement among early exit classifiers, which stops the inference when the number of predictions on the same output reaches a certain patience threshold. However, this method may require a large number of early exits to produce meaningful scores so that the samples can be separated with a higher resolution for the exit decision. Nevertheless, all these methods introduce manually defined task-specific rules, which do not include any optimization of the early exit policies in terms of scoring function and threshold computation.
Some recent efforts propose other task-dependent confidence measures (Lin et al., 2021; Li et al., 2021) or modify the training objective to include exit policy learning during the training of a multi-exit DNN (Dai et al., 2020; Chen et al., 2020). EPNet (Dai et al., 2020) proposes to model the problem using Markov decision processes; however, it adds an early exit classifier at each exit to increase the number of states, which is computationally infeasible since each early exit introduces an additional computational cost during both training and inference, especially for deeper models. (Chen et al., 2020) proposes a variational Bayesian approach to learn when to stop predicting during training. Another drawback of these approaches is that they require a larger number of early exits to produce meaningful scores. To the best of our knowledge, EENet is the first to learn optimal early exit policies independently of the multi-exit DNN training process.
## 3 EENet Architecture and Methodology
Given a pre-trained DNN model with \(N\) layers, one can inject multiple exits and finetune the model. Three key questions are (i) what number of exits \(K\) is most suitable, (ii) how to determine at which layers \(l_{k}\) to place each exit \(k\), and (iii) which optimization algorithms to use for training. The first two questions remain open; hence, the number and location of exits are manually selected in existing work. Although most existing research addresses question (iii) by developing complex training algorithms, manually tuned exit policies are still used during inference. We argue that a model-agnostic approach should focus on lightweight adaptive learning of optimal early exit policies that can be applied to any pre-trained multi-exit model. To this end, we perform the optimization of early exit policies upon the completion of multi-exit model training. In this section, we provide the details of EENet by first explaining the multi-exit model training approach and then describing the model-agnostic adaptive inference optimization methodology.
### Multi-exit Model Training
To enable the given pre-trained DNN classifier to perform early exiting, we first inject early exit classifiers into the model at certain intermediate layers. We set the exit locations \(l_{k}\) following the even-spacing principle such that \(l_{k}=l_{0}+kL\) for \(k\in\{1,2,\ldots K-1\}\), where \(l_{0}\) is the location of the first exit, \(L=\lfloor\frac{N-l_{0}}{K-1}\rfloor\) and \(l_{K}=N\) is the location of the last exit, i.e. the full model. Let \(f\) denote a multi-exit classification model capable of outputting multiple predictions in one forward pass after the injection of early exit subnetworks \(f^{e}_{k}\). The architecture of \(f^{e}_{k}\) should be designed in a way that the additional cost is negligible compared to the full model. Therefore, we employ 3-layer CNNs for image classification models and a single fully-connected layer for text classification models.
Let us denote the set of output probability scores of \(f\) for one input sample \(\mathbf{x}\) as \(\{\mathbf{\hat{y}}_{k}\}_{k=1}^{K}\) and the corresponding label as \(y\in\mathcal{C}\), where \(K\) is the number of exits and \(\mathcal{C}=\{1,2,\ldots C\}\) is the set of classes. Here, at each exit \(k\), \(\mathbf{\hat{y}}_{k}=f_{k}(\mathbf{x})=[\ldots\hat{y}_{k,c}\ldots]\in\mathbb{R}^{C}\) is the vector of prediction scores for each class \(c\), where \(f_{k}\triangleq f^{e}_{k}\circ f^{c}_{k}\circ\ldots\circ f^{c}_{1}\) with \(f^{c}_{k}\) the \(k\)th core subnetwork and \(f^{e}_{k}\) the \(k\)th early exit
subnetwork of the multi-exit model \(f\). During the training/finetuning of these models, we minimize the weighted average of cross-entropy losses from each exit: \(\mathcal{L}_{multi\_exit}=\sum_{k=1}^{K}\gamma_{k}\mathcal{L}_{CE_{k}}\), where \(\mathcal{L}_{CE_{k}}\) is the cross-entropy loss and \(\gamma_{k}=\frac{k}{K(K+1)}\) is the loss weight of the \(k\)th exit.
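In PyTorch, this joint objective can be sketched as follows; the model is assumed to return a list of \(K\) logit tensors, one per exit, and the weights \(\gamma_k\) follow the formula above.

```
# Sketch of the multi-exit training loss (assumes the model returns one
# logit tensor per exit).
import torch.nn.functional as F

def multi_exit_loss(logits_per_exit, y):
    K = len(logits_per_exit)
    gammas = [k / (K * (K + 1)) for k in range(1, K + 1)]  # per-exit loss weights
    return sum(g * F.cross_entropy(logits, y)
               for g, logits in zip(gammas, logits_per_exit))
```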
### Adaptive Early-Exit Inference Optimization
**Problem Definition:** After training the multi-exit classification model \(f\) with \(K\) exits, we can move forward to generate an early exit policy under given budget constraints. To this end, we consider a given average per-sample inference budget \(B\) (in terms of latency, #FLOPs etc.) and the vector of inference costs \(\mathbf{c}\in\mathbb{R}^{K}\) of \(f\) until each exit. On a dataset \(\mathcal{D}=\{(\{\hat{\mathbf{y}}_{n,k}\}_{k=1}^{K},y_{n})\}_{n=1}^{N}\) with \(N\) examples, containing model prediction scores on validation samples and the corresponding labels, the goal is to find the exit utility scoring functions \(\{g_{k}\}_{k=1}^{K}\) and the thresholds \(\mathbf{t}\in\mathbb{R}^{K}\) that maximize the accuracy as follows:
\[\mathbf{t},g=\operatorname*{arg\,max}_{\mathbf{t}\in\mathbb{R}^{K},\{g_{k}:\mathbb{R}^{C}\rightarrow\mathbb{R}\}_{k=1}^{K}}\frac{1}{N}\sum_{n=1}^{N}\mathbf{1}_{\hat{y}_{n,k_{n}}=y_{n}}, \tag{1}\] \[k_{n}=\min\{k\,|\,g_{k}(\hat{\mathbf{y}}_{n,k})\geq t_{k}\} \tag{2}\]
while satisfying the given average per-sample inference budget \(B\) such that \(\frac{1}{N}\sum_{n=1}^{N}c_{k_{n}}\leq B\). Here, \(k_{n}\) denotes the minimum exit index where the computed utility score is greater than or equal to the threshold of that exit, i.e., the assigned exit for the \(n\)th sample. The exit utility scoring functions (\(g_{1},g_{2}\ldots g_{K}\)) and thresholds (\(t_{1},t_{2},\ldots t_{K}\)) that maximize the validation accuracy while satisfying the given average budget are then used for early-exit enabled inference.
**EENet Architecture:** Figure 2 gives an overview of our early exit policy learning architecture for adaptive inference. We solve the problem of optimizing an early exit policy by developing a multi-objective optimization approach with the target variables \(q_{k}\) and \(r_{k}\), representing the correctness of a prediction and exit assignment at the \(k\)th exit such that
\[q_{k}=\begin{cases}1&\text{if}\quad\hat{y}_{k}=y\\ 0&\text{if}\quad\hat{y}_{k}\neq y\end{cases} \tag{3}\] \[r_{k}=\begin{cases}\frac{1}{\sum_{k^{\prime}=1}^{K}q_{k^{\prime}}}&\text{if}\quad\hat{y}_{k}=y\\ \frac{1}{K}&\text{if}\quad\hat{y}_{k^{\prime}}\neq y\quad\forall k^{\prime}\in\{1\ldots K\}\\ 0&\text{otherwise},\end{cases} \tag{4}\]
Figure 2: System architecture of EENet.
where \(\hat{y}_{k}\triangleq\arg\max_{c\in\mathcal{C}}\hat{y}_{k,c}\). In addition, let us denote by \(\mathbf{a}_{k}\) the confidence score vector containing different measures based on maximum score, entropy and voting, such that
\[a_{k}^{(max)}=\max_{c\in\mathcal{C}}\hat{y}_{k,c}, \tag{5}\]
\[a_{k}^{(entropy)}=1+\frac{\sum_{c^{\prime}=1}^{C}\hat{y}_{k,c^{\prime}}\log\hat{ y}_{k,c^{\prime}}}{\log C}, \tag{6}\]
\[a_{k}^{(vote)}=\frac{1}{k}\max_{c\in\mathcal{C}}\sum_{k^{\prime}=1}^{k}\mathbf{1}_ {\hat{y}_{k^{\prime}}=c}. \tag{7}\]
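A NumPy sketch of these three measures is given below; `y_hat` is a hypothetical list of per-exit probability vectors for one sample, and the exit index is 0-based here, so the denominators shift accordingly.

```
# Confidence measures of Equations 5-7 for the exit with (0-based) index k.
import numpy as np

def confidence_scores(y_hat, k):
    p = y_hat[k]
    a_max = p.max()                                                   # Eq. 5
    a_entropy = 1.0 + (p * np.log(p + 1e-12)).sum() / np.log(len(p))  # Eq. 6
    votes = np.array([y_hat[j].argmax() for j in range(k + 1)])
    a_vote = np.bincount(votes).max() / (k + 1)                       # Eq. 7
    return np.array([a_max, a_entropy, a_vote])
```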
At each exit \(k\), using the prediction score vector \(\mathbf{\hat{y}}_{k}\) and the confidence score vector \(\mathbf{a}_{k}\), we compute the utility score \(\hat{q}_{k}\) and the assignment score \(\hat{r}_{k}\) as follows:
\[\mathbf{s}_{k}=\sigma_{lrelu}(\mathbf{W}_{k}^{(sh)}[\mathbf{\hat{y}}_{k},\mathbf{a}_{k}, \mathbf{b}_{k}]), \tag{8}\]
\[\hat{q}_{k}=g_{k}(\mathbf{\hat{y}}_{k},\mathbf{a}_{k},\mathbf{b}_{k})=\sigma_{sig}( \mathbf{W}_{k}^{(g)}\mathbf{s}_{k}), \tag{9}\]
\[\tilde{r}_{k}=h_{k}(\mathbf{\hat{y}}_{k},\mathbf{a}_{k},\mathbf{b}_{k})=\mathbf{W}_{k}^{(h)}\mathbf{s}_{k},\quad\hat{r}_{k}=\frac{e^{\tilde{r}_{k}}}{\sum_{k^{\prime}=1}^{K}e^{\tilde{r}_{k^{\prime}}}}, \tag{10}\]
where \(\mathbf{b}_{k}=[\hat{q}_{1},\dots\hat{q}_{k-1}]\) for \(k>1\) and an empty vector for \(k=1\). \(\sigma_{lrelu}(x)=\max(0,x)+0.01*\min(0,x)\) is the leaky ReLU activation function. Here, \(\mathbf{W}_{k}^{(g)},\mathbf{W}_{k}^{(h)}\in\mathbb{R}^{1\times D_{h}}\) and \(\mathbf{W}_{k}^{(sh)}\in\mathbb{R}^{D_{h}\times D}\) are the fully-connected layer weights for exit utility score function \(g_{k}\), exit assignment function \(h_{k}\) and the shared part of the network for these functions. Thus, the model weights for \(k\)th exit can be denoted as \(\mathbf{\theta}_{k}^{(g)}=\{\mathbf{W}_{k}^{(g)},\mathbf{W}_{k}^{(sh)}\}\) and \(\mathbf{\theta}_{k}^{(h)}=\{\mathbf{W}_{k}^{(h)},\mathbf{W}_{k}^{(sh)}\}\). Here, \(D=C+k+2\) is the number of input features and \(D_{h}\) is the hidden layer size.
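One possible PyTorch rendering of such a per-exit scoring head is sketched below; the class and variable names are ours, and the softmax of Equation 10 is applied afterwards across the \(K\) heads jointly.

```
# Sketch of the per-exit scoring head of Equations 8-10 (names are ours).
import torch
import torch.nn as nn

class ExitScorer(nn.Module):
    def __init__(self, num_classes, k, d_h):
        super().__init__()
        d_in = num_classes + k + 2                      # D = C + k + 2
        self.shared = nn.Linear(d_in, d_h, bias=False)  # W^(sh)
        self.g_head = nn.Linear(d_h, 1, bias=False)     # W^(g)
        self.h_head = nn.Linear(d_h, 1, bias=False)     # W^(h)
        self.act = nn.LeakyReLU(0.01)

    def forward(self, y_hat, a, b):
        # y_hat: (B, C) prediction scores, a: (B, 3) confidence measures,
        # b: (B, k-1) utility scores of the preceding exits
        s = self.act(self.shared(torch.cat([y_hat, a, b], dim=1)))
        q_hat = torch.sigmoid(self.g_head(s)).squeeze(1)  # Eq. 9
        r_tilde = self.h_head(s).squeeze(1)               # logit of Eq. 10
        return q_hat, r_tilde
```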
**Optimization:** After computing the target values \(q_{k}\) and \(r_{k}\) representing the correctness of a prediction and exit assignment using Equations 3 and 4, we compute the multi-objective loss defined as \(\mathcal{L}=\mathcal{L}_{g}+\mathcal{L}_{h}\), where \(\mathcal{L}_{g}\) and \(\mathcal{L}_{h}\) are the losses observed by the exit utility scoring functions \(g_{k}\), and exit assignment estimator functions \(h_{k}\) respectively. We define \(\mathcal{L}_{g}\) as follows:
\[\mathcal{L}_{g}=\frac{1}{K}\sum_{n=1}^{N}\sum_{k=1}^{K}w_{n,k}\ell_{g}(\hat{q }_{n,k},q_{n,k})\text{ such that} \tag{11}\]
\[\ell_{g}(\hat{q}_{k},q_{k})=-\left[q_{k}\log(\hat{q}_{k})+(1-q_{k})\log(1-\hat{q}_{k})\right]\text{ and} \tag{12}\]
\[w_{n,k}=\frac{1-\sum_{k^{\prime}=1}^{k-1}\hat{r}_{k^{\prime},n}}{\sum_{n^{\prime}=1}^{N}(1-\sum_{k^{\prime}=1}^{k-1}\hat{r}_{k^{\prime},n^{\prime}})}, \tag{13}\]
where \(N\) is the number of validation data samples and \(w_{n,k}\) is the loss weight for the \(n\)th sample at the \(k\)th exit. This weighting scheme, based on the survival probabilities up to that exit, encourages exit score estimator functions to specialize in their respective subset of data. The computation of weights in Equation 13 is performed outside the computation graph. Lastly, we define \(\mathcal{L}_{h}=\alpha\mathcal{L}_{budget}+\beta\mathcal{L}_{CE}\) such that
\[\mathcal{L}_{budget}=\frac{1}{B}|B-\frac{1}{N}\sum_{n=1}^{N}\sum_{k=1}^{K}\hat {r}_{n,k}c_{k}|\,\text{ and} \tag{14}\]
\[\mathcal{L}_{CE}=-\frac{1}{NK}\sum_{n=1}^{N}\sum_{k=1}^{K}\log(\hat{r}_{n,k})r_ {n,k}, \tag{15}\]
where \(\alpha,\beta>0\) are the loss weighting parameters and \(r_{k}\) is defined in Equation 4. We learn \(\{\mathbf{\theta}_{k}^{(g)}\}_{k=1}^{K}\) and \(\{\mathbf{\theta}_{k}^{(h)}\}_{k=1}^{K}\) by minimizing \(\mathcal{L}\) on \(\mathcal{D}=\{(\{\mathbf{\hat{y}}_{n,k}\}_{k=1}^{K},y_{n})\}_{n=1}^{N}\) using stochastic gradient descent. We provide the pseudocode for the optimization of the utility function \(g\) and thresholds \(\mathbf{t}\) in Algorithm 1. Here, given the validation dataset with model predictions/labels, budget and inference costs, we first learn the exit utility scoring and assignment functions by minimizing \(\mathcal{L}\). Then, for each exit, we let the samples with the highest utility scores exit until the quota assigned to that exit is full. We then set the threshold of each exit to the lowest utility score among the samples exiting there. We also provide Algorithm 2 for early-exit enabled adaptive inference with EENet in Appendix A. During inference, at each exit \(k\), we compute the exit utility score using the optimized exit utility scoring function \(g_{k}\) from the output of Algorithm 1 and stop the inference if the score is above the threshold \(t_{k}\).
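A compact PyTorch sketch of this multi-objective loss is given below; `q_hat` and `r_hat` denote the \(N\times K\) tensors of utility scores and exit assignment probabilities, `q` and `r` the targets of Equations 3 and 4, and `c` the vector of per-exit costs (all names are ours).

```
# Sketch of L = L_g + alpha * L_budget + beta * L_CE on validation predictions.
import torch
import torch.nn.functional as F

def eenet_loss(q_hat, r_hat, q, r, c, budget, alpha, beta):
    # survival-based sample weights of Equation 13, kept outside the graph
    with torch.no_grad():
        survive = 1.0 - torch.cumsum(r_hat, dim=1) + r_hat  # 1 - sum_{k'<k} r_k'
        w = survive / survive.sum(dim=0, keepdim=True)
    K = q.shape[1]
    l_g = (w * F.binary_cross_entropy(q_hat, q, reduction="none")).sum() / K
    l_budget = (budget - (r_hat * c).sum(dim=1).mean()).abs() / budget  # Eq. 14
    l_ce = -(r * torch.log(r_hat + 1e-12)).mean()                       # Eq. 15
    return l_g + alpha * l_budget + beta * l_ce
```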
```
Inputs: \(\mathcal{D}=\{(\{\hat{\mathbf{y}}_{n,k}\}_{k=1}^{K},y_{n})\}_{n=1}^{N}\) (validation predictions and labels), \(B\) (average inference budget per sample), \(\mathbf{c}\in\mathbb{R}^{K}\) (inference costs per sample until each exit)
Outputs: \(\{g_{k}\}_{k=1}^{K}\) (exit utility score functions), \(\mathbf{t}\in\mathbb{R}^{K}\) (thresholds)
1: Initialize: \(\mathbf{h}\leftarrow\texttt{zeros}(N)\), \(\mathbf{t}\leftarrow\texttt{ones}(K)*1e8\)
2: Learn \(\{g_{k}\}_{k=1}^{K}\) and \(\{h_{k}\}_{k=1}^{K}\) by minimizing \(\mathcal{L}\) on \(\mathcal{D}\)
3: Compute exit scores: \(\mathbf{C}\triangleq(\hat{q}_{n,k})\in\mathbb{R}^{N\times K}\) using equation 9
4: Sort sample indices by descending score at each exit: \(\mathbf{S}=(s_{n,k})\leftarrow\texttt{argsort}(-\mathbf{C},0)\)
5: for exit index \(k=1\) to \(K\) do
6:   \(c\gets 0\)
7:   Estimate exit distribution: \(p_{k}\leftarrow\frac{1}{N}\sum_{n=1}^{N}\hat{r}_{n,k}\) using equation 10
8:   for sample index \(n=1\) to \(N\) do
9:     if \(h_{s_{n,k}}=0\) then
10:      \(c\gets c+1\)
11:      \(h_{s_{n,k}}\gets 1\)
12:      if \(c=\texttt{round}(Np_{k})\) then
13:        \(t_{k}\leftarrow\hat{q}_{s_{n,k},k}\)
14:        break
15: \(t_{K}\leftarrow-1e8\)
16: return \(\{g_{k}\}_{k=1}^{K}\), \(\mathbf{t}\)
```
**Algorithm 1** Early Exit Inference Optimization Algorithm
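The threshold-selection loop of Algorithm 1 can also be sketched in NumPy as follows; `C` is the matrix of validation exit scores and `p` the learned exit distribution (variable names are ours).

```
# NumPy sketch of the threshold selection in Algorithm 1.
import numpy as np

def select_thresholds(C, p):
    N, K = C.shape
    taken = np.zeros(N, dtype=bool)
    t = np.full(K, 1e8)
    order = np.argsort(-C, axis=0)       # most utilizable samples first
    for k in range(K):
        quota, c = round(N * p[k]), 0
        for n in order[:, k]:
            if not taken[n]:
                taken[n], c = True, c + 1
                if c == quota:
                    t[k] = C[n, k]       # lowest score among exited samples
                    break
    t[-1] = -1e8                         # last exit accepts all remaining samples
    return t
```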
## 4 Experiments
We conduct extensive experiments to evaluate EENet and report the performance improvements obtained by EENet for budget-constrained adaptive inference on five benchmarks (CIFAR-10, CIFAR-100, ImageNet, SST-2 and AgNews). We demonstrate that EENet consistently outperforms existing representative multi-exit solutions, such as BranchyNet (Teerapittayanon et al., 2016), MSDNet (Huang et al., 2018) and PABEE (Zhou et al., 2020). Our ablation study and visual analysis further interpret the design features of EENet.
**Datasets and Preprocessing:** In image classification experiments, we work on CIFAR-10/100 (Krizhevsky, 2009) and ImageNet (Deng et al., 2009) datasets. CIFAR-10 and CIFAR-100 contain 50000 train and 10000 test images with 32x32 resolution from 10 and 100 classes respectively. ImageNet contains 1.2 million train and 150000 validation images (used for test) with 224x224 resolution from 1000 classes. We hold out 5000 randomly selected images from the CIFAR-10/100 train set and 25000 images from the ImageNet train set for validation. We follow the data augmentation techniques applied in (He et al., 2016): zero padding, center cropping and random horizontal flip with 0.5 probability. In text classification experiments, we consider SST-2 (Socher et al., 2013) and AGNews (Zhang et al., 2015) datasets. SST-2 contains 67349 train, 872 validation and 1821 test sentences with positive or negative labels. AGNews contains 120000 train and 7600 test sentences from four classes. We hold out 5000 randomly selected sentences for validation. For tokenization, we use the pre-trained tokenizer for the BERT model provided by the open-source HuggingFace library (Wolf et al., 2020).
**Experimental Setup:** We perform experiments with ResNet (He et al., 2016) on CIFAR-10 and with DenseNet121 (Huang et al., 2017) on CIFAR-100. We use the default ResNet settings for the 56-layer architecture and insert two evenly spaced early exits at the 18th and 36th layers. For DenseNet, we follow the default settings for the 121-layer configuration and insert three early exit layers at the 12th, 36th and 84th layers after transition layers. We train these models using the Adam optimizer (Kingma & Ba, 2015) for 150 epochs (the first 20 epochs without early exits) and a batch size of 128, with an initial learning rate of 0.1 (decays by 0.1 at the 50th and 100th epochs). On ImageNet dataset, we use MSDNet (Huang et al., 2018) with 35 layers, 4 scales and 32 initial hidden dimensions. We insert four evenly spaced early exits at the 7th, 14th, 21st and 28th layers. Each early exit classifier consists of three 3x3 convolutional layers with ReLU activations. We use the pre-trained BERT (Devlin et al., 2019) provided by the open-source HuggingFace library (Wolf et al., 2020) on SST-2 and AgNews datasets. We insert three evenly spaced early exits at the 3rd, 6th and 9th layers. Each early exit classifier consists of a fully-connected layer. We finetune the models for 20 epochs using gradient descent with a learning rate of 3e-5 and a batch size of 16. For EENet, we set \(D_{h}=0.5D\),
\(\alpha=1e-1\) and \(\beta=1e-3\) for image classification experiments, and we optimize the weights using the Adam optimizer with a learning rate of \(3e-5\) on validation data. For text classification experiments, we set \(D_{h}=2D\), \(\alpha=1e-2\) and \(\beta=1e-4\), and use a learning rate of \(1e-4\). We use L2 regularization with a weight of 0.01. In all experiments, we stop the training if the loss does not decrease for 50 consecutive epochs on the validation set. For MSDNet (Huang et al., 2018), BranchyNet (Teerapittayanon et al., 2016) and PABEE (Zhou et al., 2020), we use the maximum score \(a_{k}^{(max)}\), entropy-based \(a_{k}^{(entropy)}\) and agreement-based \(a_{k}^{(vote)}\) scores as provided in Equations 5, 6 and 7. To compute the thresholds for these methods, we follow the approach in (Huang et al., 2018) and assume that the exit assignment of samples follows the geometric distribution that satisfies the budget on validation data. Our implementation is in Python 3.7 with the PyTorch 1.12 library. Each latency measurement is carried out 100 times on a machine with an 8-core 2.9GHz CPU. The extra inference time caused by the computations of Equations 8 and 9 is also included in the reported latency measurements, and the cost is much smaller compared to the cost of the forward pass of the model, as shown in Table 2.
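For these baselines, the geometric exit distribution meeting the budget can be found by a simple bisection over its parameter; the following sketch reflects our reading of the budgeting step in (Huang et al., 2018), and all names are ours.

```
# Bisection for the geometric exit distribution whose expected cost meets
# the budget (our own reconstruction of the baseline budgeting step).
import numpy as np

def geometric_exit_distribution(costs, budget, iters=50):
    lo, hi = 1e-6, 10.0                  # ratio < 1 favors early, > 1 late exits
    for _ in range(iters):
        q = (lo + hi) / 2
        p = q ** np.arange(len(costs))   # unnormalized geometric weights
        p = p / p.sum()
        if (p * costs).sum() > budget:
            hi = q                       # too expensive: shift mass to early exits
        else:
            lo = q
    return p
```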
**Validation of EENet with Comparison:** We compare EENet with BranchyNet (Teerapittayanon et al., 2016), MSDNet (Huang et al., 2018) and PABEE (Zhou et al., 2020) in terms of the accuracy obtained under different budget constraints. We consider average latency per sample as the budget definition throughout the experiments. To this end, we collect results for each method within the budget range of \(B\in[c_{1},c_{K}]\). Table 1 contains the results on CIFAR-10, CIFAR-100, ImageNet, SST-2 and AgNews datasets; as demonstrated, EENet consistently performs better than the other early exit approaches. For example, consider CIFAR-100 (row 2): the original pre-trained DenseNet121 with target accuracy of 75.08% has an average inference time of 10.20ms. We set three levels of early-exit inference time budgets: 7ms, 6.75ms, 6ms. For CIFAR-100 under the low budget setting with 6 ms, EENet yields 1.45% higher accuracy compared to MSDNet, the second best performer, and outperforms PABEE by over 9% in accuracy gain, showing that the optimal early exit policy learned by EENet is more effective than the hand-tuned early exit policies used in BranchyNet, MSDNet and PABEE. In addition, we observe that EENet achieves greater performance gains as the budget tightens. These observations are consistent over all benchmarks. In NLP experiments, the performance gains brought by EENet are more significant compared to the existing methods, with the improvements ranging from 1% (at high budget) to 15% (at low budget). For example, for SST-2, under the low inference budget of 1.75s, EENet achieves 90.45% accuracy compared to 84.33% by the second best performing multi-exit approach, BranchyNet. Under the 2s inference budget, EENet achieves 91.45% accuracy compared to 87.71% by the second best multi-exit approach, MSDNet. Similar observations are also found on AgNews, showing that EENet offers more consistent and stable performance improvements across all five benchmarks.
**Visual Analysis of Some EENet Design Features:** In Figure 3, we use examples to show why the exit utility scores produced by EENet are more effective in finding an optimal early exit
\begin{table}
\begin{tabular}{l|c|c c c c} Dataset, Model and & Average Latency & \multicolumn{4}{c}{Accuracy (\%)} \\ Base Performance & Budget per sample & **BranchyNet** & **MSDNet** & **PABEE** & **EENet** \\ \hline
**CIFAR-10** & 3.50 ms & 93.76 & 93.81 & 93.69 & **93.84** \\ ResNet56 w/ 3 exits & 3.00 ms & 92.57 & 92.79 & 91.85 & **92.90** \\
93.90\% @ 4.70 ms & 2.50 ms & 87.55 & 88.76 & 84.39 & **88.90** \\ \hline
**CIFAR-100** & 7.50 ms & 73.96 & 74.01 & 73.68 & **74.08** \\ DenseNet121 w/ 4 exits & 6.75 ms & 71.65 & 71.99 & 68.10 & **72.75** \\
75.08\% @ 10.20 ms & 6.00 ms & 68.13 & 68.70 & 61.13 & **70.15** \\ \hline
**ImageNet** & 1.50 s & 74.10 & 74.13 & 74.05 & **74.18** \\ MSDNet35 w/ 5 exits & 1.25 s & 72.44 & 72.70 & 72.40 & **72.75** \\
74.60\% @ 2.35 s & 1.00 s & 69.32 & 69.76 & 68.13 & **69.88** \\ \hline
**SST-2** & 2.50 s & 90.86 & 91.00 & 90.75 & **92.07** \\ BERT w/ 4 exits & 2.00 s & 87.66 & 87.71 & 86.99 & **91.45** \\
92.36\% @ 3.72 s & 1.75 s & 84.33 & 84.30 & 80.99 & **90.45** \\ \hline
**AgNews** & 2.50 s & 92.95 & 92.98 & 92.57 & **93.84** \\ BERT w/ 4 exits & 2.00 s & 85.58 & 84.93 & 85.22 & **93.75** \\
93.98\% @ 3.72 s & 1.50 s & 75.08 & 73.07 & 74.67 & **90.63** \\ \end{tabular}
\end{table}
Table 1: Image and text classification experiment results in terms of accuracy obtained at different budget levels (average latency per sample).
policy, by comparing them with the maximum prediction scoring method for early exit used in MSDNet and others in the literature. Ten randomly selected classes are listed on the x-axis, sorted by the accuracy achieved by the full model on the corresponding class. As the left figure shows, using maximum prediction scores to determine exit utility may lead to missing some good early exit opportunities. For example, consider those classes for which the predictor model produces relatively low maximum prediction scores (less than 0.7), such as lizard, man, butterfly and fox: the predictor predicts them correctly even though the relative confidence is not very high. In comparison, EENet defines the exit utility scoring by combining three quality measures (entropy, maximum prediction confidence and voting). Hence, the exit utility scores obtained by EENet reflect the easiness of test examples more accurately. For example, those classes that have lower maximum prediction scores on the true predictions, such as lizard, man, butterfly and fox, have high early exit utility scores in EENet, as shown in the right figure highlighted in the blue oval. Similarly, the classes that make false predictions with high maximum confidence, such as castle, highlighted in the lower right corner, have low exit utility scores in EENet. This illustrates one of the novel features of EENet: learning optimal early exit policies by leveraging a high-quality exit utility function and ranking under the given accuracy and inference latency budget constraints.
Figure 4: Visual comparison of the early exit approaches on CIFAR-100 test data with DenseNet121 (4 exits) for the average latency budget of 6 ms. We illustrate the randomly selected nine samples from three classes and the exit location that they were assigned. Images with green/red borders are predicted correctly/incorrectly at the corresponding exit. We also report the number of correct predictions and exited samples at each exit. In this case, EENet obtains the performance gain by allowing more samples to exit at the second exit.
Figure 3: Analysis on the benefit of exit utility scores obtained by EENet, which provides a clearer separation of true and false predictions for all classes, compared to maximum prediction score based confidence, which is popularly used in the literature. Images with blue/red borders are predicted correctly/incorrectly at the first exit of DenseNet121.
Figure 4 provides a visual comparison of EENet (right) with MSDNet (left) and BranchyNet (middle) on CIFAR-100 test data, with four exits of the respective early exit models. The visual comparison uses the average latency budget of 6 ms (recall Table 1). We use nine randomly selected examples from three classes in the test set and display the exit location that they were assigned by EENet (right) and by MSDNet (left). Images with green/red borders are predicted correctly/incorrectly at the corresponding exit. We also report the number of correct predictions and the number of exited samples at each exit. In this case, EENet obtains the performance gain over MSDNet by allowing more correct predictions to exit earlier at the second exit.
**Ablation Study:** We also analyze the effect of different components in EENet on performance through an in-depth investigation of the results for SST-2 and AgNews. Figure 5 provides the plots of average inference time vs. accuracy for two additional variants of EENet and compares them with MSDNet and BranchyNet. The first variant shows the results of EENet without optimizing the exit utility scoring, instead directly using maximum prediction scores. The second variant shows the results of EENet without optimizing exit distributions through our budget-constrained learning, instead directly using the geometric distribution. For both SST-2 and AgNews, EENet outperforms its two variants. All three versions of EENet outperform MSDNet and BranchyNet.
**Flexible Deployment on Heterogeneous Edge Clients:** EENet by design provides model-agnostic adaptive early exit inference, applicable to all pre-trained early exit models. Another EENet design goal is to enable flexible NN splitting by early exits, allowing edge clients with limited resources to benefit from early exit models. Table 2 reports the number of parameters (#PRMs) and the per-sample inference latency until each exit for the four multi-exit DNNs used in the experiments. For each model, we also provide the additional computational cost of EENet in employing the budgeted adaptive early exit policy. First, the overhead caused by EENet is negligible (\(<0.5\%\)) compared to the cost of the forward pass of the original pre-trained DNN model. For application scenarios with hard constraints (storage/RAM limitations) for edge deployment, the partial multi-exit model with EENet, split at a certain exit, can be delivered, with the partial model size meeting the edge deployment constraints. For this subnetwork with EENet, those test examples with exit utility scores below the learned exit threshold are passed to the next-level edge server in the hierarchical edge computing infrastructure, which has higher computational capacity to continue the multi-exit inference.
## 5 Conclusion
We have presented EENet, a novel, lightweight and model-agnostic early exit policy optimization framework for budgeted adaptive inference. The paper makes a number of original contributions. First, our approach introduces an early exit utility scoring function which combines a set of complementary early exit confidence measures and class-wise prediction scores. Second, we optimize the assignment of test data samples to different exits by learning the optimal early exit distribution and the adaptive thresholds for test-time early exit scheduling. As opposed to previous manually defined heuristics-based early exit techniques, which may be suboptimal depending on the specific multi-exit DNN architecture, our approach is model-agnostic and can easily be used in different learning tasks with pre-trained DNN models in vision and NLP applications. Extensive experiments on five benchmarks (CIFAR-10, CIFAR-100, ImageNet, SST-2, AgNews) demonstrate that EENet consistently
Figure 5: Average latency (ms) vs Accuracy (%) results at SST-2 and AgNews datasets for BranchyNet, MSDNet and EENet variations (without distribution/scoring optimization).
outperforms the existing representative techniques, MSDNet, BranchyNet and PABEE, and the performance improvements become more significant as the given average latency budget per sample tightens. Lastly, our ablation study and visual analysis further demonstrate the effects of the core components of the EENet design.
## References
* Chen et al. (2020) Xinshi Chen, Hanjun Dai, Yu Li, Xin Gao, and Le Song. Learning to stop while learning to predict. In _ICML_, pp. 1520-1530, 2020. URL [http://proceedings.mlr.press/v119/chen20c.html](http://proceedings.mlr.press/v119/chen20c.html).
* Dai et al. (2020) Xin Dai, Xiangnan Kong, and Tian Guo. Epnet: Learning to exit with flexible multi-branch network. In _Proceedings of the 29th ACM International Conference on Information; Knowledge Management_, CIKM '20, pp. 235-244, New York, NY, USA, 2020. Association for Computing Machinery. ISBN 9781450368599. doi: 10.1145/3340531.3411973. URL [https://doi.org/10.1145/3340531.3411973](https://doi.org/10.1145/3340531.3411973).
* Deng et al. (2009) Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In _2009 IEEE Conference on Computer Vision and Pattern Recognition_, pp. 248-255, 2009. doi: 10.1109/CVPR.2009.5206848.
* Devlin et al. (2019) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_, pp. 4171-4186, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1423. URL [https://aclanthology.org/N19-1423](https://aclanthology.org/N19-1423).
* Elbayad et al. (2020) Maha Elbayad, Jiatao Gu, Edouard Grave, and Michael Auli. Depth-adaptive transformer. In _International Conference on Learning Representations_, 2020. URL [https://openreview.net/forum?id=SJg7KhVKPH](https://openreview.net/forum?id=SJg7KhVKPH).
* Gholami et al. (2022) Amir Gholami, Sehoon Kim, Zhen Dong, Zhewei Yao, Michael Mahoney, and Kurt Keutzer. _A Survey of Quantization Methods for Efficient Neural Network Inference_, pp. 291-326. 01 2022. ISBN 9781003162810. doi: 10.1201/9781003162810-13.
* Ghosh et al. (2022) Sayan Ghosh, Karthik Prasad, Xiaoliang Dai, Peizhao Zhang, Bichen Wu, Graham Cormode, and Peter Vajda. Pruning compact convnets for efficient inference, 2022. URL [https://openreview.net/forum?id=_g28dG4vOr9](https://openreview.net/forum?id=_g28dG4vOr9).
* Goodfellow et al. (2016) Ian Goodfellow, Yoshua Bengio, Aaron Courville, and Yoshua Bengio. _Deep learning_, volume 1. MIT Press, 2016.
* He et al. (2016) Kaiming He, X. Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. _2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_, pp. 770-778, 2016.
* Hinton et al. (2015) Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network, 2015. URL [https://arxiv.org/abs/1503.02531](https://arxiv.org/abs/1503.02531).
\begin{table}
\begin{tabular}{l|c|c|c|c|c|c|c|c|c|c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{2}{c|}{**Exit-1**} & \multicolumn{2}{c|}{**Exit-2**} & \multicolumn{2}{c|}{**Exit-3**} & \multicolumn{2}{c|}{**Exit-4**} & \multicolumn{2}{c|}{**Exit-5**} & \multicolumn{2}{c}{**Base Model**} \\ & \#PRMs & Latency & \#PRMs & Latency & \#PRMs & Latency & \#PRMs & Latency & \#PRMs & Latency & \#PRMs & Latency \\ \hline
**ResNet56** & 0.06M & 2.31ms & 0.28M & 4.15ms & 0.96M & 4.93ms & - & - & - & 0.86M & 4.77ms \\ (w/ EENet) & (\(\sim\)0.07K) & (\(\sim\)0.01ms) & (\(\sim\)0.00K) & (\(\sim\)0.10ms) & (\(\sim\)0.11K) & (\(\sim\)0.01ms) & - & - & - & - & - \\ \hline
**DenseNet121** & 0.06M & 2.49ms & 0.25M & 5.30ms & 0.86M & 9.53ms & 1.17M & 10.20ms & - & - & 1.04M & 10.03ms \\ (w/ EENet) & (\(\sim\)5.25K) & (\(\sim\)0.08ms) & (\(\sim\)5.36K) & (\(\sim\)0.08ms) & (\(\sim\)5.47K) & (\(\sim\)0.08ms) & (\(\sim\)5.7K) & (\(\sim\)0.08ms) & - & - & - \\ \hline
**MSDNet35** & 8.76M & 0.71s & 20.15M & 1.48s & 31.73M & 1.86s & 41.86M & 2.13s & 62.31M & 2.34s & 58.70M & 2.27s \\ (w/ EENet) & (\(\sim\)0.25M) & (\(\sim\)0.63ms) & (\(\sim\)0.25M) & (\(\sim\)0.63ms) & (\(\sim\)0.25M) & (\(\sim\)0.63ms) & (\(\sim\)0.25M) & (\(\sim\)0.63ms) & (\(\sim\)0.25M) & (\(\sim\)0.63ms) & - & - \\ \hline
**BERT** & 4.69M & 1.00s & 67.55M & 1.83s & 89.40M & 2.91s & 111.26M & 3.72s & - & - & 109.90M & 3.67s \\ (w/ EENet) & (\(\sim\)2.00K) & (\(\sim\)0.01ms) & (\(\sim\)200) & (\(\sim\)0.01ms) & (\(\sim\)200) & (\(\sim\)0.01ms) & (\(\sim\)200) & (\(\sim\)0.01ms) & - & - & - & - \\ \hline \hline
\end{table}
Table 2: Model statistics in terms of the number of parameters (#PRMs) and the inference latency until each exit, together with the base model configuration without early exits. The overhead associated with EENet is also provided in parentheses.
Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q. Weinberger. Densely connected convolutional networks. In _2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_, pp. 2261-2269, 2017. doi: 10.1109/CVPR.2017.243.
* Huang et al. (2018) Gao Huang, Danlu Chen, Tianhong Li, Felix Wu, Laurens van der Maaten, and Kilian Weinberger. Multi-scale dense networks for resource efficient image classification. In _International Conference on Learning Representations_, 2018. URL [https://openreview.net/forum?id=Hk2aImxAb](https://openreview.net/forum?id=Hk2aImxAb).
* Kingma and Ba (2015) Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Yoshua Bengio and Yann LeCun (eds.), _3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings_, 2015. URL [http://arxiv.org/abs/1412.6980](http://arxiv.org/abs/1412.6980).
* Krizhevsky (2009) Alex Krizhevsky. Learning multiple layers of features from tiny images. Technical report, University of Toronto, 2009.
* Laskaridis et al. (2021) Stefanos Laskaridis, Alexandros Kouris, and Nicholas D. Lane. Adaptive inference through early-exit networks: Design, challenges and directions. In _Proceedings of the 5th International Workshop on Embedded and Mobile Deep Learning_, EMDL'21, pp. 1-6, New York, NY, USA, 2021. Association for Computing Machinery. ISBN 9781450385978. doi: 10.1145/3469116.3470012. URL [https://doi.org/10.1145/3469116.3470012](https://doi.org/10.1145/3469116.3470012).
* Li et al. (2019) Hao Li, Hong Zhang, Xiaojuan Qi, Ruigang Yang, and Gao Huang. Improved techniques for training adaptive deep networks. _2019 IEEE/CVF International Conference on Computer Vision (ICCV)_, pp. 1891-1900, 2019.
* Li et al. (2021) Shuang Li, Jinming Zhang, Wenxuan Ma, Chi Harold Liu, and Wei Li. Dynamic domain adaptation for efficient inference. In _IEEE Conference on Computer Vision and Pattern Recognition_, 2021.
* Lin et al. (2021) Ziqian Lin, Sreya Dutta Roy, and Yixuan Li. Mood: Multi-level out-of-distribution detection. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2021.
* Phuong and Lampert (2019) Mary Phuong and Christoph Lampert. Distillation-based training for multi-exit architectures. In _2019 IEEE/CVF International Conference on Computer Vision (ICCV)_, pp. 1355-1364, 2019. doi: 10.1109/ICCV.2019.00144.
* Socher et al. (2013) Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. Recursive deep models for semantic compositionality over a sentiment treebank. In _Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing_, pp. 1631-1642, Seattle, Washington, USA, October 2013. Association for Computational Linguistics. URL [https://aclanthology.org/D13-1170](https://aclanthology.org/D13-1170).
* Teerapittayanon et al. (2016) Surat Teerapittayanon, Bradley McDanel, and H. T. Kung. Branchynet: Fast inference via early exiting from deep neural networks. _2016 23rd International Conference on Pattern Recognition (ICPR)_, pp. 2464-2469, 2016.
* Teerapittayanon et al. (2017) Surat Teerapittayanon, Bradley McDanel, and H.T. Kung. Distributed deep neural networks over the cloud, the edge and end devices. In _2017 IEEE 37th International Conference on Distributed Computing Systems (ICDCS)_, pp. 328-339, 2017. doi: 10.1109/ICDCS.2017.226.
* Veniat and Denoyer (2018) T. Veniat and L. Denoyer. Learning time/memory-efficient deep architectures with budgeted super networks. In _2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_, pp. 3492-3500, Los Alamitos, CA, USA, jun 2018. IEEE Computer Society. doi: 10.1109/CVPR.2018.00368. URL [https://doi.ieeecomputersociety.org/10.1109/CVPR.2018.00368](https://doi.ieeecomputersociety.org/10.1109/CVPR.2018.00368).
* Wolf et al. (2020) Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. Transformers: State-of-the-art natural language processing. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations_, pp. 38-45, Online, October 2020. Association for Computational Linguistics. URL [https://www.aclweb.org/anthology/2020.emnlp-demos.6](https://www.aclweb.org/anthology/2020.emnlp-demos.6).
* Yang et al. (2020) Le Yang, Yizeng Han, Xi Chen, Shiji Song, Jifeng Dai, and Gao Huang. Resolution adaptive networks for efficient inference. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_, 2020.
* Zhang et al. (2015) Xiang Zhang, Junbo Zhao, and Yann LeCun. Character-level convolutional networks for text classification. In C. Cortes, N. Lawrence, D. Lee, M. Sugiyama, and R. Garnett (eds.), _Advances in Neural Information Processing Systems_, volume 28. Curran Associates, Inc., 2015. URL [https://proceedings.neurips.cc/paper/2015/file/250cf8b51c773f3f8dc8b4be867a9a02-Paper.pdf](https://proceedings.neurips.cc/paper/2015/file/250cf8b51c773f3f8dc8b4be867a9a02-Paper.pdf).
* Zhou et al. (2020) Wangchunshu Zhou, Canwen Xu, Tao Ge, Julian McAuley, Ke Xu, and Furu Wei. Bert loses patience: Fast and robust inference with early exit. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin (eds.), _Advances in Neural Information Processing Systems_, volume 33, pp. 18330-18341. Curran Associates, Inc., 2020. URL [https://proceedings.neurips.cc/paper/2020/file/d4dd111a4fd973394238aca5c05bebe3-Paper.pdf](https://proceedings.neurips.cc/paper/2020/file/d4dd111a4fd973394238aca5c05bebe3-Paper.pdf).
## Appendix A Adaptive Inference Algorithm with Early Exits
```
1:Inputs: \(\boldsymbol{x}\) (test input sample), \(\{f_{k}\}_{k=1}^{K}\) (predictor functions), \(\{g_{k}\}_{k=1}^{K}\) (early exit utility score estimator functions), \(\boldsymbol{t}\) (early exit thresholds)
2:Output: \(\hat{y}\) (predicted label)
3:for exit index \(k=1\) to \(K\) do
4:  Obtain predictor model output: \(\hat{\boldsymbol{y}}_{k}\leftarrow f_{k}(\boldsymbol{x})\)
5:  Compute exit score \(\hat{q}_{k}\) with \(g_{k}\) using equation 9
6:  if \(\hat{q}_{k}\geq t_{k}\) then
7:    \(\hat{y}_{k}\leftarrow\arg\max\hat{\boldsymbol{y}}_{k}\)
8:    return \(\hat{y}_{k}\)
9:return \(\hat{y}_{K}\)
```
**Algorithm 2** Early Exit Inference Algorithm
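For concreteness, the control flow of Algorithm 2 can be sketched in a few lines of Python. This is a minimal illustration of ours, not code from the paper: `predictors`, `scorers`, and `thresholds` are hypothetical stand-ins for \(f_k\), \(g_k\), and \(\boldsymbol{t}\), and the exit-score computation of equation 9 is abstracted behind `scorers[k]`.

```python
import numpy as np

def early_exit_inference(x, predictors, scorers, thresholds):
    """Sketch of Algorithm 2: return the first exit whose utility
    score clears its threshold; otherwise fall back to the last exit."""
    K = len(predictors)
    for k in range(K):
        y_hat = predictors[k](x)       # class-probability vector at exit k
        q_hat = scorers[k](x, y_hat)   # scalar exit utility score (cf. eq. 9)
        if q_hat >= thresholds[k]:
            return int(np.argmax(y_hat))  # confident enough: exit early at k
    return int(np.argmax(y_hat))          # y_hat here is the K-th exit's output
```

In a real multi-exit network the per-exit predictors share a common backbone, so `predictors[k](x)` would reuse the features already computed for exits \(1,\dots,k-1\) rather than re-running the model from scratch.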
## Appendix B Exit Distribution Behavior Analysis on CIFAR-100 and AgNews
Here, we visualize the exit distribution behavior of EENet on CIFAR-100 under different average latency budget levels (6ms, 6.5ms, 7ms, 8ms). In the scatter plots provided in Figures 6, 7, 8 and 9, at each exit we plot the validation samples, with the class of the sample on the x-axis and the exit utility score on the y-axis. Green/red color represents correct/incorrect predictions at the corresponding exit, whereas a yellow cross marker indicates that the sample has already exited. Computed thresholds are drawn as horizontal blue lines, and the percentages of exiting samples are provided in the subplot titles. We also analyze the results on AgNews by comparing the exit assignments of MSDNet, BranchyNet and EENet in Figure 10.
Figure 6: Distribution of samples to different exits under the average latency budget of 6 milliseconds.
Figure 7: Distribution of samples to different exits under the average latency budget of 6.5 milliseconds.
Figure 8: Distribution of samples to different exits under the average latency budget of 7 milliseconds.
Figure 9: Distribution of samples to different exits under the average latency budget of 8 milliseconds. |
2303.14844 | Analyzing Convergence in Quantum Neural Networks: Deviations from Neural Tangent Kernels | A quantum neural network (QNN) is a parameterized mapping efficiently implementable on near-term Noisy Intermediate-Scale Quantum (NISQ) computers. It can be used for supervised learning when combined with classical gradient-based optimizers. Despite the existing empirical and theoretical investigations, the convergence of QNN training is not fully understood. Inspired by the success of the neural tangent kernels (NTKs) in probing into the dynamics of classical neural networks, a recent line of works proposes to study over-parameterized QNNs by examining a quantum version of tangent kernels. In this work, we study the dynamics of QNNs and show that contrary to popular belief it is qualitatively different from that of any kernel regression: due to the unitarity of quantum operations, there is a non-negligible deviation from the tangent kernel regression derived at the random initialization. As a result of the deviation, we prove the at-most sublinear convergence for QNNs with Pauli measurements, which is beyond the explanatory power of any kernel regression dynamics. We then present the actual dynamics of QNNs in the limit of over-parameterization. The new dynamics capture the change of convergence rate during training and imply that the range of measurements is crucial to the fast QNN convergence. | Xuchen You, Shouvanik Chakrabarti, Boyang Chen, Xiaodi Wu | 2023-03-26T22:58:06Z | http://arxiv.org/abs/2303.14844v1 | # Analyzing Convergence in Quantum Neural Networks: Deviations from Neural Tangent Kernels
###### Abstract
A quantum neural network (QNN) is a parameterized mapping efficiently implementable on near-term Noisy Intermediate-Scale Quantum (NISQ) computers. It can be used for supervised learning when combined with classical gradient-based optimizers. Despite the existing empirical and theoretical investigations, the convergence of QNN training is not fully understood. Inspired by the success of the neural tangent kernels (NTKs) in probing into the dynamics of classical neural networks, a recent line of works proposes to study over-parameterized QNNs by examining a quantum version of tangent kernels. In this work, we study the dynamics of QNNs and show that contrary to popular belief it is qualitatively different from that of any kernel regression: due to the unitarity of quantum operations, there is a non-negligible deviation from the tangent kernel regression derived at the random initialization. As a result of the deviation, we prove the at-most sublinear convergence for QNNs with Pauli measurements, which is beyond the explanatory power of any kernel regression dynamics. We then present the actual dynamics of QNNs in the limit of over-parameterization. The new dynamics capture the change of convergence rate during training and imply that the range of measurements is crucial to the fast QNN convergence.
## 1 Introduction
Analogous to the classical logic gates, quantum gates are the basic building blocks for quantum computing. A variational quantum circuit (also referred to as an ansatz) is composed of parameterized quantum gates. A quantum neural network (QNN) is nothing
but an instantiation of learning with parametric models using variational quantum circuits and quantum measurements: A \(p\)-parameter \(d\)-dimensional QNN for a dataset \(\{\mathbf{x}_{i},y_{i}\}\) is specified by an encoding \(\mathbf{x}_{i}\mapsto\boldsymbol{\rho}_{i}\) of the feature vectors into quantum states in an underlying \(d\)-dimensional Hilbert space \(\mathcal{H}\), a variational circuit \(\mathbf{U}(\boldsymbol{\theta})\) with real parameters \(\boldsymbol{\theta}\in\mathbb{R}^{p}\), and a quantum measurement \(\mathbf{M}_{0}\). The predicted output \(\hat{y}_{i}\) is obtained by measuring \(\mathbf{M}_{0}\) on the output \(\mathbf{U}(\boldsymbol{\theta})\boldsymbol{\rho}_{i}\mathbf{U}^{\dagger}( \boldsymbol{\theta})\). Like deep neural networks, the parameters \(\boldsymbol{\theta}\) in the variational circuits are optimized by gradient-based methods to minimize an objective function that measures the misalignments of the predicted outputs and the ground truth labels.
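As a concrete illustration of this forward pass, the prediction \(\hat{y}_{i}=\operatorname{tr}(\mathbf{M}_{0}\,\mathbf{U}(\boldsymbol{\theta})\boldsymbol{\rho}_{i}\mathbf{U}^{\dagger}(\boldsymbol{\theta}))\) can be computed in a few lines of NumPy. This is a generic sketch of ours (the function name is hypothetical, and `U` is assumed to have already been assembled from \(\boldsymbol{\theta}\)):

```python
import numpy as np

def qnn_predict(rho, U, M0):
    """y_hat = tr(M0 U rho U^dagger) for one input state.
    rho: (d, d) density matrix; U: (d, d) unitary at the current parameters;
    M0: (d, d) Hermitian measurement."""
    out_state = U @ rho @ U.conj().T           # evolve the input density matrix
    return np.real(np.trace(M0 @ out_state))   # a measurement outcome is real
```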
With the recent development of quantum technology, the near-term Noisy Intermediate-Scale Quantum (NISQ) (Preskill, 2018) computer has become an important platform for demonstrating quantum advantage with practical applications. As a hybrid of classical optimizers and quantum representations, QNNs are a promising candidate for demonstrating such advantage on quantum computers available to us in the near future: quantum machine learning models are proven to have a margin over their classical counterparts in terms of expressive power due to the exponentially large Hilbert space of quantum states (Huang et al., 2021; Anschuetz, 2022). On the other hand, by delegating the optimization procedures to classical computers, the hybrid method requires significantly less quantum resources, which is crucial for readily available quantum computers with limited coherence time and error correction. There have been proposals of QNNs (Dunjko and Briegel, 2018; Schuld and Killoran, 2019) for classification (Farhi et al., 2020; Romero et al., 2017) and generative learning (Lloyd and Weedbrook, 2018; Zoufal et al., 2019; Chakrabarti et al., 2019).
Despite their potential, there are challenges in the practical deployment of QNNs. Most notably, the optimization problem for training QNNs can be highly non-convex. The landscape of QNN training may be swarmed with spurious local minima and saddle points that can trap gradient-based optimization methods (You and Wu, 2021; Anschuetz and Kiani, 2022). QNNs with large dimensions also suffer from a phenomenon called the _barren plateau_ (McClean et al., 2018), where the gradients of the parameters vanish at random initializations, making convergence slow even in a trap-free landscape. These difficulties in training QNNs, together with the challenge of classically simulating QNNs at a decent scale, call for a theoretical understanding of the convergence of QNNs.
Neural Tangent KernelsMany of the theoretical difficulties in understanding QNNs have also been encountered in the study of classical deep neural networks: despite the landscape of neural networks being non-convex and susceptible to spurious local minima and saddle points, it has been empirically observed that the training error decays exponentially in the training time (Livni et al., 2014; Arora et al., 2019) in the highly _over-parameterized_ regime with a sufficiently large number of trainable parameters. This phenomenon is theoretically explained by connecting the training dynamics of neural networks to the kernel regression: the kernel regression model generalizes the linear regression by equipping the linear model with non-linear feature maps. Given a training set \(\{\mathbf{x}_{j},y_{j}\}_{j=1}^{m}\subset\mathcal{X}\times\mathcal{Y}\) and a non-linear feature map \(\phi:\mathcal{X}\rightarrow\mathcal{X}^{\prime}\) mapping the features to a potentially high-dimensional feature space \(\mathcal{X}^{\prime}\), the kernel regression solves for the optimal weight \(\mathbf{w}\) that minimizes the mean-square loss \(\frac{1}{2m}\sum_{j=1}^{m}(\mathbf{w}^{T}\phi(\mathbf{x}_{j})-y_{j})^{2}\). The name of kernel regression stems from the fact that the optimal hypothesis \(\mathbf{w}\) depends on the high-dimensional feature vectors \(\{\phi(\mathbf{x}_{j})\}_{j=1}^{m}\) through a
\(m\times m\)_kernel_ matrix \(\mathbf{K}\), such that \(K_{ij}=\phi(\mathbf{x}_{i})^{T}\phi(\mathbf{x}_{j})\). The kernel regression enjoys a linear convergence (i.e. the mean square loss decaying exponentially over time) when \(\mathbf{K}\) is positive definite.
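As a small illustration (our own sketch, not code from the references), the kernel matrix can be assembled directly from the feature map:

```python
import numpy as np

def gram_matrix(phi, X):
    """K_ij = phi(x_i)^T phi(x_j) for a feature map phi and samples X (rows)."""
    F = np.stack([phi(x) for x in X])   # (m, D): one feature vector per row
    return F @ F.T                      # (m, m) kernel matrix, PSD by construction

# Example with a quadratic feature map; K is positive definite whenever the
# feature vectors are linearly independent, which yields linear convergence.
K = gram_matrix(lambda x: np.concatenate([x, x**2]), np.random.randn(5, 3))
```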
The kernel matrix associated with a neural network is determined by tracking how the predictions for each training sample evolve jointly at random initialization. The study of the neural network convergence then reduces to characterizing the corresponding kernel matrices (the neural tangent kernel, or the NTK). In addition to the convergence results, the NTK also serves as a tool for studying other aspects of neural networks, including generalization (Canatar et al., 2021; Chen et al., 2020) and stability (Bietti and Mairal, 2019).
The key observation that justifies the study of neural networks with neural tangent kernels, is that the NTK becomes a constant (over time) during training in the limit of infinite layer widths. This has been theoretically established starting with the analysis of wide fully-connected neural networks (Jacot et al., 2018; Arora et al., 2019; Chizat et al., 2019) and later generalized to a variety of architectures (e.g. Allen-Zhu et al. (2019)).
Quantum NTKsInspired by the success of NTKs, recent years have witnessed multiple works attempting to associate over-parameterized QNNs to kernel regression. Along the line there are two types of studies. The first category investigates and compares the properties of the "quantum" kernel induced by the quantum encoding of classical features, where \(K_{ij}\) associated with the \(i\)-th and \(j\)-th feature vectors \(\mathbf{x}_{i}\) and \(\mathbf{x}_{j}\) equals \(\mathrm{tr}(\boldsymbol{\rho}_{i}\boldsymbol{\rho}_{\mathrm{j}})\) with \(\boldsymbol{\rho}_{i}\) and \(\boldsymbol{\rho}_{j}\) being the quantum state encodings, without referring to the dynamics of training (Schuld and Killoran, 2019; Huang et al., 2021; Liu et al., 2022b). The second category seeks to directly establish the quantum version of NTK for QNNs by examining the evolution of the model predictions at random initialization, which is the recipe for calculating the classical NTK in Arora et al. (2019): Shirai et al. (2021) empirically evaluates the direct training of the quantum NTK instead of the original QNN formulation. On the other hand, by analyzing the time derivative of the quantum NTK at initialization, Liu et al. (2022a) conjectures that in the limit of over-parameterization, the quantum NTK is a constant over time and therefore the dynamics reduces to a kernel regression.
Despite recent efforts, a rigorous answer remains elusive as to whether the quantum NTK is a constant during training for over-parameterized QNNs. We show that the answer to this question is, surprisingly, negative: as a result of the unitarity of quantum circuits, there is a finite change in the conjectured quantum NTK as the training error decreases, even in the limit of over-parameterization.
ContributionsIn this work, we focus on QNNs equipped with the mean square loss, trained using gradient flow, following Arora et al. (2019). In Section 3, we show that, despite the formal resemblance to kernel regression dynamics, the over-parameterized QNN does not follow the dynamics of _any_ kernel regression due to the unitarity: for the widely-considered setting of classifications with Pauli measurements, we show that the objective function at time \(t\) decays at most as a polynomial function of \(1/t\) (Theorem 3.2). This contradicts the dynamics of any kernel regression with a positive definite kernel, which exhibits convergence with \(L(t)\leq L(0)\exp(-ct)\) for some positive constant \(c\). We also identify the true asymptotic dynamics of QNN training as regression with a time-varying Gram matrix \(\mathbf{K}_{\text{asym}}\) (Lemma 4.1),
and show rigorously that the real dynamics concentrates to the asymptotic one in the limit \(p\to\infty\) (Theorem 4.2). This reduces the problem of investigating QNN convergence to studying the convergence of the asymptotic dynamics governed by \(\mathbf{K}_{\mathsf{asym}}\).
We also consider a model of QNNs where the final measurement is post-processed by a linear scaling. In this setting, we provide a complete analysis of the convergence of the asymptotic dynamics in the case of one training sample (Corollary 4.3), and provide further theoretical evidence of convergence in the neighborhood of most global minima when the number of samples \(m>1\) (Theorem 4.4). This theoretical evidence is supplemented with an empirical study that demonstrates, in general, the convergence of the asymptotic dynamics when \(m\geq 1\). Coupled with our proof of convergence, these results form the strongest concrete evidence of the convergence of training for over-parameterized QNNs.
Connections to previous worksOur result extends the existing literature on QNN landscapes (e.g. Anschuetz (2022), Russell et al. (2017)) and looks into the training dynamics, which allows us to characterize the rate of convergence and to show how the range of the measurements affects the convergence to global minima. The dynamics for over-parameterized QNNs proposed by us can be reconciled with the existing calculations of quantum NTK as follows: in the regime of over-parameterization, the QNN dynamics coincides with the quantum NTK dynamics conjectured in Liu et al. (2022) at random initialization; yet it deviates from quantum NTK dynamics during training, and the deviation does not vanish in the limit of \(p\to\infty\).
## 2 Preliminaries
Empirical risk minimization (ERM)A supervised learning problem is specified by a joint distribution \(\mathcal{D}\) over the feature space \(\mathcal{X}\) and the label space \(\mathcal{Y}\), and a family \(\mathcal{F}\) of mappings from \(\mathcal{X}\) to \(\mathcal{Y}\) (i.e. the hypothesis set). The goal is to find an \(f\in\mathcal{F}\) that well predicts the label \(y\) given the feature \(\mathbf{x}\) in expectation, for pairs of \((\mathbf{x},y)\in\mathcal{X}\times\mathcal{Y}\) drawn \(i.i.d.\) from the distribution \(\mathcal{D}\).
Given a training set \(\mathcal{S}=\{\mathbf{x}_{j},y_{j}\}_{j=1}^{m}\) composed of \(m\) pairs of features and labels, we search for the optimal \(f\in\mathcal{F}\) by the _empirical risk minimization_ (ERM): let \(\ell\) be a loss function \(\ell:\mathcal{Y}\times\mathcal{Y}\to\mathbb{R}\), ERM finds an \(f\in\mathcal{F}\) that minimizes the average loss: \(\min_{f\in\mathcal{F}}\frac{1}{m}\sum_{i=1}^{m}\ell(\hat{y}_{i},y_{i}),\; \text{where}\;\hat{y}_{i}=f(\mathbf{x}_{i})\). We focus on the common choice of the _square loss_\(\ell(\hat{y},y)=\frac{1}{2}(\hat{y}-y)^{2}\).
Classical neural networksA popular choice of the hypothesis set \(\mathcal{F}\) in modern-day machine learning is the _classical neural networks_. A vanilla version of the \(L\)-layer feed-forward neural network takes the form \(f(x;\mathbf{W}_{1},\cdots,\mathbf{W}_{L})=\mathbf{W}_{L}\sigma(\cdots\mathbf{ W}_{2}\sigma(\mathbf{W}_{1}\sigma(x))\cdots)\), where \(\sigma(\cdot)\) is a non-linear activation function, and for all \(l\in[L]\), \(\mathbf{W}_{l}\in\mathbb{R}^{d_{l}\times d_{l-1}}\) is the weights in the \(l\)-th layer, with \(d_{L}=1\) and \(d_{0}\) the same as the dimension of the feature space \(\mathcal{X}\). It has been shown that, in the limit \(\min_{l=1}^{L-1}d_{l}\to\infty\), the training of neural networks with square loss is close to kernel learning, and therefore enjoys a linear convergence rate (Jacot et al., 2018; Arora et al., 2019; Allen-Zhu et al., 2019; Oymak and Soltanolkotabi, 2020).
Quantum neural networksQuantum neural networks are a family of parameterized hypothesis sets analogous to their classical counterparts. At a high level, they have a layered structure like a classical neural network: at each layer, a linear transformation acts on the output from the previous layer. A quantum neural network differs from its classical counterpart in the following three aspects.
(1) Quantum states as inputsA \(d\)-dimensional quantum state is represented by a _density matrix_\(\mathbf{\rho}\), which is a positive semidefinite \(d\times d\) Hermitian with trace 1. A state is said to be pure if \(\mathbf{\rho}\) is rank-1. Pure states can therefore be equivalently represented by a state vector \(\mathbf{v}\) such that \(\mathbf{\rho}=\mathbf{vv}^{\dagger}\). The inputs to QNNs are quantum states. They can either be drawn as samples from a quantum-physical problem or be the encodings of classical feature vectors.
(2) ParameterizationIn classical neural networks, each layer is composed of a linear transformation and a non-linear activation, and the matrix associated with the linear transformation can be directly optimized at each entry. In QNNs, the entries of each linear transformation can not be directly manipulated. Instead we update parameters in a variational ansatz to update the linear transformations. More concretely, a general \(p\)-parameter ansatz \(\mathbf{U}(\mathbf{\theta})\) in a \(d\)-dimensional Hilbert space can be specified by a set of \(d\times d\) unitaries \(\{\mathbf{U}_{0},\mathbf{U}_{1},\cdots,\mathbf{U}_{p}\}\) and a set of non-zero \(d\times d\) Hermitians \(\{\mathbf{H}^{(1)},\mathbf{H}^{(2)},\cdots,\mathbf{H}^{(p)}\}\) as
\[\mathbf{U}_{p}\exp(-i\theta_{p}\mathbf{H}^{(p)})\mathbf{U}_{p-1} \exp(-i\theta_{p-1}\mathbf{H}^{(p-1)})\cdots\exp(-i\theta_{2}\mathbf{H}^{(2)}) \mathbf{U}_{1}\exp(-i\theta_{1}\mathbf{H}^{(1)})\mathbf{U}_{0}. \tag{1}\]
Without loss of generality, we assume that \(\text{tr}(\mathbf{H}^{(l)})=0\). This is because adding a Hermitian proportional to \(\mathbf{I}\) on the generator \(\mathbf{H}^{(l)}\) does not change the density matrix of the output states. Notice that most \(p\)-parameter ansatze \(\mathbf{U}:\mathbb{R}^{p}\rightarrow\mathbb{C}^{d\times d}\) can be expressed as Equation 1. One exception may be the ansatz design with intermediate measurements (e.g. Cong et al. (2019)). In Section 4, we will also consider the periodic ansatz:
**Definition 1** (Periodic ansatz).: A \(d\)-dimensional \(p\)-parameter periodic ansatz \(\mathbf{U}(\mathbf{\theta})\) is defined as
\[\mathbf{U}_{p}\exp(-i\theta_{p}\mathbf{H})\cdot\cdots\cdot\mathbf{U}_{1}\exp( -i\theta_{1}\mathbf{H})\mathbf{U}_{0}, \tag{2}\]
where \(\mathbf{U}_{l}\) are sampled \(i.i.d.\) with respect to the Haar measure over the special unitary group \(SU(d)\), and \(\mathbf{H}\) is a non-zero trace-0 Hermitian.
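A numerical sketch of Definition 1 (our own illustration, with hypothetical function names): Haar-random unitaries can be sampled via the QR decomposition of a complex Gaussian matrix, and each layer applies a matrix exponential of the shared generator \(\mathbf{H}\). For simplicity we sample from \(U(d)\) rather than \(SU(d)\); the global-phase difference cancels in \(\mathbf{U}\boldsymbol{\rho}\mathbf{U}^{\dagger}\).

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

def haar_unitary(d):
    """Sample a d x d Haar-random unitary (QR of a complex Ginibre matrix)."""
    Z = (rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))) / np.sqrt(2)
    Q, R = np.linalg.qr(Z)
    return Q * (np.diagonal(R) / np.abs(np.diagonal(R)))  # normalize column phases

def periodic_ansatz(theta, U_list, H):
    """U(theta) = U_p e^{-i th_p H} ... U_1 e^{-i th_1 H} U_0, as in Line (2).
    U_list holds the p+1 fixed unitaries U_0, ..., U_p."""
    U = U_list[0]
    for l, th in enumerate(theta, start=1):
        U = U_list[l] @ expm(-1j * th * H) @ U
    return U

d, p = 8, 16
H = np.diag(np.arange(d) - (d - 1) / 2.0).astype(complex)  # a traceless generator
U_list = [haar_unitary(d) for _ in range(p + 1)]
```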
Up to a unitary transformation, the periodic ansatz is equivalent to an ansatz in Line (1) where \(\{\mathbf{H}^{(l)}\}_{l=1}^{p}\) are sampled as \(\mathbf{V}_{l}\mathbf{H}\mathbf{V}_{l}^{\dagger}\) with \(\mathbf{V}_{l}\) being Haar-random \(d\times d\) unitary matrices. Similar ansatze have been considered in McClean et al. (2018); Anschuetz (2022); You and Wu (2021); You et al. (2022).
(3) Readout with measurementsContrary to classical neural networks, the readout from a QNN requires performing quantum _measurements_. A measurement is specified by a Hermitian \(\mathbf{M}\). The outcome of measuring a quantum state \(\mathbf{\rho}\) with a measurement \(\mathbf{M}\) is \(\text{tr}(\mathbf{\rho}\mathbf{M})\), which is a linear function of \(\mathbf{\rho}\). A common choice is the Pauli measurement: Pauli matrices are \(2\times 2\) Hermitians that are also unitary:

\[\sigma_{X}=\begin{bmatrix}0&1\\ 1&0\end{bmatrix},\quad\sigma_{Y}=\begin{bmatrix}0&-i\\ i&0\end{bmatrix},\quad\sigma_{Z}=\begin{bmatrix}1&0\\ 0&-1\end{bmatrix}.\]

The Pauli measurements are tensor products of Pauli matrices, featuring eigenvalues of \(\pm 1\).
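For instance (a sketch of ours), a Pauli measurement on \(n\) qubits is a Kronecker product of the \(2\times 2\) matrices above, and its \(\pm 1\) spectrum can be checked numerically:

```python
import numpy as np
from functools import reduce

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def pauli_measurement(factors):
    """Tensor product of single-qubit operators, e.g. sigma_Z x I x I."""
    return reduce(np.kron, factors)

M0 = pauli_measurement([sz, I2, I2])                     # 8 x 8, three qubits
print(np.unique(np.round(np.linalg.eigvalsh(M0), 8)))    # -> [-1.  1.]
```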
ERM of quantum neural network.We focus on quantum neural networks equipped with the mean-square loss. Solving the ERM for a dataset \(\mathcal{S}:=\{(\boldsymbol{\rho}_{j},y_{j})\}_{j=1}^{m}\subseteq(\mathbb{C}^ {d\times d}\times\mathbb{R})^{m}\) involves optimizing the objective function \(\min_{\boldsymbol{\theta}}L(\boldsymbol{\theta}):=\frac{1}{2m}\sum_{j=1}^{m} \big{(}\hat{y}_{j}(\boldsymbol{\theta})-y_{j}\big{)}^{2}\), where \(\hat{y}_{j}(\boldsymbol{\theta})=\text{tr}(\boldsymbol{\rho}_{j}\mathbf{U}^{ \dagger}(\boldsymbol{\theta})\mathbf{M}_{0}\mathbf{U}(\boldsymbol{\theta}))\) for all \(j\in[m]\) with \(\mathbf{M}_{0}\) being the quantum measurement and \(\mathbf{U}(\boldsymbol{\theta})\) being the variational ansatz. Typically, a QNN is trained by optimizing the ERM objective function by gradient descent: at the \(t\)-th iteration, the parameters are updated as \(\boldsymbol{\theta}(t+1)\leftarrow\boldsymbol{\theta}(t)-\eta\nabla L( \boldsymbol{\theta}(t))\), where \(\eta\) is the learning rate; for sufficiently small \(\eta\), the dynamics of gradient descent reduces to that of the gradient flow: \(d\boldsymbol{\theta}(t)/dt=-\eta\nabla L(\boldsymbol{\theta}(t))\). Here we focus on the gradient flow setting following Arora et al. (2019).
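A minimal end-to-end sketch of this training loop (ours, for illustration only): the gradient is estimated by central finite differences rather than backpropagation or parameter-shift rules, and a small step size approximates the gradient flow studied below.

```python
import numpy as np

def qnn_loss(theta, ansatz, data, M0):
    """L(theta) = (1/2m) sum_j (tr(rho_j U^dag M0 U) - y_j)^2."""
    U = ansatz(theta)
    M = U.conj().T @ M0 @ U                       # parameterized measurement
    res = [np.real(np.trace(rho @ M)) - y for rho, y in data]
    return 0.5 * np.mean(np.square(res))

def train(theta, ansatz, data, M0, eta=1e-3, steps=1000, eps=1e-6):
    """Plain gradient descent with a central-difference gradient estimate."""
    for _ in range(steps):
        grad = np.zeros_like(theta)
        for l in range(theta.size):
            e = np.zeros_like(theta); e[l] = eps
            grad[l] = (qnn_loss(theta + e, ansatz, data, M0)
                       - qnn_loss(theta - e, ansatz, data, M0)) / (2 * eps)
        theta = theta - eta * grad
    return theta
```

Combined with `periodic_ansatz` from the earlier sketch, one would pass `ansatz = lambda th: periodic_ansatz(th, U_list, H)` and a list of `(rho, y)` pairs as `data`.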
Rate of convergenceIn the optimization literature, the rate of convergence describes how fast an iterative algorithm approaches an (approximate) solution. For a general function \(L\) with variables \(\boldsymbol{\theta}\), let \(\boldsymbol{\theta}(t)\) be the solution maintained at the time step \(t\) and \(\boldsymbol{\theta}^{\star}\) be the optimal solution. The algorithm is said to be converging _exponentially fast_ or at a _linear rate_ if \(L(\boldsymbol{\theta}(t))-L(\boldsymbol{\theta}^{\star})\leq\alpha\exp(-ct)\) for some constants \(c\) and \(\alpha\). In contrast, algorithms with the suboptimality gap \(L(\boldsymbol{\theta}(t))-L(\boldsymbol{\theta}^{\star})\) decreasing more slowly than exponentially are said to be converging at a _sublinear_ rate (e.g. \(L(\boldsymbol{\theta}(t))-L(\boldsymbol{\theta}^{\star})\) decaying with \(t\) as a polynomial of \(1/t\)). We will mainly consider the setting where \(L(\boldsymbol{\theta}^{\star})=0\) (i.e. the _realizable_ case) with continuous time \(t\).
Other notationsWe use \(\left\lVert\cdot\right\rVert_{\text{op}}\), \(\left\lVert\cdot\right\rVert_{F}\) and \(\left\lVert\cdot\right\rVert_{\text{tr}}\) to denote the operator norm (i.e. the largest eigenvalue in terms of the absolute values), Frobenius norm and the trace norm of matrices; we use \(\left\lVert\cdot\right\rVert_{p}\) to denote the \(p\)-norm of vectors, with the subscript omitted for \(p=2\). We use \(\text{tr}(\cdot)\) to denote the trace operation.
## 3 Deviations of QNN Dynamics from NTK
Consider a regression model on an \(m\)-sample training set: for all \(j\in[m]\), let \(y_{j}\) and \(\hat{y}_{j}\) be the label and the model prediction of the \(j\)-th sample. The _residual_ vector \(\mathbf{r}\) is an \(m\)-dimensional vector with \(r_{j}:=y_{j}-\hat{y}_{j}\). The dynamics of the kernel regression is characterized by the first-order linear dynamics of the residual vectors: let \(\mathbf{w}\) be the learned model parameter, and let \(\phi(\cdot)\) be the fixed non-linear map. Recall that the kernel regression minimizes \(L(\mathbf{w})=\frac{1}{2m}\sum_{j=1}^{m}(\mathbf{w}^{T}\phi(\mathbf{x}_{j})-y_{j})^{2}\) for a training set \(\mathcal{S}=\{(\mathbf{x}_{j},y_{j})\}_{j=1}^{m}\), and the gradient with respect to \(\mathbf{w}\) is \(\frac{1}{m}\sum_{j=1}^{m}(\mathbf{w}^{T}\phi(\mathbf{x}_{j})-y_{j})\phi(\mathbf{x}_{j})=-\frac{1}{m}\sum_{j=1}^{m}r_{j}\phi(\mathbf{x}_{j})\). Under the gradient flow with learning rate \(\eta\), the weight \(\mathbf{w}\) updates as \(\frac{d\mathbf{w}}{dt}=\frac{\eta}{m}\sum_{j=1}^{m}r_{j}\phi(\mathbf{x}_{j})\), and the \(i\)-th entry of the
residual vector updates as \(dr_{i}/dt=-\phi(\mathbf{x}_{i})^{T}\frac{d\mathbf{w}}{dt}=-\frac{\eta}{m}\sum_{j= 1}^{m}\phi(\mathbf{x}_{i})^{T}\phi(\mathbf{x}_{j})r_{j}\), or more succinctly \(d\mathbf{r}/dt=-\frac{\eta}{m}\mathbf{K}\mathbf{r}\) with \(\mathbf{K}\) being the kernel/Gram matrix defined as \(K_{ij}=\phi(\mathbf{x}_{i})^{T}\phi(\mathbf{x}_{j})\) (see also Arora et al. (2019)). Notice that the kernel matrix \(\mathbf{K}\) is a constant of time and is independent of the weight \(\mathbf{w}\) or the labels.
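This linear ODE has the closed-form solution \(\mathbf{r}(t)=e^{-(\eta/m)\mathbf{K}t}\,\mathbf{r}(0)\), which is easy to evaluate numerically (our sketch):

```python
import numpy as np
from scipy.linalg import expm

def kernel_residual(t, K, r0, eta=1.0):
    """r(t) = expm(-(eta/m) K t) r(0) for the dynamics dr/dt = -(eta/m) K r."""
    m = K.shape[0]
    return expm(-(eta / m) * K * t) @ r0

# When K is positive definite, ||r(t)|| <= exp(-(eta/m) lambda_min t) ||r(0)||,
# so the mean-square loss r^T r / (2m) decays at a linear (exponential) rate.
```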
Dynamics of residual vectorsWe start by characterizing the dynamics of the residual vectors for the general form of \(p\)-parameter QNNs and highlight the limitation of viewing the over-parameterized QNNs as kernel regressions. Similar to the kernel regression, \(\frac{dr_{j}}{dt}=-\frac{d\hat{y}_{j}}{dt}=-\operatorname{tr}(\boldsymbol{ \rho}_{j}\frac{d}{dt}\mathbf{U}^{\dagger}(\boldsymbol{\theta}(t))\mathbf{M}_ {0}\mathbf{U}(\boldsymbol{\theta}(t)))\) in QNNs. We derive the following dynamics of \(\mathbf{r}\) by tracking the parameterized measurement \(\mathbf{M}(\boldsymbol{\theta})=\mathbf{U}^{\dagger}(\boldsymbol{\theta}) \mathbf{M}_{0}\mathbf{U}(\boldsymbol{\theta})\) as a function of time \(t\).
**Lemma 3.1** (Dynamics of the residual vector).: _Consider a QNN instance with an ansatz \(\mathbf{U}(\boldsymbol{\theta})\) defined as in Line (1), a training dataset \(\mathcal{S}=\{(\boldsymbol{\rho}_{j},y_{j})\}_{j=1}^{m}\), and a measurement \(\mathbf{M}_{0}\). Under the gradient flow for the objective function \(L(\boldsymbol{\theta})=\frac{1}{2m}\sum_{j=1}^{m}\big{(}\operatorname{tr}( \boldsymbol{\rho}_{j}\mathbf{U}^{\dagger}(\boldsymbol{\theta})\mathbf{M}_{0} \mathbf{U}(\boldsymbol{\theta}))-y_{j}\big{)}^{2}\) with learning rate \(\eta\), the residual vector \(\mathbf{r}\) satisfies the differential equation_
\[\frac{d\mathbf{r}(\boldsymbol{\theta}(t))}{dt}=-\frac{\eta}{m}\mathbf{K}( \mathbf{M}(\boldsymbol{\theta}(t)))\mathbf{r}(\boldsymbol{\theta}(t)), \tag{3}\]
_where \(\mathbf{K}\) is a positive semi-definite matrix-valued function of the parameterized measurement. The \((i,j)\)-th element of \(\mathbf{K}\) is defined as_
\[\sum_{l=1}^{p}\big{(}\operatorname{tr}\big{(}\mathrm{i}[\mathbf{M}( \boldsymbol{\theta}(t)),\boldsymbol{\rho}_{i}]\mathbf{H}_{l}\big{)} \operatorname{tr}\big{(}\mathrm{i}[\mathbf{M}(\boldsymbol{\theta}(t)), \boldsymbol{\rho}_{j}]\mathbf{H}_{l}\big{)}\big{)}. \tag{4}\]
_Here \(\mathbf{H}_{l}:=\mathbf{U}_{0}^{\dagger}\mathbf{U}_{1:l-1}^{\dagger}( \boldsymbol{\theta})\mathbf{H}^{(l)}\mathbf{U}_{1:l-1}(\boldsymbol{\theta}) \mathbf{U}_{0}\), is a function of \(\boldsymbol{\theta}\) with \(\mathbf{U}_{1:r}(\boldsymbol{\theta})\) being the shorthand for \(\mathbf{U}_{r}\exp(-i\theta_{r}\mathbf{H}^{(r)})\cdots\mathbf{U}_{1}\exp(-i \theta_{1}\mathbf{H}^{(1)})\)._
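Equation (4) says each sample \(j\) carries a real feature vector with entries \(\operatorname{tr}\big{(}\mathrm{i}[\mathbf{M},\boldsymbol{\rho}_{j}]\mathbf{H}_{l}\big{)}\), so \(\mathbf{K}=\Phi\Phi^{T}\). A direct NumPy transcription (our sketch; the \(\boldsymbol{\theta}\)-dependent conjugated generators \(\mathbf{H}_{l}\) are assumed precomputed and passed in) makes the positive semi-definiteness evident:

```python
import numpy as np

def comm(A, B):
    return A @ B - B @ A

def lemma31_gram(M, rhos, H_list):
    """K_ij = sum_l tr(i[M, rho_i] H_l) tr(i[M, rho_j] H_l)  (Eq. 4)."""
    # i[M, rho] is Hermitian, so each trace below is real up to rounding.
    Phi = np.array([[np.real(np.trace(1j * comm(M, rho) @ H)) for H in H_list]
                    for rho in rhos])
    return Phi @ Phi.T          # a Gram matrix: PSD by construction
```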
While Equation (3) takes a similar form to that of the kernel regression, the matrix \(\mathbf{K}\) is _dependent_ on the parameterized measurement \(\mathbf{M}(\boldsymbol{\theta})\). This is a consequence of the unitarity: consider an alternative parameterization, where the objective function \(\mathbf{L}(\mathbf{M})=\frac{1}{2m}\sum_{j=1}^{m}\big{(}\operatorname{tr}( \boldsymbol{\rho}_{j}\mathbf{M})-y_{j}\big{)}^{2}\) is optimized over all Hermitian matrices \(\mathbf{M}\). It can be easily verified that the corresponding dynamics is exactly the kernel regression with \(K_{ij}=\operatorname{tr}(\boldsymbol{\rho}_{i}\boldsymbol{\rho}_{j})\).
Due to the unitarity of the evolution of quantum states, the spectrum of eigenvalues of the parameterized measurement \(\mathbf{M}(\boldsymbol{\theta})\) is required to remain the same throughout training. In the proof of Lemma 3.1 (deferred to Section A.1 in the appendix), we see that the derivative of \(\mathbf{M}(\boldsymbol{\theta})\) takes the form of a linear combination of commutators \(i[\mathbf{A},\mathbf{M}(\boldsymbol{\theta})]\) for some Hermitian \(\mathbf{A}\). As a result, the traces of the \(k\)-th matrix powers \(\operatorname{tr}(\mathbf{M}^{k}(\boldsymbol{\theta}))\) are constants of time for any integer \(k\), since \(d\operatorname{tr}(\mathbf{M}^{k}(\boldsymbol{\theta}))/dt=k\operatorname{tr}( \mathbf{M}^{k-1}(\boldsymbol{\theta})d\mathbf{M}(\boldsymbol{\theta})/dt)=k \operatorname{tr}(\mathbf{M}^{k-1}(\boldsymbol{\theta})i[\mathbf{A},\mathbf{M}( \boldsymbol{\theta})])=0\) for any Hermitian \(\mathbf{A}\). The spectrum of eigenvalues remains unchanged because the coefficients of the characteristic polynomials of \(\mathbf{M}(\boldsymbol{\theta})\) is completely determined by the traces of matrix powers. On the contrary, the eigenvalues are in general not preserved for \(\mathbf{M}\) evolving under the kernel regression.
Another consequence of the unitarity constraint is that a QNN can not make predictions outside the range of the eigenvalues of \(\mathbf{M}_{0}\), while for the kernel regression with a strictly positive definite kernel, the model can (over-)fit training sets with arbitrary label assignments. Here we further show that the unitarity is pronounced in a typical QNN instance where the predictions are within the range of the measurement.
Sublinear convergence in QNNsOne of the most common choices for designing QNNs is to use a (tensor product of) Pauli matrices as the measurement (see e.g. Farhi et al. (2020); Dunjko and Briegel (2018)). Such a choice features a measurement \(\mathbf{M}_{0}\) with eigenvalues \(\{\pm 1\}\) and trace zero. Here we show that in the setting of supervised learning on pure states with Pauli measurements, the (neural tangent) kernel regression is insufficient to capture the convergence of QNN training. For the kernel regression with a positive definite kernel \(\mathbf{K}\), the objective function \(L\) can be expressed as \(\frac{1}{2m}\sum_{j=1}^{m}(\hat{y}_{j}-y_{j})^{2}=\frac{1}{2m}\mathbf{r}^{T} \mathbf{r}\); under the kernel dynamics of \(\frac{d\mathbf{r}}{dt}=-\frac{\eta}{m}\mathbf{K}\mathbf{r}\), it is easy to verify that \(\frac{d\ln L}{dt}=-\frac{2\eta}{m}\frac{\mathbf{r}^{T}\mathbf{K}\mathbf{r}}{ \mathbf{r}^{T}\mathbf{r}}\leq-\frac{2\eta}{m}\lambda_{\min}(\mathbf{K})\) with \(\lambda_{\min}(\mathbf{K})\) being the smallest eigenvalue of \(\mathbf{K}\). This indicates that \(L\) decays at a linear rate, i.e. \(L(T)\leq L(0)\exp(-\frac{2\eta}{m}\lambda_{\min}(\mathbf{K})T)\). In contrast, we show that the rate of convergence of the QNN dynamics _must_ be sublinear, slower than the linear convergence rate predicted by the kernel regression model with a positive definite kernel.
**Theorem 3.2** (No faster than sublinear convergence).: _Consider a QNN instance with a training set \(\mathcal{S}=\{(\boldsymbol{\rho}_{j},y_{j})\}\) such that \(\boldsymbol{\rho}_{j}\) are pure states and \(y_{j}\in\{\pm 1\}\), and a measurement \(\mathbf{M}_{0}\) with eigenvalues in \(\{\pm 1\}\). Under the gradient flow for the objective function \(L(\boldsymbol{\theta})=\frac{1}{2m}\sum_{j=1}^{m}\big{(}\operatorname{tr}(\boldsymbol{\rho}_{j}\mathbf{M}(\boldsymbol{\theta}))-y_{j}\big{)}^{2}\), for any ansatz \(\mathbf{U}(\boldsymbol{\theta})\) defined in Line (1), \(L\) converges to zero at most at a sublinear convergence rate. More concretely, for \(\mathbf{U}(\boldsymbol{\theta})\) generated by \(\{\mathbf{H}^{(l)}\}_{l=1}^{p}\), let \(\eta\) be the learning rate and \(m\) the sample size; then the objective function at time \(t\) satisfies_
\[L(\boldsymbol{\theta}(t))\geq 1/(c_{0}+c_{1}t)^{2}. \tag{5}\]
_Here the constant \(c_{0}=1/\sqrt{L(\boldsymbol{\theta}(0))}\) depends on the objective function at initialization, and \(c_{1}=12\eta\sum_{l=1}^{p}\left\|\mathbf{H}^{(l)}\right\|_{\mathsf{op}}^{2}\)._
The constant \(c_{1}\) in the theorem depends on the number of parameters \(p\) through \(\sum_{l=1}^{p}\left\|\mathbf{H}^{(l)}\right\|_{\mathsf{op}}^{2}\) if the operator norm of \(\mathbf{H}^{(l)}\) is a constant of \(p\). We can get rid of the dependency on \(p\) by scaling the learning rate \(\eta\) or changing the time scale, which does not affect the sublinearity of convergence.
By expressing the objective function \(L(\boldsymbol{\theta}(t))\) as \(\frac{1}{2m}\mathbf{r}(\boldsymbol{\theta}(t))^{T}\mathbf{r}(\boldsymbol{ \theta}(t))\), Lemma 3.1 indicates that the decay of \(\frac{dL(\boldsymbol{\theta}(t))}{dt}\) is lower-bounded by \(\frac{-2\eta}{m}\lambda_{\max}(\mathbf{K}(\boldsymbol{\theta}(t)))L( \boldsymbol{\theta}(t))\), where \(\lambda_{\max}(\cdot)\) is the largest eigenvalue of a Hermitian matrix. The full proof of Theorem 3.2 is deferred to Section A.2, and follows from the fact that when the QNN prediction for an input state \(\boldsymbol{\rho}_{j}\) is close to the ground truth \(y_{j}=1\) or \(-1\), the diagonal entry \(K_{jj}(\boldsymbol{\theta}(t))\) vanishes. As a result the largest eigenvalue \(\lambda_{\max}(\mathbf{K}(\boldsymbol{\theta}(t)))\) also vanishes as the objective function \(L(\boldsymbol{\theta}(t))\) approaches \(0\) (which is the global minima). Notice the sublinearity of convergence is independent of the system dimension \(d\), the choices of \(\{\mathbf{H}^{(l)}\}_{l=1}^{p}\) in \(\mathbf{U}(\boldsymbol{\theta})\) or the number of parameters \(p\). This means that the dynamics of QNN training is completely different from kernel regression even in the limit where \(d\) and/or \(p\to\infty\).
Experiments: sublinear QNN convergenceTo support Theorem 3.2, we simulate the training of QNNs using \(\mathbf{M}_{0}\) with eigenvalues \(\pm 1\). For dimension \(d=32\) and \(64\), we randomly sample four \(d\)-dimensional pure states that are orthogonal, with two of the samples labeled \(+1\) and the other two labeled \(-1\). The training curves (plotted under the log scale) in Figure 1 flatten as \(L\) approaches \(0\), suggesting the rate of convergence \(-d\ln L/dt\) vanishes around global minima, which is a signature of the sublinear convergence. Note that the sublinearity of convergence is independent of the number of parameters \(p\). For gradient flow or gradient descent with sufficiently small step-size, the scaling of a constant learning rate \(\eta\) leads to a scaling of time \(t\) and does not fundamentally change the (sub)linearity of the convergence. For the purpose of visual comparison, we scale \(\eta\) with \(p\) by choosing the learning rate as \(10^{-3}/p\). For more details on the experiments, please refer to Section D.
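The experimental setup above can be reproduced schematically as follows (our sketch; the QR trick yields mutually orthogonal pure states):

```python
import numpy as np

rng = np.random.default_rng(1)

def orthogonal_pure_states(d, m):
    """m mutually orthogonal d-dimensional pure states rho_j = v_j v_j^dag."""
    Z = rng.standard_normal((d, m)) + 1j * rng.standard_normal((d, m))
    Q, _ = np.linalg.qr(Z)                     # columns are orthonormal
    return [np.outer(Q[:, j], Q[:, j].conj()) for j in range(m)]

states = orthogonal_pure_states(32, 4)         # d = 32, four samples
labels = [+1, +1, -1, -1]                      # two per class, as in Figure 1
```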
## 4 Asymptotic Dynamics of QNNs
As demonstrated in the previous section, the dynamics of the QNN training deviates from the kernel regression for any choice of the number of parameters \(p\) and the dimension \(d\) in the setting of Pauli measurements for classification. This calls for a new characterization of the QNN dynamics in the regime of over-parameterization. For a concrete definition of over-parameterization, we consider the family of the periodic ansatze in Definition 1, and refer to the limit of \(p\to\infty\) with a fixed generating Hamiltonian \(\mathbf{H}\) as the regime of over-parameterization. In this section, we derive the asymptotic dynamics of QNN training when the number of parameters \(p\) in the periodic ansatze goes to infinity. We start by decomposing the dynamics of the residual \(\mathbf{r}(\boldsymbol{\theta}(t))\) into a term corresponding to the asymptotic dynamics, and a term of perturbation that vanishes as \(p\to\infty\). As mentioned before, in the context of the gradient flow, the choice of \(\eta\) is merely a scaling of the time and therefore arbitrary. For a QNN instance with \(m\) training samples and a \(p\)-parameter ansatz generated by a Hermitian \(\mathbf{H}\) as defined in Line (2), we choose \(\eta\) to be \(\frac{m}{p}\frac{d^{2}-1}{\operatorname{tr}(\mathbf{H}^{2})}\) to facilitate the presentation:
Figure 1: Sublinear convergence of QNN training. For QNNs with Pauli measurements for a classification task, the (log-scaled) training curves flatten as the number of iterations increases, indicating a sublinear convergence. The flattening of training curves remains for increasing numbers of parameters \(p=10,20,40,80\). The training curves are averaged over \(10\) random initializations, and the error bars are the halves of standard deviations.

**Lemma 4.1** (Decomposition of the residual dynamics).: _Let \(\mathcal{S}\) be a training set with \(m\) samples \(\{(\boldsymbol{\rho}_{j},y_{j})\}_{j=1}^{m}\), and let \(\mathbf{U}(\boldsymbol{\theta})\) be a \(p\)-parameter ansatz generated by a non-zero \(\mathbf{H}\) as in Line (2). Consider a QNN instance with the training set \(\mathcal{S}\), ansatz \(\mathbf{U}(\boldsymbol{\theta})\) and a measurement \(\mathbf{M}_{0}\). Under the gradient flow with \(\eta=\frac{m}{p}\frac{d^{2}-1}{\operatorname{tr}(\mathbf{H}^{2})}\), the residual vector \(\mathbf{r}(t)\) as a function of time \(t\) through \(\boldsymbol{\theta}(t)\) evolves as_
\[\frac{d\mathbf{r}(t)}{dt}=-(\mathbf{K}_{\mathsf{asym}}(t)+\mathbf{K }_{\mathsf{pert}}(t))\mathbf{r}(t) \tag{6}\]
_where both \(\mathbf{K}_{\mathsf{asym}}\) and \(\mathbf{K}_{\mathsf{pert}}\) are functions of time through the parameterized measurement \(\mathbf{M}(\boldsymbol{\theta}(t))\), such that_
\[(\mathbf{K}_{\mathsf{asym}}(t))_{ij} :=\operatorname{tr}\big{(}i[\mathbf{M}(t),\boldsymbol{\rho}_{i}] \ i[\mathbf{M}(t),\boldsymbol{\rho}_{j}]\big{)}, \tag{7}\] \[(\mathbf{K}_{\mathsf{pert}}(t))_{ij} :=\operatorname{tr}\big{(}i[\mathbf{M}(t),\boldsymbol{\rho}_{i}] \otimes i[\mathbf{M}(t),\boldsymbol{\rho}_{j}]\Delta(t)\big{)}. \tag{8}\]
_Here \(\Delta(t)\) is a \(d^{2}\times d^{2}\) Hermitian as a function of \(t\) through \(\boldsymbol{\theta}(t)\)._
Under the random initialization by sampling \(\{\mathbf{U}_{l}\}_{l=1}^{p}\) i.i.d. from the Haar measure over the special unitary group \(SU(d)\), \(\Delta(0)\) concentrates at zero as \(p\) increases. We further show that \(\Delta(t)-\Delta(0)\) has a bounded operator norm decreasing with the number of parameters. This allows us to associate the convergence of the over-parameterized QNN with the properties of \(\mathbf{K}_{\mathsf{asym}}(t)\):
**Theorem 4.2** (Linear convergence of QNN with mean-square loss).: _Let \(\mathcal{S}\) be a training set with \(m\) samples \(\{(\boldsymbol{\rho}_{j},y_{j})\}_{j=1}^{m}\), and let \(\mathbf{U}(\boldsymbol{\theta})\) be a \(p\)-parameter ansatz generated by a non-zero \(\mathbf{H}\) as in Line (2). Consider a QNN instance with the training set \(\mathcal{S}\), ansatz \(\mathbf{U}(\boldsymbol{\theta})\) and a measurement \(\mathbf{M}_{0}\), trained by gradient flow with \(\eta=\frac{m}{p}\frac{d^{2}-1}{\operatorname{tr}(\mathbf{H}^{2})}\). Then for a sufficiently large number of parameters \(p\), if the smallest eigenvalue of \(\mathbf{K}_{\mathsf{asym}}(t)\) is greater than a constant \(C_{0}\), then with high probability over the random initialization of the periodic ansatz, the loss function converges to zero at a linear rate_
\[L(t)\leq L(0)\exp(-\frac{C_{0}t}{2}). \tag{9}\]
We defer the proof to Section B.2. Similar to \(\mathbf{r}(t)\), the evolution of \(\mathbf{M}(t)\) decomposes into an asymptotic term
\[\frac{d}{dt}\mathbf{M}(t)=\sum_{j=1}^{m}r_{j}[\mathbf{M}(t),[ \mathbf{M}(t),\boldsymbol{\rho}_{j}]] \tag{10}\]
and a perturbative term depending on \(\Delta(t)\). Theorem 4.2 allows us to study the behavior of an over-parameterized QNN by simulating/characterizing the asymptotic dynamics of \(\mathbf{M}(t)\), which is significantly more accessible.
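The asymptotic dynamics in Line (10) and the matrix \(\mathbf{K}_{\mathsf{asym}}\) in Equation (7) involve only \(m\) matrices of size \(d\times d\), so they can be integrated directly, e.g. with forward Euler steps (our sketch; the step size and number of steps are arbitrary choices, and the discretization only approximately preserves the spectrum of \(\mathbf{M}\)):

```python
import numpy as np

def comm(A, B):
    return A @ B - B @ A

def simulate_asymptotic(M, rhos, ys, dt=1e-3, steps=20000):
    """Euler integration of dM/dt = sum_j r_j [M, [M, rho_j]]  (Line 10),
    tracking the loss and the smallest eigenvalue of K_asym (Eq. 7)."""
    history = []
    for _ in range(steps):
        r = np.array([y - np.real(np.trace(rho @ M)) for rho, y in zip(rhos, ys)])
        M = M + dt * sum(rj * comm(M, comm(M, rho)) for rj, rho in zip(r, rhos))
        A = [1j * comm(M, rho) for rho in rhos]       # Hermitian commutators
        K = np.real(np.array([[np.trace(Ai @ Aj) for Aj in A] for Ai in A]))
        history.append((0.5 * np.mean(r ** 2), np.linalg.eigvalsh(K)[0]))
    return M, history
```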
Application: QNN with one training sampleTo demonstrate the proposed asymptotic dynamics as a tool for analyzing over-parameterized QNNs, we study the convergence of the QNN with one training sample \(m=1\). To create a separation from the regime of sublinear convergence, consider the following setting: let \(\mathbf{M}_{0}\) be a Pauli measurement; for any input state \(\boldsymbol{\rho}\), instead of assigning \(\hat{y}=\operatorname{tr}(\boldsymbol{\rho}\mathbf{U}(\boldsymbol{\theta})^{\dagger}\mathbf{M}_{0}\mathbf{U}(\boldsymbol{\theta}))\), take \(\gamma\operatorname{tr}(\boldsymbol{\rho}\mathbf{U}(\boldsymbol{\theta})^{\dagger}\mathbf{M}_{0}\mathbf{U}(\boldsymbol{\theta}))\) as the prediction \(\hat{y}\) at \(\boldsymbol{\theta}\), for a scaling factor \(\gamma>1.0\). The \(\gamma\)-scaling of the measurement outcome can
be viewed as a classical processing in the context of quantum information, or as an activation function (or a link function) in the context of machine learning, and is equivalent to a QNN with measurement \(\gamma\mathbf{M}_{0}\). The following corollary implies the convergence of 1-sample QNN for \(\gamma>1.0\) under a mild initial condition:
**Corollary 4.3**.: _Let \(\boldsymbol{\rho}\) be a \(d\)-dimensional pure state, and let \(y\) be \(\pm 1\). Consider a QNN instance with a Pauli measurement \(\mathbf{M}_{0}\), a one-sample training set \(\mathcal{S}=\{(\boldsymbol{\rho},y)\}\) and an ansatz \(\mathbf{U}(\boldsymbol{\theta})\) defined in Line (2). Assume the scaling factor \(\gamma>1.0\) and \(p\to\infty\) with \(\eta=\frac{d^{2}-1}{p\,\mathrm{tr}(\mathbf{H}^{2})}\). Under the initial condition that the prediction at \(t=0\) satisfies \(|\hat{y}(0)|<1\), the objective function converges linearly with_
\[L(t)\leq L(0)\exp(-C_{1}t) \tag{11}\]
_with the convergence rate \(C_{1}\geq\gamma^{2}-1\)._
With a scaling factor \(\gamma\) and training set \(\{(\boldsymbol{\rho}_{j},y_{j})\}_{j=1}^{m}\), the objective function, as a function of the parameterized measurement \(\mathbf{M}(t)\), reads as: \(L(\mathbf{M}(t))=\frac{1}{2m}\sum_{j=1}^{m}(\gamma\,\mathrm{tr}(\boldsymbol{\rho}_{j}\mathbf{M}(t))-y_{j})^{2}\). As stated in Theorem 4.2, for a sufficiently large number of parameters \(p\), the convergence rate of the residual \(\mathbf{r}(t)\) is determined by \(\mathbf{K}_{\mathsf{asym}}(t)\), as the asymptotic dynamics of \(\mathbf{r}(t)\) reads as \(\frac{d}{dt}\mathbf{r}=-\mathbf{K}_{\mathsf{asym}}(\mathbf{M}(t))\mathbf{r}(t)\) with the chosen \(\eta\). For \(m=1\), the asymptotic matrix \(\mathbf{K}_{\mathsf{asym}}\) reduces to a scalar \(k(t)=-\,\mathrm{tr}([\gamma\mathbf{M}(t),\boldsymbol{\rho}]^{2})=2(\gamma^{2}-\hat{y}(t)^{2})\). \(\hat{y}(t)\) approaches the label \(y\) if \(k(t)\) is strictly positive, which is guaranteed for \(|\hat{y}(t)|<\gamma\). Therefore \(|\hat{y}(0)|<1\) implies that \(|\hat{y}(t)|<1\) and \(k(t)\geq 2(\gamma^{2}-1)\) for all \(t>0\).
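The one-sample claim can be checked by integrating the scalar asymptotic dynamics \(d\hat{y}/dt=k(t)(y-\hat{y})\) with \(k(t)=2(\gamma^{2}-\hat{y}^{2}(t))\) (our sketch, using SciPy's ODE solver; the initial value \(0.1\) is an arbitrary choice):

```python
import numpy as np
from scipy.integrate import solve_ivp

gamma, y = 4.0, 1.0                       # scaling factor and label

def rhs(t, yhat):
    """d yhat/dt = k(t) (y - yhat), with k(t) = 2 (gamma^2 - yhat^2)."""
    return 2.0 * (gamma**2 - yhat**2) * (y - yhat)

sol = solve_ivp(rhs, (0.0, 1.0), [0.1], max_step=1e-3)
L = 0.5 * (y - sol.y[0])**2               # one-sample mean-square loss
# The decay rate -d ln L/dt equals 2 k(t) >= 4 (gamma^2 - 1) while
# |yhat| <= 1, i.e. the loss decays at a linear (exponential) rate.
```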
In Figure 2 (top), we plot the training curves of one-sample QNNs with \(p=320\) and varying \(\gamma=1.2,1.4,2.0,4.0,8.0\) with the same learning rate \(\eta=10^{-3}/p\). As predicted in Corollary 4.3, the rate of convergence increases with the scaling factor \(\gamma\). The proof of the corollary additionally implies that \(k(t)\) depends on \(\hat{y}(t)\): the convergence rate changes over time as the prediction \(\hat{y}\) changes. Therefore, despite the linear convergence, the dynamics is different from that of kernel regression, where the kernel remains constant during training in the limit \(p\to\infty\).
In Figure 2 (bottom), we plot the empirical rate of convergence \(-\frac{d}{dt}\ln L(t)\) against the rate predicted by \(\hat{y}\). Each data point is calculated for QNNs with different \(\gamma\) at different time steps by differentiating the logarithms of the training curves. The scatter plot displays an approximately linear dependency, indicating the proposed asymptotic dynamics is capable of predicting how the convergence rate changes during training, which is beyond the explanatory power of the kernel regression model. Note that the slope of the linear relation is not exactly one. This is because we choose a learning rate much smaller than \(\eta\) in the corollary statement to simulate the dynamics of gradient flow.
QNNs with one training sample have been considered before (e.g. Liu et al. (2022)), where the linear convergence has been shown under the assumption of "frozen QNTK", namely assuming that \(\mathbf{K}\) (the negative time derivative of the log residual) remains almost constant throughout training. In the corollary above, we provide an end-to-end proof for the one-sample linear convergence without assuming a frozen \(\mathbf{K}\). In fact, we observe that in our setting \(\mathbf{K}=2(\gamma^{2}-\hat{y}^{2}(t))\) changes with \(\hat{y}(t)\) (see also Figure 2) and is therefore not frozen.
Figure 2: (Top) The training curves of one-sample QNNs with varying \(\gamma\). The smallest convergence rate \(-d\ln L/dt\) during training (i.e. the slope of the training curves under the log scale) increases with \(\gamma\). (Bottom) The convergence rate \(-d\ln L/dt|_{t=T}\) as a function of \(2(\gamma^{2}-\hat{y}^{2}(T))\) (jointly scaled by \(1/\gamma^{2}\) for visualization) are evaluated at different time steps \(T\) for different \(\gamma\). The approximately linear dependency shows that the proposed dynamics captures the QNN convergence beyond the explanatory power of the kernel regressions.
QNN convergence for \(m>1\)To characterize the convergence of QNNs with \(m>1\), we seek to empirically study the asymptotic dynamics in Line (10). According to Theorem 4.2, the (linear) rate of convergence is lower-bounded by the smallest eigenvalue of \(\mathbf{K}_{\mathsf{asym}}(t)\), up to a constant scaling. In Figure 3, we simulate the asymptotic dynamics with various combinations of \((\gamma,d,m)\), and evaluate the smallest eigenvalue of \(\mathbf{K}_{\mathsf{asym}}(t)\) throughout the dynamics (Figure 3, details deferred to Section D). For sufficiently large dimension \(d\), the smallest eigenvalue of \(\mathbf{K}_{\mathsf{asym}}\) depends on the ratio between the number of samples and the system dimension \(m/d\) and is proportional to the square of the scaling factor \(\gamma^{2}\).
Empirically, we observe that the smallest convergence rates for training QNNs are obtained near the global minima (See Figure 6 in the appendix), suggesting the bottleneck of convergence occurs when \(L\) is small.
Figure 3: The smallest eigenvalue of \(\mathbf{K}_{\mathsf{asym}}\) for the asymptotic dynamics with varying system dimension \(d\), scaling factor \(\gamma\) and number of training samples \(m\). For sufficiently large \(d\), the smallest eigenvalue depends on the ratio \(m/d\) and is proportional to the square of the scaling factor \(\gamma^{2}\).

We now give theoretical evidence that, at most of the global minima, the eigenvalues of \(\mathbf{K}_{\mathsf{asym}}\) are lower bounded by \(2\gamma^{2}(1-1/\gamma^{2}-O(m^{2}/d))\), suggesting a linear convergence in the neighborhood of these minima. To make this notion precise, we define the uniform measure over global minima as follows: consider a set of pure input states \(\{\mathbf{\rho}_{j}=\mathbf{v}_{j}\mathbf{v}_{j}^{\dagger}\}_{j=1}^{m}\) that are mutually orthogonal (i.e. \(\mathbf{v}_{i}^{\dagger}\mathbf{v}_{j}=0\) if \(i\neq j\)). For a large dimension \(d\), the global minima of the asymptotic dynamics are achieved when the objective function is \(0\). Let \(\mathbf{u}_{j}(t)\) (resp. \(\mathbf{w}_{j}(t)\)) denote the components of \(\mathbf{v}_{j}\) projected to the positive (resp. negative) subspace of the measurement \(\mathbf{M}(t)\) at the global minima. Recall that for a \(\gamma\)-scaled QNN with a Pauli measurement, the predictions \(\hat{y}_{j}(t)=\gamma\operatorname{tr}(\boldsymbol{\rho}_{j}\mathbf{M}(t))=\gamma(\mathbf{u}_{j}^{\dagger}(t)\mathbf{u}_{j}(t)-\mathbf{w}_{j}^{\dagger}(t)\mathbf{w}_{j}(t))\). At the global minima, we have \(\mathbf{u}_{j}(t)=\sqrt{\tfrac{1}{2}(1\pm 1/\gamma)}\,\hat{\mathbf{u}}_{j}(t)\) for some unit vector \(\hat{\mathbf{u}}_{j}(t)\) for the \(j\)-th training sample with label \(\pm 1\). On the other hand, given a set of unit vectors \(\{\hat{\mathbf{u}}_{j}\}_{j=1}^{m}\) in the positive subspace, there is a corresponding set of \(\{\mathbf{u}_{j}(t)\}_{j=1}^{m}\) and \(\{\mathbf{w}_{j}(t)\}_{j=1}^{m}\) such that \(L=0\) for sufficiently large \(d\). By uniformly and independently sampling a set of unit vectors \(\{\hat{\mathbf{u}}_{j}\}_{j=1}^{m}\) from the \(d/2\)-dimensional subspace associated with the positive eigenvalues of \(\mathbf{M}(t)\), we induce a uniform distribution over all the global minima. The next theorem characterizes \(\mathbf{K}_{\mathsf{asym}}\) under such an induced uniform distribution over all the global minima:
**Theorem 4.4**.: _Let \(\mathcal{S}=\{(\boldsymbol{\rho}_{j},y_{j})\}_{j=1}^{m}\) be a training set with orthogonal pure states \(\{\boldsymbol{\rho}_{j}\}_{j=1}^{m}\) and equal number of positive and negative labels \(y_{j}\in\{\pm 1\}\). Consider the smallest eigenvalue \(\lambda_{g}\) of \(\mathbf{K}_{\mathsf{asym}}\) at the global minima of the asymptotic dynamics of an over-parameterized QNN with the training set \(\mathcal{S}\), scaling factor \(\gamma\) and system dimension \(d\). With probability \(\geq 1-\delta\) over the uniform measure over all the global minima_
\[\lambda_{g}\geq 2\gamma^{2}(1-\frac{1}{\gamma^{2}}-C_{2}\max\{\frac{m^{2}}{d}, \frac{m}{d}\log\frac{2}{\delta}\}), \tag{12}\]
_which is strictly positive for large \(\gamma>1\) and \(d=\Omega(\mathsf{poly}(m))\). Here \(C_{2}\) is a positive constant._
We defer the proof of Theorem 4.4 to Section C in the appendix. A similar notion of a uniform measure over global minima was also used in Canatar et al. (2021). Notice that this uniformity depends on the parameterization of the global minima, and the uniform measure over all the global minima is not necessarily the measure induced by random initialization and gradient-based training. Therefore Theorem 4.4 is not a rigorous depiction of the distribution of convergence rate for a randomly-initialized over-parameterized QNN. Yet the prediction of the theorem aligns well with the empirical observations in Figure 3 and suggests that by scaling the QNN measurements, a faster convergence can be achieved: In Figure 4, we simulate \(p\)-parameter QNNs with dimension \(d=32\) and \(64\) with a scaling factor \(\gamma=4.0\) using the same setup as in Figure 1. The training early stops when the average \(L(t)\) over the random seeds is less than \(1\times 10^{-2}\). In contrast to Figure 1, the convergence rate \(-d\ln L/dt\) does not vanish as \(L\to 0\), suggesting a simple (constant) scaling of the measurement outcome can lead to convergence within a much smaller number of iterations.
Another implication of Theorem 4.4 is the deviation of QNN dynamics from any kernel regression. By a straightforward calculation, the normalized matrix \(\mathbf{K}_{\mathsf{asym}}(0)/\gamma^{2}\) at the random initialization is independent of the choice of \(\gamma\). In contrast, the typical value of \(\lambda_{g}/\gamma^{2}\) in Theorem 4.4 is dependent on \(\gamma^{2}\), suggesting non-negligible changes in the matrix \(\mathbf{K}_{\mathsf{asym}}(t)\) governing the dynamics of \(\mathbf{r}\) for finite scaling factors \(\gamma\). This phenomenon is empirically verified in Figure 5 in the appendix.
## 5 Limitations and Outlook
In the setting of \(m>1\), the proof of the linear convergence of QNN training (Section 4) relies on the convergence of the asymptotic QNN dynamics as a premise. Given our empirical
results, an interesting future direction might be to rigorously characterize the condition for the convergence of the asymptotic dynamics. Also, we mainly consider (variants of) two-outcome measurements \(\mathbf{M}_{0}\) with two eigensubspaces. It might be interesting to look into measurements with more complicated spectra and see how the shapes of the spectra affect the rates of convergence.
A QNN for learning a classical dataset is composed of three parts: a classical-to-quantum encoder, a quantum classifier and a readout measurement. Here we have mainly focused on the stage after encoding, i.e. training a QNN classifier to manipulate the density matrices containing classical information that are potentially too costly for a classically-implemented linear model. Our analysis highlights the necessity for measurement design, assuming the design of the quantum classifier mixes to the full \(d\times d\) special unitary group. Our result can be combined with existing techniques of classifier designs (i.e. ansatz design) (Ragone et al. (2022); Larocca et al. (2021); Wang et al. (2022); You et al. (2022)) by engineering the invariant subspaces, or be combined with encoder designs explored in (Huang et al., 2021; Du et al., 2022).
## Acknowledgements
We thank E. Anschuetz, B. T. Kiani, J. Liu and anonymous reviewers for useful comments. This work received support from the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, Accelerated Research in Quantum Computing and Quantum Algorithms Team programs, as well as the U.S. National Science Foundation grant CCF-1816695, and CCF-1942837 (CAREER).
Figure 4: Training curves of QNNs with \(\gamma=4.0\) for learning a 4-sample dataset with labels \(\pm 1\). For \(p=10,20,40,80\), the rate of convergence is greater than 0 as \(L\to 0\), and it takes less than 1000 iterations for \(L\) in most of the instances to converge below \(1\times 10^{-2}\). In contrast, in Figure 1, \(L>1\times 10^{-1}\) after 10000 iterations despite the increasing number of parameters.
## Disclaimer
This paper was prepared with synthetic data and for informational purposes by the teams of researchers from the various institutions identified above, including the Global Technology Applied Research Center of JPMorgan Chase & Co. This paper is not a product of the JPMorgan Chase Institute. Neither JPMorgan Chase & Co. nor any of its affiliates make any explicit or implied representation or warranty and none of them accept any liability in connection with this paper, including, but not limited to, the completeness, accuracy, reliability of information contained herein and the potential legal, compliance, tax or accounting effects thereof. This document is not intended as investment research or investment advice, or a recommendation, offer or solicitation for the purchase or sale of any security, financial instrument, financial product or service, or to be used in any way for evaluating the merits of participating in any transaction.
|
2310.11804 | Physics-informed neural network for acoustic resonance analysis in a one-dimensional acoustic tube | This study devised a physics-informed neural network (PINN) framework to solve the wave equation for acoustic resonance analysis. The proposed analytical model, ResoNet, minimizes the loss function for periodic solutions and conventional PINN loss functions, thereby effectively using the function approximation capability of neural networks while performing resonance analysis. Additionally, it can be easily applied to inverse problems. The resonance in a one-dimensional acoustic tube was analyzed, and the effectiveness of the proposed method was validated through the forward and inverse analyses of the wave equation with energy-loss terms. In the forward analysis, the applicability of PINN to the resonance problem was evaluated via comparison with the finite-difference method. The inverse analysis, which included identifying the energy loss term in the wave equation and design optimization of the acoustic tube, was performed with good accuracy. | Kazuya Yokota, Takahiko Kurahashi, Masajiro Abe | 2023-10-18T08:52:10Z | http://arxiv.org/abs/2310.11804v4 | # Physics-informed Neural Network for Acoustic Resonance Analysis
###### Abstract
This study proposes the physics-informed neural network (PINN) framework to solve the wave equation for acoustic resonance analysis. ResoNet, the analytical model proposed in this study, minimizes the loss function for periodic solutions, in addition to conventional PINN loss functions, thereby effectively using the function approximation capability of neural networks, while performing resonance analysis. Additionally, it can be easily applied to inverse problems. Herein, the resonance in a one-dimensional acoustic tube was analyzed. The effectiveness of the proposed method was validated through the forward and inverse analyses of the wave equation with energy-loss terms. In the forward analysis, the applicability of PINN to the resonance problem was evaluated by comparison with the finite-difference method. The inverse analysis, which included the identification of the energy loss term in the wave equation and design optimization of the acoustic tube, was performed with good accuracy.
[The following article has been submitted to the Journal of the Acoustical Society of America. After it is published, it will be found at [https://pubs.aip.org/asa/jasa](https://pubs.aip.org/asa/jasa).]
+
Footnote †: Corresponding author.
PINN is expected to be applicable in various acoustics fields, including the analysis of speech production and the design optimization of acoustic equipment.
In this study, we propose ResoNet, a PINN that analyzes acoustic resonance in the time domain based on the wave equation while effectively utilizing the function approximation capability of neural networks [32] by training the neural network to minimize the loss function with respect to periodic solutions. The main contributions of this study are as follows. (i) This study proposes a novel framework for analyzing resonances in the time domain using PINN; (ii) it presents a detailed investigation of the applicability of PINN to acoustic resonance analysis, and (iii) it presents an investigation of the performance of inverse problem analysis on acoustic resonance phenomena.
The remainder of this paper is organized as follows. Section II describes the one-dimensional wave equation with energy loss terms, and the acoustic field setup analyzed in this study. Section III describes ResoNet, which is a PINN that analyzes resonances based on the wave equation described in Section II. Section IV describes the forward and inverse analyses using ResoNet and its performance. Section V summarizes the study and discusses the application potential of PINN in acoustic resonance.
## II Governing equations of acoustic resonance
This section describes the wave equation and the acoustic field setup analyzed in this study.
### One-dimensional wave equation with energy loss terms
We considered the propagation of plane sound waves in an acoustic tube of length \(l\) and circular cross-sectional area of \(A(x)\), as shown in Fig. 1, with \(x\) as the axial direction. Let the sound pressure in the acoustic tube be \(p\) and air volume velocity be \(u\). Assuming \(p=Pe^{j\omega t}\) and \(u=Ue^{j\omega t}\), the telegrapher's equations for the acoustic tube considering energy loss are as follows [33].
\[\frac{dU}{dx} = -\left(G+j\omega\frac{A}{K}\right)P, \tag{1}\] \[\frac{dP}{dx} = -\left(R+j\omega\frac{\rho}{A}\right)U, \tag{2}\]
where \(G\) is the coefficient of energy loss owing to thermal conduction at the tube wall; \(R\) is the coefficient of energy loss owing to viscous friction at the tube wall; \(j\) is the imaginary unit; \(\omega\) is the angular velocity; \(K\) is the bulk modulus; and \(\rho\) is the air density. Equations (1) and (2) can be expressed in the time domain, as follows.
\[\frac{\partial u}{\partial x} = -Gp-\frac{A}{K}\frac{\partial p}{\partial t}, \tag{3}\] \[\frac{\partial p}{\partial x} = -Ru-\frac{\rho}{A}\frac{\partial u}{\partial t}. \tag{4}\]
The velocity potential \(\phi\) is defined as:
\[u = -A\frac{\partial\phi}{\partial x}, \tag{5}\] \[p = RA\phi+\rho\frac{\partial\phi}{\partial t}. \tag{6}\]
From Eqs. (3)-(6), we obtain the following wave equation with energy loss terms.
\[\frac{\partial^{2}\phi}{\partial x^{2}}+\frac{1}{A}\frac{\partial A}{ \partial x}\frac{\partial\phi}{\partial x}=GR\phi+\left(\frac{G\rho}{A}+\frac {RA}{K}\right)\frac{\partial\phi}{\partial t}+\frac{\rho}{K}\frac{\partial^{2 }\phi}{\partial t^{2}}. \tag{7}\]
Detailed derivation of Eq. (7) is provided in Appendix A.
Theoretical solutions for \(R\) and \(G\) have been proposed, as follows, under the assumptions that the wall surface is rigid and thermal conductivity is infinite [33].
\[R = \frac{S}{A^{2}}\sqrt{\frac{\omega_{c}\rho\mu}{2}}, \tag{8}\] \[G = S\frac{\eta-1}{\rho c^{2}}\sqrt{\frac{\lambda\omega_{c}}{2c_{p} \rho}}, \tag{9}\]
where \(S\) is the circumference of the acoustic tube; \(\mu\), \(\eta\), \(c\), \(\lambda\), and \(c_{p}\) are the viscosity coefficient, heat-capacity ratio, speed of sound, thermal conductivity, and specific heat at constant pressure, respectively; and \(\omega_{c}\) is the angular velocity used to calculate the energy loss terms. As \(R\) and \(G\) vary with the physical properties of air and the condition of the walls of the acoustic tube, these parameters are generally unknown; in practice, the energy loss parameters of an acoustic tube must be measured experimentally [34]. Additionally, as demonstrated in the field of vocal tract analysis, we have experimentally confirmed that the energy loss parameters can be approximated as frequency-independent constants in a limited frequency range [34]. Therefore, in this study, \(\omega_{c}\) was set as constant, and consequently, \(R\) and \(G\) were assumed to be frequency-independent constants.
### Acoustic field and boundary conditions
The acoustic field analyzed in this study is shown in light blue in Fig. 1. The acoustic tube was straight, and
Figure 1: (Color online) Acoustic tube.
a forced-flow boundary condition was given at \(x=0\). It had an open end with an infinite planar baffle at \(x=l\), and the boundary condition was given by the equivalent circuit described in Section II.3.
The above boundary conditions are fundamental in human speech production [24, 25] and in the acoustic analysis of brass instruments [35]. In this study, we used these boundary conditions to investigate the performance of resonance analysis with PINN.
### Modeling of radiation
This section describes the modeling of the radiation at \(x=l\) in Fig. 1. Assuming that the particle velocity \(u_{l}\) at the open end is uniform, air at the open end can be regarded as a planar sound source. In response to the acoustic radiation, the plane receives sound pressure \(p_{l}\) from outside the acoustic tube. If the open end is surrounded by an infinite planar baffle, the relationship between \(u_{l}\) and \(p_{l}\) can be approximated using the equivalent circuit shown in Fig. 2[24]. In Fig. 2, the volume velocity \(u_{l}\) corresponds to the current and the sound pressure \(p_{l}\) to the voltage. The equations connecting \(u_{l}\) and \(p_{l}\) are as follows.
\[\left(u_{l}-u_{r}\right)R_{r} = L_{r}\frac{du_{r}}{dt}, \tag{10}\] \[p_{l} = \left(u_{l}-u_{r}\right)R_{r}, \tag{11}\]
where \(R_{r}\) is the circuit resistance; \(L_{r}\) is the circuit reactance; and \(u_{r}\) is the current (corresponding to the volume velocity) flowing through the coil side. Here, \(R_{r}\) and \(L_{r}\) are expressed as follows.
\[R_{r} = \frac{128\rho c}{9\pi^{2}A_{r}}, \tag{12}\] \[L_{r} = \frac{8\rho}{3\pi\sqrt{\pi A_{r}}}, \tag{13}\]
where \(A_{r}\) denotes the cross-sectional area at \(x=l\).
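To make the radiation load concrete, the following sketch evaluates \(R_{r}\) and \(L_{r}\) for the 10 mm tube used later in Section IV and advances Eqs. (10) and (11) by one explicit Euler step. This is only an illustration of the circuit dynamics under an assumed time step; in ResoNet itself, \(u_{r}\) is produced by a second neural network (Section III).

```python
import numpy as np

rho, c = 1.20, 340.0                # air density [kg/m^3], speed of sound [m/s]
A_r = np.pi * (10e-3 / 2) ** 2      # open-end cross-sectional area [m^2]

R_r = 128 * rho * c / (9 * np.pi ** 2 * A_r)        # circuit resistance, Eq. (12)
L_r = 8 * rho / (3 * np.pi * np.sqrt(np.pi * A_r))  # circuit reactance, Eq. (13)

def circuit_step(u_l, u_r, dt):
    """One forward-Euler step of Eqs. (10)-(11).

    Given the tube-end volume velocity u_l and the coil-side volume
    velocity u_r, returns (u_r at the next step, radiated pressure p_l)."""
    p_l = (u_l - u_r) * R_r            # Eq. (11)
    u_r_next = u_r + dt * p_l / L_r    # Eq. (10): L_r du_r/dt = (u_l - u_r) R_r
    return u_r_next, p_l
```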
## III Proposed method
This section describes the structure of ResoNet, a PINN for analyzing the acoustic resonance, and a training method for the neural network.
### Overview of ResoNet
The proposed neural network structure for the resonance analysis in acoustic tubes is shown in Fig. 3; we refer to this architecture as ResoNet. As shown in Fig. 3, ResoNet has two blocks of neural networks.
The first is a network that calculates the solutions to the wave equation in Eq. (7), taking \(x_{i}\) and \(t_{i}\) as inputs (where \(i\) is the sample number) to predict the velocity potential \(\hat{\phi}_{i}\). In this study, this input-output relationship is expressed in the following equation.
\[\hat{\phi}_{i}=F_{w}\left(x_{i},t_{i};\Theta_{w}\right), \tag{14}\]
where \(F_{w}\) is the operator of the neural network for the wave equation and \(\Theta_{w}\) is the set of trainable parameters of the neural network.
The other is a network for calculating the acoustic radiation, utilizing \(t_{i}\) as the input for predicting \(\hat{u}_{ri}\) by calculating the solution of the equivalent circuit in Fig. 2. The input-output relationship is expressed as:
\[\hat{u}_{ri}=F_{r}\left(t_{i};\Theta_{r}\right), \tag{15}\]
where \(F_{r}\) is the operator of the neural network for acoustic radiation analysis and \(\Theta_{r}\) is the set of trainable parameters of the neural network.
Automatic differentiation is performed on \(\hat{\phi}_{i}\) and \(\hat{u}_{ri}\), and the various loss functions shown on the right side of Fig. 3 are defined. The traditional PINN uses the PDE and boundary condition (BC) loss functions. The PDE loss introduces the constraints of Eq. (7), and the BC loss introduces constraints owing to the boundary conditions into the neural network. We used periodicity loss and coupling loss as the loss functions for ResoNet, where periodicity loss introduced constraints owing to the resonance periodicity, and coupling loss introduced constraints owing to the coupling of the wave equation and the external system (acoustic radiation in this study) into the neural network. Each item is described in detail in the following sections.
### Neural network blocks
As shown in Fig. 3, ResoNet has two neural network blocks: one calculates the solution of the wave equation in Eq. (7), whereas the other calculates the solution for the acoustic radiation circuit in Fig. 2.
## 1 Neural network for wave equation
This network uses \(x_{i}\) and \(t_{i}\) as inputs, and predicts the velocity potential \(\hat{\phi}_{i}\) for the wave equation. Initially, two-channel data (\(x_{i}\), \(t_{i}\)) are fed to "Input FC layer A" which is a fully connected layer [36] that outputs \(N_{f}\) channel data as shown in Fig. 3. An activation layer is present immediately afterward, and it is given by:
\[f(a)=a+\sin^{2}a, \tag{16}\]
where \(a\) denotes the input to the layer. This activation function is called Snake and has been reported to be effective for learning periodic functions [37].
Figure 2: Equivalent circuit of acoustic radiation.
After the first activation layer, the data are fed into the "FC block." All FC blocks in Fig. 3 have the same structure, whose details are shown in Fig. 4. The fully connected layer in Fig. 4 has \(N_{f}\) input channels and \(N_{f}\) output channels, and the activation layer is the Snake of Eq. (16). To circumvent the vanishing gradient problem [38], a residual connection [39] is applied before the last activation layer.
After \(N_{b}\) FC blocks, the data is fed into the "Output FC layer" which is a fully connected layer with \(N_{f}\) input channels and one output channel that predicts the \(\hat{\phi}_{i}\) value of the solution for the wave equation.
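For concreteness, the following PyTorch sketch implements one plausible reading of this branch; the paper's implementation is in MATLAB, and the exact layer ordering inside the FC block (FC \(\rightarrow\) Snake \(\rightarrow\) residual addition \(\rightarrow\) Snake) is our interpretation of Fig. 4.

```python
import torch
import torch.nn as nn

class Snake(nn.Module):
    """Snake activation f(a) = a + sin^2(a), Eq. (16)."""
    def forward(self, a):
        return a + torch.sin(a) ** 2

class FCBlock(nn.Module):
    """FC block of Fig. 4: a residual connection precedes the last activation."""
    def __init__(self, n_f):
        super().__init__()
        self.fc, self.act1, self.act2 = nn.Linear(n_f, n_f), Snake(), Snake()

    def forward(self, h):
        return self.act2(h + self.act1(self.fc(h)))

class WaveNet(nn.Module):
    """Wave-equation branch F_w(x, t; Theta_w) of Fig. 3."""
    def __init__(self, n_f=200, n_b=5):   # values used in the forward analysis
        super().__init__()
        self.inp = nn.Linear(2, n_f)      # "Input FC layer A"
        self.act = Snake()
        self.blocks = nn.Sequential(*[FCBlock(n_f) for _ in range(n_b)])
        self.out = nn.Linear(n_f, 1)      # "Output FC layer"

    def forward(self, x, t):
        h = self.act(self.inp(torch.stack([x, t], dim=-1)))
        return self.out(self.blocks(h)).squeeze(-1)  # predicted phi-hat
```

The radiation branch of Section III.2.2 is identical except that "Input FC layer B" maps the single channel \(t_{i}\) to \(N_{f}\) channels.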
## 2 Neural network for acoustic radiation analysis
This network uses \(t_{i}\) as the input, and predicts the \(\hat{u}_{ri}\) value of the equivalent circuit shown in Fig. 2. The only difference from the neural network for the wave equation (described in Section III.2.1) is that the number of input channels is one (\(t_{i}\) only) and the "Input FC layer B" is a fully connected layer with one input channel and \(N_{f}\) output channels. All the other structures are identical to those described in Section III.2.1.
### Loss functions
The loss function of ResoNet is calculated as the sum of the following partial loss functions: traditional PINN losses (PDE loss and BC loss), periodicity loss, and coupling loss.
## 1 Traditional PINN losses
For PDE loss, the output \(\hat{\phi}_{i}\) of the neural network is defined as:
\[\hat{\phi}_{i,E}:=F_{w}\left(x_{i},t_{i};\Theta_{w}\right),\quad x_{i}\in \left[0,l\right],\quad t_{i}\in\left[0,T\right], \tag{17}\]
where \(T\) is the simulation time (one resonance period in this study). For \(\hat{\phi}_{i,E}\) to follow Eq. (7), the PDE loss is defined as:
\[L_{E} =\frac{1}{N_{E}}\sum_{i=1}^{N_{E}}\left\{\frac{\partial^{2}\hat{ \phi}_{i,E}}{\partial x_{i}^{2}}+\frac{1}{A_{i}}\frac{\partial A_{i}}{ \partial x_{i}}\frac{\partial\hat{\phi}_{i,E}}{\partial x_{i}}-G_{i}R_{i} \hat{\phi}_{i,E}\right.\] \[\quad\left.-\left(\frac{G_{i}\rho}{A_{i}}+\frac{R_{i}A_{i}}{K} \right)\frac{\partial\hat{\phi}_{i,E}}{\partial t_{i}}-\frac{\rho}{K}\frac{ \partial^{2}\hat{\phi}_{i,E}}{\partial t_{i}^{2}}\right\}^{2}, \tag{18}\]
where \(N_{E}\) is the number of collocation points for the PDE loss. The partial differential values of \(\hat{\phi}_{i,E}\) required to calculate Eq. (18) are obtained via automatic differentiation [38] of the neural network.
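A minimal PyTorch sketch of this computation, assuming the `WaveNet` sketch above and coefficients passed in as numbers or tensors broadcastable against `x` (for the straight tube of Section IV, `dAdx = 0`):

```python
import torch

def grad(y, v):
    """dy/dv by automatic differentiation, keeping the graph for higher orders."""
    return torch.autograd.grad(y, v, torch.ones_like(y), create_graph=True)[0]

def pde_residual(model, x, t, A, dAdx, G, R, rho, K):
    """Residual of the lossy wave equation, Eq. (7), at collocation points."""
    x = x.clone().requires_grad_(True)
    t = t.clone().requires_grad_(True)
    phi = model(x, t)
    phi_x, phi_t = grad(phi, x), grad(phi, t)
    phi_xx, phi_tt = grad(phi_x, x), grad(phi_t, t)
    return (phi_xx + (dAdx / A) * phi_x
            - G * R * phi
            - (G * rho / A + R * A / K) * phi_t
            - (rho / K) * phi_tt)

# The PDE loss of Eq. (18) is then the mean squared residual:
# L_E = pde_residual(model, x_E, t_E, A, 0.0, G, R, rho, K).pow(2).mean()
```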
Figure 4: (Color online) Details of FC block.
Figure 3: (Color online) Structure of ResoNet.
Similarly, \(\hat{\phi}_{i}\) must follow boundary conditions. As described in Section II.2, the boundary condition in this study at \(x=0\) is given by the forced flow velocity. For the BC loss, the output \(\hat{\phi}_{i}\) of the neural network is defined as:
\[\hat{\phi}_{i,B}:=F_{w}\left(x_{0},t_{i};\Theta_{w}\right),\quad t_{i}\in\left[ 0,T\right], \tag{19}\]
where \(x_{0}=0\). The loss function \(L_{B}\) with respect to the boundary condition is defined as:
\[L_{B}=\frac{1}{N_{B}}\sum_{i=1}^{N_{B}}\left(\hat{u}_{i,B}-\bar{u}_{i,B}\right) ^{2}, \tag{20}\]
where \(N_{B}\) is the number of collocation points for the BC loss and \(\bar{u}_{i,B}\) is the volume flow velocity data given as the boundary condition. Based on Eq. (5), \(\hat{u}_{i,B}\) is calculated from \(\hat{\phi}_{i,B}\) using the following equation.
\[\hat{u}_{i}=-A_{i}\frac{\partial\hat{\phi}_{i}}{\partial x_{i}}. \tag{21}\]
## 2 Periodicity loss
This section explains the core idea of ResoNet, the loss function with respect to periodicity. When performing time-domain resonance analysis, the transient state must be included until a steady state is reached. As mentioned in Section I, because sound waves have complex dynamics, a large-scale neural network is required to perform long simulations, which poses challenges in terms of the computational cost and neural network convergence.
In the acoustic resonance analysis, the object of interest is one period in the steady state, as shown in Fig. 5. Therefore, in ResoNet, only one period was analyzed and the function approximation capability of the neural network was effectively used. For this purpose, we proposed a "periodicity loss," which forces the network output \(\hat{\phi}_{i}\) to take the same value at \(t=0\) and \(t=T\), as it does in the steady state. The following describes the procedure for calculating the periodicity loss.
First, the output \(\hat{\phi}_{i}\) of the neural network for the wave equation is defined using the following two equations.
\[\hat{\phi}_{i,P1} :=F_{w}\left(x_{i},t_{0};\Theta_{w}\right),\] \[\hat{\phi}_{i,P2} :=F_{w}\left(x_{i},t_{T};\Theta_{w}\right),\quad x_{i}\in\left[0, l\right], \tag{22}\]
where \(t_{0}=0\) and \(t_{T}=T\) (\(T\): period); and \(\hat{\phi}_{i,P1}\) and \(\hat{\phi}_{i,P2}\) are the values of the output \(\hat{\phi}_{i}\) for times \(t=0\) and \(t=T\) at the same position \(x_{i}\), respectively.
Based on Eq. (6), the sound pressure is obtained from \(\hat{\phi}_{i}\) as follows.
\[\hat{p}_{i}=R_{i}A_{i}\hat{\phi}_{i}+\rho\frac{\partial\hat{\phi}_{i}}{ \partial t_{i}}. \tag{23}\]
The volume velocity is obtained from \(\hat{\phi}_{i}\) using Eq. (21). Let \(\hat{u}_{i,P1}\) and \(\hat{p}_{i,P1}\) be the volume velocity and sound pressure obtained from \(\hat{\phi}_{i,P1}\), and let \(\hat{u}_{i,P2}\) and \(\hat{p}_{i,P2}\) be those obtained from \(\hat{\phi}_{i,P2}\), respectively. In the steady state, if the input \(x_{i}\) (position) is the same, the continuity condition must hold between \(\hat{p}_{i,P1}\) and \(\hat{p}_{i,P2}\) and between \(\hat{u}_{i,P1}\) and \(\hat{u}_{i,P2}\). Therefore, the following loss functions are defined to force the neural network to enforce the conditions \(\hat{p}_{i,P1}=\hat{p}_{i,P2}\) and \(\hat{u}_{i,P1}=\hat{u}_{i,P2}\).
\[L_{u} =\frac{1}{N_{P}}\sum_{i=1}^{N_{P}}\left(\hat{u}_{i,P1}-\hat{u}_{i,P2}\right)^{2} \tag{24}\] \[L_{p} =\frac{1}{N_{P}}\sum_{i=1}^{N_{P}}\left(\hat{p}_{i,P1}-\hat{p}_{i,P2}\right)^{2}, \tag{25}\]
where \(N_{P}\) is the number of collocation points for the periodicity loss. Additionally, we define the following loss function to enforce the continuous condition of the time derivative.
\[L_{t}=\frac{1}{N_{P}}\sum_{i=1}^{N_{P}}\left(\frac{\partial^{2}\hat{\phi}_{i,P 1}}{\partial t_{0}^{2}}-\frac{\partial^{2}\hat{\phi}_{i,P2}}{\partial t_{T}^{2 }}\right)^{2}. \tag{26}\]
From Eqs. (24)-(26), the periodicity loss \(L_{P}\) proposed in this study is defined as
\[L_{P}=\lambda_{u}L_{u}+\lambda_{p}L_{p}+\lambda_{t}L_{t}, \tag{27}\]
where \(\lambda_{u}\), \(\lambda_{p}\) and \(\lambda_{t}\) are the weight parameters for each term.
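Reusing `grad` and the uniform-tube assumption from the PDE-residual sketch above, the periodicity loss can be written as follows; the weights \(\lambda_{u}\), \(\lambda_{p}\), \(\lambda_{t}\) are left as arguments because their values are not reported here.

```python
def state_at(model, x, t, A, R, rho):
    """Volume velocity (Eq. (21)), pressure (Eq. (23)), and phi_tt at (x, t)."""
    x = x.clone().requires_grad_(True)
    t = t.clone().requires_grad_(True)
    phi = model(x, t)
    phi_x, phi_t = grad(phi, x), grad(phi, t)
    return -A * phi_x, R * A * phi + rho * phi_t, grad(phi_t, t)

def periodicity_loss(model, x, T, A, R, rho, lam_u, lam_p, lam_t):
    """L_P of Eq. (27): u, p, and phi_tt must coincide at t = 0 and t = T."""
    u1, p1, a1 = state_at(model, x, torch.zeros_like(x), A, R, rho)
    u2, p2, a2 = state_at(model, x, torch.full_like(x, T), A, R, rho)
    return (lam_u * (u1 - u2).pow(2).mean()     # Eq. (24)
            + lam_p * (p1 - p2).pow(2).mean()   # Eq. (25)
            + lam_t * (a1 - a2).pow(2).mean())  # Eq. (26)
```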
## 3 Coupling loss
In ResoNet, the coupling condition between the system described by the wave equation and external system is introduced as the coupling loss. In this study, the external system is an acoustic radiation system, as shown in Fig. 2, and the Eqs. (10) and (11) describe coupling. In this section, we describe a method for calculating the coupling loss.
First, we define the output \(\hat{\phi}_{i}\) of the neural network for the wave equation as:
\[\hat{\phi}_{i,C}:=F_{w}\left(x_{l},t_{i};\Theta_{w}\right),\qquad t_{i}\in \left[0,T\right], \tag{28}\]
where \(x_{l}=l\). Let \(\hat{u}_{i,C}\) and \(\hat{p}_{i,C}\) be the volume velocity and sound pressure at \(x=l\), obtained by applying Eqs. (21) and (23) to \(\hat{\phi}_{i,C}\), respectively. Additionally, as defined in Eq. (15), let \(\hat{u}_{ri}\) be the output of the neural
Figure 5: (Color online) Sound pressure waveform at steady state of resonance.
network for acoustic radiation for input \(t_{i}\). Based on Eqs. (10) and (11), the coupling loss \(L_{C}\) is defined as:
\[\begin{split} L_{C}&=\frac{\lambda_{l}}{N_{C}}\sum_{i= 1}^{N_{C}}\left\{\left(\hat{u}_{ri}-\hat{u}_{i,C}\right)R_{r}-L_{r}\frac{d\hat{u} _{ri}}{dt_{i}}\right\}^{2}\\ &\quad+\frac{\lambda_{r}}{N_{C}}\sum_{i=1}^{N_{C}}\left\{\hat{p}_ {i,C}-\left(\hat{u}_{ri}-\hat{u}_{i,C}\right)R_{r}\right\}^{2},\end{split} \tag{29}\]
where \(N_{C}\) is the number of collocation points of the coupling loss; and \(\lambda_{l}\) and \(\lambda_{r}\) are the weight parameters of each term.
## 4 Loss function for the whole network
The loss function \(L_{all}\) for the entire network was calculated as the sum of the traditional PINN losses, periodic loss, and coupling loss, as follows.
\[L_{all}=\lambda_{E}L_{E}+\lambda_{B}L_{B}+\lambda_{P}L_{P}+\lambda_{C}L_{C}, \tag{30}\]
where \(\lambda_{E}\), \(\lambda_{B}\), \(\lambda_{P}\), and \(\lambda_{C}\) denote the weight parameters of the respective loss functions. Finally, the optimization problem of ResoNet was formulated as:
\[\min_{\Theta_{w},\Theta_{r}}L_{all}(\Theta_{w},\Theta_{r}). \tag{31}\]
By minimizing the loss function \(L_{all}\), the trainable parameters \(\Theta_{w}\) and \(\Theta_{r}\) of the neural network were optimized. For this purpose, we used the Adam optimizer[40] to determine the optimal values of \(\Theta_{w}\) and \(\Theta_{r}\) through iterative calculations.
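Schematically, the training loop combines the sketches above as follows. The learning rate and the unit loss weights are assumptions (their values are not reported in the paper), `rad_net` and the BC/coupling losses are elided for brevity, and the collocation tensors `x_E`, `t_E`, `x_P` are generated as described in Section IV.1.

```python
import math
import torch

# Constants for the straight 10 mm tube (Table 1 and Sec. IV.1).
rho, K, T = 1.20, 1.39e5, 3.82e-3
A, dAdx = math.pi * 0.01 ** 2 / 4, 0.0
R, G = 6.99e5, 3.65e-7

wave_net = WaveNet(n_f=200, n_b=5)
opt = torch.optim.Adam(wave_net.parameters(), lr=1e-3)  # lr is an assumption

for epoch in range(20_000):            # epoch count from the forward analysis
    opt.zero_grad()
    L_E = pde_residual(wave_net, x_E, t_E, A, dAdx, G, R, rho, K).pow(2).mean()
    L_P = periodicity_loss(wave_net, x_P, T, A, R, rho, 1.0, 1.0, 1.0)
    # The BC loss L_B (Eq. (20)) and coupling loss L_C (Eq. (29)) are built
    # the same way from u-hat and p-hat and added with their weights, Eq. (30).
    loss = L_E + L_P
    loss.backward()
    opt.step()
```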
### Implementation
We implemented ResoNet using the Deep Learning Toolbox in MATLAB (MathWorks, USA) and used the "dlfeval" function to code a custom training loop. We used "sobolset" function from the Statistics and Machine Learning Toolbox to create datasets for \(x_{i}\) and \(t_{i}\). The neural network was trained via GPU-assisted computation using the Parallel Computing Toolbox.
The neural network training and prediction were performed on a computer equipped with a Core i9-13900KS CPU (Intel, USA) and GeForce RTX 4090 GPU (NVIDIA, USA) with 128 GB of main memory and 24 GB of video memory.
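For reference, the same collocation sets can be generated outside MATLAB with SciPy's quasi-Monte Carlo module (an analogue of `sobolset`); the sizes below match Section IV.1, and the \([-1,1]\) normalization is applied before feeding the network.

```python
import torch
from scipy.stats import qmc

l_tube, T = 1.0, 3.82e-3                  # tube length [m], one period [s]

# N_E = 5000 quasi-random interior points (Sobol warns for non-powers of
# two but still returns the requested number of points).
pts = qmc.Sobol(d=2, scramble=False).random(5000)
x_E = torch.tensor(pts[:, 0] * l_tube, dtype=torch.float32)
t_E = torch.tensor(pts[:, 1] * T, dtype=torch.float32)

t_B = torch.linspace(0.0, T, 1000)        # N_B = N_C = 1000 boundary times
x_P = torch.linspace(0.0, l_tube, 1000)   # N_P = 1000 periodicity points

def normalize(v, lo, hi):
    """Map values from [lo, hi] to [-1, 1], as in Rasht-Behesht et al. [20]."""
    return 2.0 * (v - lo) / (hi - lo) - 1.0
```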
## IV Validation of proposed method
The performance of the proposed method was validated through forward and inverse analysis of the acoustic resonance using ResoNet.
### Forward analysis
## 1 Analysis conditions for forward analysis
The effectiveness of the proposed method was assessed through a forward analysis of the acoustic tube, as shown in Fig. 1 using the boundary conditions described in Section II.2. The length of the acoustic tube, \(l\), was set to 1 m, and the diameter was set to 10 mm.
Note that the forced-flow velocity waveform, given as a boundary condition, is a smoothed Rosenberg wave[41] as shown in Fig. 6. To smooth the waveform, similar to an R++ wave[42], a moving average filter was applied to the original Rosenberg wave. The fundamental frequency \(F_{0}\) of the forced flow waveform was 261.6 Hz (C4 on the musical scale). Therefore, \(T=3.82\times 10^{-3}\) s.
The physical properties of air used in the analysis are listed in Table 1. For the energy loss coefficients, Ishizaka et al. calculated \(R\) by substituting a constant for \(\omega_{c}\) in Eq. (8) [24]. In this study, we calculated \(R\) and \(G\) by substituting \(\omega_{c}=1643.7\) rad/s (corresponding to 261.6 Hz, C4 on the musical scale) into Eqs. (8) and (9), respectively; thus, we obtained \(R=6.99\times 10^{5}\) Pa \(\cdot\) s/m\({}^{4}\) and \(G=3.65\times 10^{-7}\) m\({}^{2}\)/(Pa \(\cdot\) s).
In the forward analysis, the number of nodes in the neural network \(N_{f}\) was set to 200 and the number of FC blocks \(N_{b}\) was set to five. Further, a dataset (\(x_{i}\), \(t_{i}\)) for each loss function was created, as follows. In Eq. (17) for the PDE loss calculation, we used quasi-random numbers generated by the "sobolset" function in MATLAB for \(x_{i}\) and \(t_{i}\), and the number of collocation points \(N_{E}\) was
\begin{table}
\begin{tabular}{c c} Parameter & Value \\ \hline Air density \(\rho\) & 1.20 kg/m\({}^{3}\) \\ Bulk modulus \(K\) & \(1.39\times 10^{5}\) Pa \\ Speed of sound \(c\) & 340 m/s \\ Viscosity coefficient \(\mu\) & 19.0\(\times 10^{-6}\) Pa \(\cdot\) s \\ Heat capacity ratio \(\eta\) & 1.40 \\ Thermal conductivity \(\lambda\) & 2.41\(\times 10^{-2}\) W/(m \(\cdot\) K) \\ Specific heat for const. pressure \(c_{p}\) & 1.01 kJ/(kg \(\cdot\) K) \\ \end{tabular}
\end{table}
Table 1: Physical properties of air.
Figure 6: (Color online) Forced flow velocity waveform (smoothed Rosenberg waveform).
5000. Next, for \(t_{i}\) in Eqs. (19) and (28), used in the calculation of the BC and coupling losses, the range \([0,T]\) was divided into 1000 equal parts to create a dataset of \(t_{i}\); thus, \(N_{B}\) and \(N_{C}\) were 1000. Finally, for \(x_{i}\) in Eq. (22), used in the calculation of the periodicity loss, the range \([0,l]\) was divided into 1000 equal parts to create a dataset of \(x_{i}\); thus, \(N_{P}\) was 1000. As in Rasht-Behesht et al. [20], the value ranges of \(x_{i}\) and \(t_{i}\) were normalized to \([-1,1]\) before being input into the neural network.
## 2 Results of forward analysis
Figure 7(a) shows the analyzed sound pressure in the range of \(x=[0,l]\) and \(t=[0,T]\) after 20,000 training epochs. Although only one period was analyzed, the same result is shown twice along the time axis to confirm the continuity of the waveform at \(t=0\) and \(t=T\). The results of the finite difference method (FDM) are shown for comparison in Fig. 7(b). Figure 7 demonstrates strong agreement between the ResoNet and FDM analysis results. Figure 8 shows the differences between the results of ResoNet and FDM. The difference was less than 1% in most regions; however, some streaked regions with a difference of 2% or more were noted. The regions with large differences are discussed later in this section.
Figure 9 shows the analyzed sound pressure waveforms at \(x=l\) and Fig. 10 shows the frequency spectra. Although differences are observed in the high-frequency domain in Fig. 10, the ResoNet results in the time domain indicate its high accuracy in acoustic resonance analysis.
The regions with large differences in Fig. 8 were discussed considering the sound pressure waveform and frequency spectra. Figure 11 shows the difference and sound pressure waveforms for one period on the same timescale; evidently, the difference is particularly large in region A. The corresponding region in the sound pressure waveform shows a difference between the FDM and ResoNet
Figure 8: (Color online) Error of the ResoNet from the FDM.
Figure 10: (Color online) Frequency spectra of the waveforms of Fig. 9.
Figure 7: (Color online) Analyzed sound pressure.
Figure 9: (Color online) Sound pressure waveforms at \(x=l\).
waveforms at the points indicated by B and C. At point B, the waveform suddenly changes from monotonically decreasing to monotonically increasing, and at point C, it shifts from monotonically increasing to a horizontal phase. Considering that a neural network is a function approximator, such steep waveform changes may not have been approximated well by ResoNet. This is evident from the difference between FDM and ResoNet in the high-frequency region shown in Fig. 10. Thus, similar to other PINNs, ResoNet accurately analyzes in the low-frequency domain; however, its accuracy degrades in the high-frequency domain. This could be improved by modifying the structure of the neural network and learning method, and this remains a subject for future studies.
The training time was 5267 s; however, the trained ResoNet could run a simulation in 0.86 s, which is approximately four times faster than FDM's 3.67 s. Additionally, because ResoNet is a PINN-based method, it allows meshless analysis.
### Inverse analysis
Additionally, we performed an inverse analysis on the acoustic tube, as shown in Fig. 1, using the boundary conditions in Section II.2. The forced-flow waveform and physical properties were identical to those described in Section IV.1.1.
We considered two specific situations for inverse analysis: first, the identification of energy loss coefficients, and second, the design optimization of acoustic tubes. The information provided to ResoNet consists of two waveforms: the flow waveform at \(x=0\) and the sound pressure waveform at \(x=l\).
## 1 Additional loss function for inverse analysis
The loss function for the sound pressure waveform at \(x=l\) was introduced into ResoNet using the following procedure: First, the output \(\hat{\phi}_{i,M}\) is defined as:
\[\hat{\phi}_{i,M}:=F_{w}\left(x_{l},t_{i};\Theta_{w}\right),\quad t_{i}\in[0,T ]\,, \tag{32}\]
where \(x_{l}=l\). The loss function for the sound pressure at \(x=l\) is defined as:
\[L_{M}=\frac{1}{N_{M}}\sum_{i=1}^{N_{M}}\left(\hat{p}_{i,M}-\bar{p}_{i,M} \right)^{2}, \tag{33}\]
where \(N_{M}\) denotes the number of collocation points for the loss. \(\hat{p}_{i,M}\) was obtained from \(\hat{\phi}_{i,M}\) by using Eq. (23) and \(\bar{p}_{i,M}\), the measured sound pressure waveform, was obtained from the analysis results of FDM simulation. The waveform is shown in Fig. 12. The loss function for the entire network is defined as:
\[L_{all}=\lambda_{E}L_{E}+\lambda_{B}L_{B}+\lambda_{P}L_{P}+\lambda_{C}L_{C}+ \lambda_{M}L_{M}, \tag{34}\]
where \(\lambda_{M}\) is the weight parameter of \(L_{M}\).
In the inverse analysis, the number of nodes in the neural network \(N_{f}\) was set to 400 and the number of FC blocks \(N_{b}\) was set to two.
## 2 Case 1: Identification of energy loss coefficients
Assuming that the energy-loss coefficients follow Eqs. (8)-(9) and \(\omega_{c}\) in these equations is unknown, we performed an inverse analysis to determine \(\omega_{c}\). As described at the beginning of Section IV.2, the flow velocity waveform at \(x=0\) (Fig. 6) and the sound pressure waveform at position \(x=l\) (Fig. 12) are given to ResoNet. Because \(\omega_{c}\) is a trainable parameter of the neural network, the optimization problem was formulated as:
\[\min_{\Theta_{w},\Theta_{r},\omega_{c}}L_{all}(\Theta_{w},\Theta_{r},\omega_{ c}). \tag{35}\]
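In the PyTorch sketches above, Eq. (35) amounts to registering \(\omega_{c}\) as one extra trainable parameter and recomputing \(R\) and \(G\) from Eqs. (8) and (9) at every step, so that gradients of the total loss flow into \(\omega_{c}\). The log-parameterization that keeps \(\omega_{c}\) positive is our own choice, not something specified in the paper, and the physical constants are assumed to be taken from Table 1.

```python
import torch

# omega_c initialized 20% below the true value, as in the experiment.
log_wc = torch.nn.Parameter(torch.log(torch.tensor(1.3149e3)))
opt = torch.optim.Adam([{"params": wave_net.parameters()},
                        {"params": [log_wc]}], lr=1e-3)  # lr is an assumption

def loss_coefficients(S, A, rho, mu, eta, c, lam, c_p):
    """Recompute R and G from the current omega_c via Eqs. (8) and (9)."""
    wc = torch.exp(log_wc)
    R = (S / A ** 2) * torch.sqrt(wc * rho * mu / 2)
    G = S * (eta - 1) / (rho * c ** 2) * torch.sqrt(lam * wc / (2 * c_p * rho))
    return R, G
```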
Figure 11: (Color online) Sound pressure waveforms at \(x=l\) and errors in the same time scale.
Figure 12: (Color online) Sound pressure waveform at \(x=l\) (obtained by FDM simulation).
The identification results are shown in Fig. 13. The initial value of \(\omega_{c}\) was set to \(1.3149\times 10^{3}\) (20% error) for a true value of \(1.6437\times 10^{3}\); however, after 100,000 training epochs, the value converged to \(1.6671\times 10^{3}\), which indicated that \(\omega_{c}\) could be identified with an error of 1.42%.
## 3 Case 2: Design optimization of acoustic tube
This section describes the design optimization of the length \(l\) and diameter \(d\) of the acoustic tube that simultaneously satisfy the flow velocity waveform at \(x=0\) and the sound pressure waveform at \(x=l\). The flow velocity waveform at \(x=0\) in Fig. 6 and the sound pressure waveform at \(x=l\) in Fig. 12 were given to ResoNet. Because \(l\) and \(d\) are the trainable parameters of the neural network, the optimization problem was formulated as:
\[\min_{\Theta_{w},\Theta_{r},l,d}L_{all}(\Theta_{w},\Theta_{r},l,d). \tag{36}\]
The identification results are shown in Fig. 14. The initial value of \(l\) was set to 0.8 (20% error) for an optimal value of 1 and the initial value of \(d\) was set to 8 (20% error) for an optimal value of 10. Table 2 indicates that \(l\) and \(d\) were identified with high accuracy with respect to the optimal values after 100,000 training epochs.
## V Conclusion
In this study, we proposed ResoNet, a PINN for analyzing acoustic resonance, and demonstrated its effectiveness by performing a time-domain analysis of acoustic resonance by introducing a loss function for periodicity into a neural network.
The forward analysis performed using an acoustic tube of length 1 m, which is the scale of a musical instrument or car muffler, revealed that acoustic resonance analysis could be performed with sufficient accuracy in the time domain. The analysis accuracy decreased with abrupt changes in the sound pressure waveform and also in the high-frequency region in the frequency domain. Given that this is due to the function approximation capability of the neural network, designing a PINN structure that is more suitable for acoustic analysis is a topic for future studies. The trained ResoNet can perform simulations approximately four times faster than FDM, and as a PINN-based method, ResoNet offers the advantage of meshless analysis.
For the inverse analysis, the identification of energy loss coefficient in the acoustic tube and the design optimization of the acoustic tube were performed. In these inverse problems, the true and optimal values could be identified with high accuracy from the waveform data at the endpoints of the acoustic tube.
These results demonstrate that ResoNet enables the analysis of acoustic inverse problems without requiring specialized analytical models to be created in advance for each problem. This highlights the potential for broad applicability, not only to parameter identification but also to other inverse problems related to 1D acoustic tubes, such as the design optimization of musical instruments and the estimation of glottal flow. In future work, we intend to address these acoustic inverse problems using ResoNet.
###### Acknowledgements.
This work was supported by JSPS KAKENHI Grant Number JP22K14447.
## Appendix A Derivation of wave equation
This section describes the derivation of the wave equation with the energy-loss terms (Eq. (7)). First, recall the telegrapher's equations in the time domain (Eqs. (3)
\begin{table}
\begin{tabular}{c c c} \hline \hline & Optimal & Identified \\ \hline Length \(l\) [m] & 1 & 1.0022 (0.22\% error) \\ \hline Diameter \(d\) [mm] & 10 & 10.009 (0.09\% error) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Optimal and identified value of \(l\) and \(d\).
Figure 14: (Color online) Identification result of \(l\) and \(d\).
Figure 13: (Color online) Identification result of \(\omega_{c}\).
and (4)):
\[\frac{\partial u}{\partial x} =-Gp-\frac{A}{K}\frac{\partial p}{\partial t}, \tag{20}\] \[\frac{\partial p}{\partial x} =-Ru-\frac{\rho}{A}\frac{\partial u}{\partial t}. \tag{21}\]
The velocity potential is defined as:
\[u=-A\frac{\partial\phi}{\partial x}. \tag{22}\]
By substituting Eq. (22) into Eq. (21), we obtain:
\[p=RA\phi+\rho\frac{\partial\phi}{\partial t}. \tag{23}\]
By substituting Eqs. (22) and (23) into Eq. (20), we obtain:
\[\begin{split}-\frac{\partial}{\partial x}\left(A\frac{\partial \phi}{\partial x}\right)&=-G\left(RA\phi+\rho\frac{\partial\phi }{\partial t}\right)\\ &\quad-\frac{A}{K}\frac{\partial}{\partial t}\left(RA\phi+\rho \frac{\partial\phi}{\partial t}\right).\end{split} \tag{24}\]
By expanding and simplifying Eq. (24), we obtain the following wave equation with energy loss terms.
\[\frac{\partial^{2}\phi}{\partial x^{2}}+\frac{1}{A}\frac{\partial A}{\partial x }\frac{\partial\phi}{\partial x}=GR\phi+\left(\frac{G\rho}{A}+\frac{RA}{K} \right)\frac{\partial\phi}{\partial t}+\frac{\rho}{K}\frac{\partial^{2}\phi}{ \partial t^{2}}. \tag{25}\]
|
2304.05078 | TodyNet: Temporal Dynamic Graph Neural Network for Multivariate Time
Series Classification | Multivariate time series classification (MTSC) is an important data mining
task, which can be effectively solved by popular deep learning technology.
Unfortunately, the existing deep learning-based methods neglect the hidden
dependencies in different dimensions and also rarely consider the unique
dynamic features of time series, which lack sufficient feature extraction
capability to obtain satisfactory classification accuracy. To address this
problem, we propose a novel temporal dynamic graph neural network (TodyNet)
that can extract hidden spatio-temporal dependencies without undefined graph
structure. It enables information flow among isolated but implicit
interdependent variables and captures the associations between different time
slots by dynamic graph mechanism, which further improves the classification
performance of the model. Meanwhile, the hierarchical representations of graphs
cannot be learned due to the limitation of GNNs. Thus, we also design a
temporal graph pooling layer to obtain a global graph-level representation for
graph learning with learnable temporal parameters. The dynamic graph, graph
information propagation, and temporal convolution are jointly learned in an
end-to-end framework. The experiments on 26 UEA benchmark datasets illustrate
that the proposed TodyNet outperforms existing deep learning-based methods in
the MTSC tasks. | Huaiyuan Liu, Xianzhang Liu, Donghua Yang, Zhiyu Liang, Hongzhi Wang, Yong Cui, Jun Gu | 2023-04-11T09:21:28Z | http://arxiv.org/abs/2304.05078v1 | # TodyNet: Temporal Dynamic Graph Neural Network for Multivariate Time Series Classification
###### Abstract
Multivariate time series classification (MTSC) is an important data mining task, which can be effectively solved by popular deep learning technology. Unfortunately, the existing deep learning-based methods neglect the hidden dependencies in different dimensions and also rarely consider the unique dynamic features of time series, which lack sufficient feature extraction capability to obtain satisfactory classification accuracy. To address this problem, we propose a novel temporal dynamic graph neural network (TodyNet) that can extract hidden spatio-temporal dependencies without undefined graph structure. It enables information flow among isolated but implicit interdependent variables and captures the associations between different time slots by dynamic graph mechanism, which further improves the classification performance of the model. Meanwhile, the hierarchical representations of graphs cannot be learned due to the limitation of GNNs. Thus, we also design a temporal graph pooling layer to obtain a global graph-level representation for graph learning with learnable temporal parameters. The dynamic graph, graph information propagation, and temporal convolution are jointly learned in an end-to-end framework. The experiments on 26 UEA benchmark datasets illustrate that the proposed TodyNet outperforms existing deep learning-based methods in the MTSC tasks.
Multivariate time series classification (MTSC), Dynamic graph, Graph neural networks (GNN), Graph pooling.
## I Introduction
Multivariate Time Series (MTS) is ubiquitous as a significant type of data in a wide variety of fields, ranging from action recognition [1] and health care [2] to traffic [3], energy management [4], and other real-world scenarios [5, 6, 7]. In the last decade, time series data mining has gradually become an important research topic with the development of data acquisition and storage technology. The mining of time series data mainly covers three tasks: classification, forecasting, and anomaly detection. Multivariate Time Series Classification (MTSC) is the problem of assigning a discrete label to a multivariate time series, which helps people assess the state of the underlying events.
Multivariate time series classification is challenging. On the one hand, the characterization of MTS is more complex than in univariate time series classification tasks, owing to its various temporal order correlations and high dimensionality. On the other hand, unlike temporally invariant classification tasks, the hidden information contained in MTS is more abundant and more difficult to mine.
Numerous approaches have been proposed to address the MTSC problem over the years. Traditional methods have focused on distance similarity or features, such as Dynamic Time Warping with k-Nearest Neighbor (DTW-kNN) [8] and Shapelets [9], which have been validated to be effective on many benchmark MTS datasets. Nonetheless, the above-mentioned methodologies require considerable effort in data pre-processing and feature engineering owing to the extraordinary difficulty of feature selection from the huge feature space. They also fail to adequately capture the temporal dynamic relationships within the time series of each variable.
Recent research has turned to deep learning when solving end-to-end MTSC tasks because of the widespread applications of deep Convolutional Neural Networks (CNN) [10], whose benefit is that deep learning-based method can learn low-dimensional features efficiently rather than dealing with huge amounts of feature candidates. Many methods utilized for MTSC adapted to the multivariate case by converting the models originally designed for univariate time series. Fully Convolutional Network (FCN) [11] has a better overall ranking compared with traditional approaches. MLSTM-FCN [12] augments the FCN with a squeeze-and-excitation block to achieve better performance of classification. In addition, some approaches are specifically dedicated to MTSC by learning the latent features. These types of deep learning-based strategies have shown gratifying results in MTSC tasks. Unfortunately, current deep-learning-based models for the issue of MTSC rarely consider the hidden dependency relationships between different variables.
The dependency relationships can be naturally modeled as graphs. In recent years, Graph Neural Networks (GNNs) have been successfully utilized to processing of graph data by their powerful learning capability for graph structure. The neighborhood information of each node can be captured through the diverse structural information propagation of the graph neural network. Thrillingly, each variable can be characterized as a node in a graph for MTS, which are interconnected through hidden dependencies. Therefore, it is a promising way that models MTS data by graph neural networks to
mine the implicit dependencies between variables along the time trajectory. Spatial-temporal graph neural networks take an external graph structure and time series data as inputs, which can significantly improve performance over methods that do not exploit graph structural information [13]. However, only one graph is constructed over the entire temporal trajectory, which neglects the impact of dynamic processes and limits representational ability. Thus, the following key issues and challenges should be addressed.
* _Mining Hidden Dependencies._ Most models for MTSC focus on the extraction of inter-variable features but neglect the possibility of implicit dependencies between different variables. The question then is how to design an effective framework to capture this information.
* _Dynamic Graph Learning._ Most existing GNN approaches rely heavily on predefined graph structures and only emphasize message propagation without considering the dynamic properties of time series data. Thus, how to dynamically learn the spatial-temporal features and graph structure of MTS is also an important issue.
* _Temporal Graph Pooling._ Most deep learning methods for MTSC pool a large number of dimensions directly at the end of the model, which is known as flat pooling and may cause information loss. Some graph pooling methods have used GNNs to alleviate this [14, 15, 16]; however, they did not consider the internal temporal features.
To address the above issues, we propose a novel **T**emporal **D**ynamic Graph Neural **N**etwork (TodyNet) for multivariate time series classification. For _mining hidden dependencies_, we propose a novel end-to-end framework that discovers the dependence correlations between variables, the characteristics within variables, and the spatial-temporal dependencies of variables by graph construction and learning, temporal convolution, and a dynamic graph neural network, respectively. For _dynamic graph learning_, the internal graph structure can be learned by gradient descent, and we design a graph transform mechanism to propagate the dynamic properties of multivariate time series. For _temporal graph pooling_, a hierarchical temporal pooling approach is proposed to avoid flat pooling and achieve high performance. In summary, the main contributions of our work are as follows:
* _Pioneering._ To the best of our knowledge, this is the first study of multivariate time series classification based on a temporal dynamic graph.
* _Novel Joint Framework._ An end-to-end framework for MTSC is proposed to jointly learn the dynamic graph, graph propagation, and temporal convolution.
* _Dynamic Graph Mechanism._ A dynamic graph processing mechanism is proposed to capture the hidden dynamic dependencies among different variables and between adjacent time slots, enhancing the effectiveness of graph learning.
* _Temporal Graph Pooling._ A brand-new temporal graph pooling is presented that avoids the flatness of conventional pooling methods and is combined with temporal feature extraction simultaneously.
* _Remarkable Effect._ Experimental results show that our method significantly improves the performance of mainstream deep learning classifiers and outperforms state-of-the-art methods.
The remainder of this paper is organized as follows. Section II briefly introduces the related works. Section III is devoted to presenting the problem formulation, the notation, and the definition of the related concept. The details of the proposed TodyNet are described in Section IV. The experimental results and ablation studies are illustrated in Section V, which is followed by the conclusion.
## II Related Work
In this section, we briefly summarize the recent advances in multivariate time series classification tasks and spatial-temporal graph neural networks.
### _Multivariate Time Series Classification_
The multivariate time series classification problem aims to utilize time series data with multivariate features to accurately predict a number of specific classes [17, 18]. In general, distance-based, feature-based, and deep-learning-based methods are the three major types of approaches for MTSC tasks. The Multichannel Deep Convolutional Neural Network (MCDCNN) [19] captures features within variables by 1D convolution and then combines them with a fully connected layer, a pioneering approach that applied CNNs to MTSC. Inspired by this, many effective network architectures rapidly emerged. The Time Series Attentional Prototype Network (TapNet) [20] learns low-dimensional features by a random group permutation method and constructs an attentional prototype network to overcome the issue of limited training labels. Currently, OS-CNN [10] is the latest model achieving state-of-the-art performance, designing an Omni-Scale block to cover the best receptive field size across different datasets. Nevertheless, deep-learning-based models have an obvious limitation: they assume that the dependencies between different variables are identical, so the pairwise relationships between variables cannot be effectively represented. In this case, the graph is the most appropriate data structure to model MTS.
### _Graph Neural Networks_
Since the concept of Graph Neural Networks (GNNs) was introduced [21], there has been an explosive development in the processing of graph information. GNNs follow a local aggregation mechanism in which the embedding vector of each node is computed by recursively aggregating and transforming the information of its neighbors. In recent years, various variants of GNNs have been proposed [22, 23, 24, 25]. For example, the graph convolution network (GCN) [26] is a representative work that extends convolution to the spectral domain by finding the corresponding Fourier basis. GraphSAGE [27] and the Graph Attention Network (GAT) [28] are also typical approaches that generalize convolution to spatial neighbors. To simultaneously address the spatial and temporal dimensions of the data, a new family of GNNs has emerged, i.e., spatial-temporal GNNs, which can learn the graph structure over temporal trajectories [29, 30, 31]. For example, [32] proposed a model named STFGN that fuses newly generated graphs with given spatial graphs to capture hidden spatial dependencies and learn spatial-temporal features. Unfortunately, it is difficult to find a predefined graph structure for MTSC tasks. In addition, most current spatial-temporal GNNs are designed for traffic forecasting, leaving a gap in MTSC tasks that needs to be filled.
## III Preliminaries
This section defines the key concepts and notation of this paper. To begin with, we formulate the problem we are interested in: multivariate time series classification.
### _Problem Formulation_
A Multivariate Time Series (MTS) \(X=\{x_{1},x_{2},...,x_{d}\}\in\mathbb{R}^{d\times l}\) denotes a multivariate variable of dimension \(d\in\mathbb{N}^{*}\), where each series \(x_{i}=\{x_{i1},x_{i2},...,x_{il}\}\), \(i=1,2,...,d\), has length \(l\in\mathbb{N}^{*}\). Given a group of multivariate time series \(\chi=\{X_{1},X_{2},...,X_{m}\}\in\mathbb{R}^{m\times d\times l}\) and the corresponding labels \(\eta=\{y_{1},y_{2},...,y_{m}\}\), where \(y\) is a predefined class label for each multivariate time series and \(m\in\mathbb{N}^{*}\) is the number of time series, MTSC is dedicated to predicting unlabeled multivariate time series by training a classifier \(f(\cdot)\) from \(\chi\) to \(\eta\).
### _Graph Notations and Definitions_
**Graph-related Concepts.** Graphs generally describe relationships among entities in a network. For the task of multivariate time series classification, we give formal definitions of temporal graph-related concepts below.
**Definition 1** (Temporal Graph).: _The temporal graph is constructed from a multivariate time series. A temporal graph is given in the form \(G=(V,E)\) where \(V=\{v_{1},...,v_{n}\}\) is the set of \(n\) nodes and \(E\) is the set of edges. The nodes denote the variables, and the edges denote the relationships among different variables, defined by similarity or by structure learning results._
**Static and Dynamic Graph.** Static and dynamic graphs are distinguished by the temporal dependency of graph construction. Each multivariate time series \(X\) is divided evenly into isometric time slots \(T=\{T_{1},T_{2},...,T_{S}\}\) in chronological order, where \(S\in\mathbb{N}^{*}\) is the number of time slots. Each time slot is denoted by \(T_{i}=\{t_{1+(i-1)s},...,t_{is}\}\), \(i=1,...,S\), with \(|T_{i}|=s>0\). This yields a set of temporal graphs \(G_{T}=\{V,E_{T}\}\) with the same nodes but different edges at each time slot, where \(G_{T}=\{G_{T_{1}},G_{T_{2}},...,G_{T_{S}}\}\) and \(E_{T}=\{E_{T_{1}},E_{T_{2}},...,E_{T_{S}}\}\).
**Definition 2** (Static Graph).: _The static graph is a temporal graph for each multivariate time series with \(S=1\), \(i.e.\) there is no segmentation existence for each multivariate time series, and \(s\) is equal to the length of the whole time series. The size of sets \(|G_{T}|=|E_{T}|=1\)._
**Definition 3** (Dynamic Graph).: _The dynamic graph is a set of temporal graphs for each multivariate time series with \(S>1\), \(i.e.\) edges of the temporal graph have dynamic variations at different time slots. The size of sets \(|G_{T}|=|E_{T}|>1\)._
The dynamic variation of edges means that the weights of edges and the connection relationships among vertices change across different time slots, while the set of vertices stays constant for each multivariate time series.

Fig. 1: The framework of TodyNet. We first split the input time series into \(s\) slices and generate a dynamic graph for each slice. The dynamic graph neural network modules and temporal convolution modules capture spatial and temporal dependencies separately. Afterward, the temporal graph pooling module clusters nodes together with learnable temporal parameters at each layer. The output layer processes the concatenated hidden features for the final classification result.
## IV Proposed Model
In this section, we introduce our newly proposed TodyNet model. We first present the overall framework of TodyNet in Section IV-A, and then describe each component of the model in Sections IV-B through IV-E.
### _Model Architecture Overview_
We first briefly introduce the general framework of our proposed deep learning-based model, TodyNet. As illustrated in Figure 1, TodyNet is composed of the following core modules. To establish the initial relationships between different dimensions, a _graph construction and learning module_ generates a set of graph adjacency matrices corresponding to the different time slots; the elements of these adjacency matrices are learned during training iterations. The _\(k\) dynamic graph neural network modules_ consist of a dynamic graph transform and a dynamic graph isomorphism network. The dynamic graph transform exploits the dynamic associations between different temporal graphs, whose results are embodied in the new adjacency matrices. The dynamic graph isomorphism network processes time series data together with \(k\) _temporal convolution modules_, which capture the spatial-temporal dependencies of multivariate time series.
To avoid the flat pooling problem, the _temporal graph pooling module_ applies a differentiable and hierarchical graph pooling approach with learnable temporal convolution parameters. To obtain the final classification result, the _output module_ uses average pooling and a fully connected layer to compute the score for each category. Our model is described in detail in the following subsections.
### _Construction and Learning of Temporal Dynamic Graph_
Since there is no predefined graph structure for general time series data, we start by presenting a graph construction module that generates adjacency matrices to create the initial associations between dimensions. The hidden dependencies between variables are represented by these graph adjacency matrices and optimized over training iterations. For simplicity, we do not derive this relationship from the time series data itself, but instead encode the adjacency matrices by "shallow" embedding. Each node is assigned two values, representing it as a source and a target node, respectively. We therefore generate two vectors \(\Theta\) and \(\Psi\) of length \(d\) for each time slot \(t\), whose elements are learnable parameters initialized randomly. We then compute the product of \(\Theta^{\mathrm{T}}\) and \(\Psi\), whose result is regarded as the initial adjacency matrix for the time slot; the values of the adjacency matrix are optimized during training. The graph construction is formulated as follows:
\[A=\Theta^{\mathrm{T}}\cdot\Psi \tag{1}\]

\[idx,\,idy=argtopk(A[:,:]),\quad idx\neq idy \tag{2}\]

\[A[-idx,-idy]=0 \tag{3}\]
where \(\Theta=[\theta_{t,1},\theta_{t,2},...,\theta_{t,d}]\) and \(\Psi=[\psi_{t,1},\psi_{t,2},...,\psi_{t,d}]\) are randomly initialized learnable node embeddings, and \(argtopk(\cdot)\) returns the indices of the top-\(k\) largest values of the adjacency matrix \(A\). The adjacency matrix is sparsified by Equations 2-3, which reduces the computational cost. For the adjacency matrix of each time slot, we preserve only the elements with the top-\(k\) largest weights and set all other values to zero.
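A minimal PyTorch sketch of this construction is given below; the module name, the row-wise application of top-\(k\), and the random initialization scheme are our assumptions rather than the authors' released code. One instance would be created per time slot.

```python
import torch
import torch.nn as nn

class GraphConstructor(nn.Module):
    """Learnable adjacency matrix for one time slot, following Eqs. (1)-(3)."""

    def __init__(self, num_nodes: int, k: int):
        super().__init__()
        # Source/target node embeddings Theta and Psi, learned by gradient descent.
        self.theta = nn.Parameter(torch.randn(1, num_nodes))
        self.psi = nn.Parameter(torch.randn(1, num_nodes))
        self.k = k

    def forward(self) -> torch.Tensor:
        A = self.theta.t() @ self.psi                    # Eq. (1): (d, d)
        # Eqs. (2)-(3): keep the k largest weights per row, zero out the rest.
        _, idx = A.topk(self.k, dim=-1)
        mask = torch.zeros_like(A).scatter_(-1, idx, 1.0)
        return A * mask
```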
### _Dynamic Graph Neural Network_
We aim to explore the spatial relationships between different time series dimensions and represent the interaction of their features on graphs. The dynamic graph neural network module focuses on message passing and feature aggregation between nodes. Although some GNN models are able to propagate neighbor messages, existing methods operate only on static graphs. In this paper, we propose a novel model that performs message propagation on dynamic graphs, based on an improved Graph Isomorphism Network (GIN).
**Dynamic Graph Transform.** We design a transformation on the set of dynamic graphs that establishes associations between different graphs. The underlying assumption is that the data in later time slots evolves from the data in earlier time slots. To discretize this, we connect each graph to the graph of the previous time slot. The structure of the Dynamic Graph Transform (DGT) is illustrated in Figure 2.
For each time slot's graph except the first, we add the same number of vertices, which represent the corresponding vertices in the previous time slot's graph. The set of vertices is thus extended to \(\{v_{(t,1)},\ v_{(t,2)},...,\ v_{(t,N)},\ v_{(t-1,1)},\ v_{(t-1,2)},...,\ v_{(t-1,N)}\}\). We assign directed edges from the previous-time vertices to their counterparts at the current time, i.e., we add connections from \(v_{(t-1,n)}\) to \(v_{(t,n)}\) for \(n=1,2,...,N\). The resulting set of graphs is then passed to the next steps.
Fig. 2: Dynamic Graph Transform. The latter graph aggregates information from the previous graph for corresponding nodes.
In practice, it is unnecessary to double the number of vertices: we can aggregate the source-node embeddings directly into the target-node embeddings along the new directed edges and then delete the added vertices.
**Dynamic Graph Isomorphism Network (DyGIN).** The Graph Isomorphism Network (GIN) is one of the most powerful GNNs, able to discriminate graph structures that other popular GNN variants cannot distinguish. Motivated by GIN, we design a novel aggregation method with a dynamic, parallel paradigm, as in Equation 5. In contrast to static GNNs, after the DGT dynamic GNNs process the data of different time slots of the same dimension completely separately; in general, the same vertex at different time slots aggregates information from a different set of vertices. DyGIN is defined as:
\[h_{v}^{(l,t)}=MLP^{(l,t)}\Big{(}\big{(}1+\epsilon^{(l)}\big{)}\cdot h_{v}^{(l-1,t)}+h_{v}^{(l-1,t-1)}+\sum_{u\in\mathcal{N}(v)}\tilde{\omega}_{ij}\cdot h_{u}^{(l-1,t)}\Big{)} \tag{4}\]

\[h_{v}^{(l)}=CONCAT\Big{(}h_{v}^{(l,t)}\ \Big{|}\ t=0,1,...,T\Big{)} \tag{5}\]
where \(h_{v}^{(l,t)}\) is the output of GIN for node \(v\) at time slot \(t\) in layer \(l\), and \(h_{v}^{(l-1,t-1)}\) is the simple implementation of DGT, which applies for \(t\geq 2\) (the second and subsequent graphs). The edge weight \(\omega_{ij}\) is normalized to \(\tilde{\omega}_{ij}\), and \(\epsilon\) is a learnable parameter.
We also provide the adjacency matrix form:
\[H^{l}=MLP\Big{(}\big{(}\tilde{A}+(1+\epsilon^{l})\cdot I\big{)}\cdot H^{l-1}+ H^{l-1}[t_{1}:t_{T-1}]\Big{)} \tag{6}\]
where \(H^{l-1}\) is the output tensor of the \((l-1)\)-th GIN layer, \(H^{l-1}[t_{1}:t_{T-1}]\) is aligned from the second time slot onward, \(\tilde{A}\) is normalized as \(D^{-\frac{1}{2}}AD^{-\frac{1}{2}}\), and \(D\) is the degree matrix of \(A\).
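The adjacency-matrix form of Eq. (6) can be sketched as a single PyTorch layer, shown below. The tensor layout (one feature slice per time slot) and the zero slice concatenated for the first slot are our assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

def normalize_adjacency(A: torch.Tensor) -> torch.Tensor:
    """Symmetric normalization D^{-1/2} A D^{-1/2} used in Eq. (6)."""
    deg = A.sum(-1).clamp(min=1e-6)                # node degrees per slot
    d_inv_sqrt = deg.pow(-0.5)
    return d_inv_sqrt.unsqueeze(-1) * A * d_inv_sqrt.unsqueeze(-2)

class DyGINLayer(nn.Module):
    """One DyGIN layer in the adjacency-matrix form of Eq. (6)."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.eps = nn.Parameter(torch.zeros(1))    # learnable epsilon
        self.mlp = nn.Sequential(nn.Linear(in_dim, out_dim), nn.ReLU(),
                                 nn.Linear(out_dim, out_dim))

    def forward(self, H: torch.Tensor, A: torch.Tensor) -> torch.Tensor:
        # H: (T, d, f) node features per time slot; A: (T, d, d) adjacencies.
        A_tilde = normalize_adjacency(A)
        # DGT term: features of the previous slot, zero for the first slot.
        prev = torch.cat([torch.zeros_like(H[:1]), H[:-1]], dim=0)
        agg = A_tilde @ H + (1.0 + self.eps) * H + prev
        return self.mlp(agg)
```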
### _Temporal Graph Pooling_
In general, pooling is a necessary procedure after the classifier has generated new features. However, universal pooling approaches such as max pooling, mean pooling, and sum pooling aggregate a vector, or even a matrix, of features into a few values; this is _flat_ and may lose much information. To the best of our knowledge, no existing method deals with this issue in time series analysis. In this section, we propose a novel Temporal Graph Pooling (TGP) that combines graph pooling with temporal processing, which alleviates this difficulty.
Temporal Graph Pooling provides a solution through _hierarchical_ pooling. The core idea is to learn to assign nodes to clusters, which controls how many nodes are retained. We apply it hierarchically after each GNN layer, as illustrated in Figure 3.
**Pooling with convolution parameters.** A 2-dimensional convolutional neural network (CNN) layer is designed to merge nodes into clusters according to a given _pooled ratio_ parameter, with nodes treated as the channels from which the CNN extracts features. In order not to break the receptive field window, the kernel size of this convolution is set to the temporal convolution kernel size. Let \(X^{l}\) denote the input node embedding tensor at the \(l\)-th layer; the CNN computing the output embedding tensor \(X^{l+1}\) can be written as:
\[X^{l+1}=\sum_{j=0}^{N^{l}-1}\text{weight}(N^{l+1},j)\star X^{l}_{j}+\text{bias}(N^{l+1}) \tag{7}\]
where \(\star\) is the valid 2D cross-correlation operator and \(N\) denotes the number of nodes entering or leaving the pooling module; note that \(N^{l}\) equals in_nodes and \(N^{l+1}\) equals out_nodes for the \(l\)-th layer.
After generating the output tensor \(X^{l+1}\), we consider how to compute the corresponding adjacency matrix. Observe that the learnable weights \(W^{l}\) have shape [\(N^{l+1}\), \(N^{l}\), 1, kernel_size]; we therefore generate a vector \(V^{l}\in\mathbb{R}^{1\times k}\) of learnable parameters at layer \(l\). A learnable assignment matrix \(M^{l}=W^{l}\cdot V^{l}\in\mathbb{R}^{N^{l+1}\times N^{l}}\) is then obtained, whose rows correspond to the \(N^{l+1}\) output nodes (clusters) and whose columns correspond to the \(N^{l}\) input nodes. Given the matrix \(M\) and the adjacency matrix \(A^{(l)}\) of the input data at this layer, the output adjacency matrix \(A^{(l+1)}\) is generated by:
\[A^{(l+1)}=M^{(l)}A^{(l)}M^{(l)^{T}}\in\mathbb{R}^{N_{l+1}\times N_{l+1}} \tag{8}\]
Equations 7-8 give the overall steps of Temporal Graph Pooling (TGP). In Equation 7, \(X^{l+1}\) represents the output cluster embeddings obtained by aggregating the input embeddings. In Equation 8, \(A^{(l+1)}\) denotes the connection relationships and corresponding weights of the new clusters; each element \(A^{(l+1)}_{ij}\) indicates the connection weight between clusters \(i\) and \(j\). TGP thus implements hierarchical and differentiable graph pooling with temporal information, while optimizing the cluster aggregation during training.
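The following sketch illustrates Eqs. (7)-(8) under our assumptions about the tensor layout (batch, nodes-as-channels, features, time); the construction of \(M\) from the convolution weights and the learnable vector \(V\) follows the description above, but the exact broadcasting details are illustrative.

```python
import torch
import torch.nn as nn

class TemporalGraphPool(nn.Module):
    """Temporal Graph Pooling: cluster nodes with a 2D convolution (Eq. 7)
    and coarsen the adjacency with an assignment matrix M = W . V (Eq. 8)."""

    def __init__(self, in_nodes: int, out_nodes: int, kernel_size: int):
        super().__init__()
        # Nodes act as channels; kernel (1, k) preserves the receptive field.
        self.conv = nn.Conv2d(in_nodes, out_nodes,
                              kernel_size=(1, kernel_size),
                              padding=(0, kernel_size // 2))
        self.v = nn.Parameter(torch.randn(1, kernel_size))

    def forward(self, x: torch.Tensor, A: torch.Tensor):
        # x: (batch, in_nodes, feat, time) -> (batch, out_nodes, feat, time)
        x_pooled = self.conv(x)                        # Eq. (7)
        # Assignment matrix from conv weights (out, in, 1, k) and V (1, k).
        W = self.conv.weight.squeeze(2)                # (out_nodes, in_nodes, k)
        M = (W * self.v).sum(-1)                       # (out_nodes, in_nodes)
        A_pooled = M @ A @ M.t()                       # Eq. (8), batched over slots
        return x_pooled, A_pooled
```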
### _Temporal Convolution_
The temporal convolution (TC) module focuses on capturing temporal dependencies within each dimension. In contrast to other deep learning methods, and in order to highlight the contribution of the graph learning modules, no complex
Fig. 3: An abstract illustration of Temporal Graph Pooling. At each layer, we utilize time convolution to cluster nodes and extract the temporal features. Then we reconstruct adjacency matrices through convolution weights.
processing is performed in this module. Three convolutional neural network layers with different kernel sizes are employed, and neither _dilation_ nor _residual connections_ are used, for simplicity. However, _padding_ is applied to extend the time series so that the output of the temporal convolution has the same length as the input. The output tensor of temporal feature extraction is passed to the GNN module after being divided according to the corresponding time slots.
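A sketch of the TC module is shown below, using the kernel sizes 11/3/3 and channel sizes 64/128/256 reported in the implementation details of Section V; the use of batch normalization and ReLU activations is our assumption.

```python
import torch.nn as nn

class TemporalConvolution(nn.Module):
    """Three plain Conv1d layers with 'same' padding: no dilation and no
    residual connections, so the output length matches the input length."""

    def __init__(self, in_dim: int = 1):
        super().__init__()
        layers, channels, kernels = [], [64, 128, 256], [11, 3, 3]
        prev = in_dim
        for c, k in zip(channels, kernels):
            # Odd kernels with padding k//2 keep the time length unchanged.
            layers += [nn.Conv1d(prev, c, k, padding=k // 2),
                       nn.BatchNorm1d(c), nn.ReLU()]
            prev = c
        self.net = nn.Sequential(*layers)

    def forward(self, x):          # x: (batch * nodes, in_dim, length)
        return self.net(x)
```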
## V Experimental Studies
In this section, extensive experiments are conducted on the UEA benchmark datasets for multivariate time series classification to demonstrate the performance of TodyNet. Furthermore, we validate the key components of TodyNet with a series of ablation experiments. Finally, we visualize the class prototypes and time series embeddings to illustrate the excellent representation capability of TodyNet.
### _Experimental Settings_
**Datasets.** The UEA multivariate time series classification archive1 collects data from a variety of real-world applications covering diverse fields. To ensure a sound experimental setup, we exclude the datasets with unequal lengths or missing values from the archive [33], using all 26 equal-length datasets of the 30 in total; most settings are similar to those of other studies. The datasets range in length from 8 to 17,984 and in dimension from 2 to 1,345. The details of each dataset are shown in Table I.
Footnote 1: [https://www.timeseriesclassification.com](https://www.timeseriesclassification.com)
**Baselines.** To validate the performance of TodyNet, the state-of-the-art or most popular approaches are selected as baselines for comparison. Baseline methods for multivariate time series classification are summarized as follows:
(1) **OS-CNN** and **MOS-CNN**[10]: One-dimensional convolutional neural networks covering receptive fields of all scales; these are the latest deep learning-based methods achieving the highest accuracy for time series classification.
(2) **ShapeNet**[1]: The latest shapelet classifier that embeds shapelet candidates with different lengths into a unified space.
(3) **TapNet**[20]: A novel model that combines the benefits of traditional and deep learning approaches, with a framework containing an LSTM layer, stacked CNN layers, and an attentional prototype network.
(4) **WEASEL+MUSE**[34]: The most effective bag-of-patterns algorithm that builds a multivariate feature vector.
(5) **MLSTM-FCN**[12]: A well-known deep learning framework that obtains representations by augmenting LSTM-FCN with a squeeze-and-excitation block.
(6) **ED-1NN**[10]: One of the most popular baselines based on Euclidean Distance and the nearest neighbor classifier.
(7) **DTW-1NN-I**[10]: One of the most commonly used baselines that process each dimension independently by dynamic time warping with the nearest neighbor classifier.
(8) **DTW-1NN-D**[10]: A similar baseline that processes all dimensions simultaneously.
**Implementation details.** In our model, the number of dynamic graphs for MTS is 4, the \(k\) value for top-\(k\) is set to 3, and the pooled ratio is 0.2. There are 3 layers of temporal convolution with kernel sizes 11, 3, 3 and channel sizes 64, 128, 256. The batch size is set to 16, and the learning rate is set to \(10^{-4}\). We tune the kernel size for a few datasets owing to the large differences in dimension and length across datasets. All experiments are implemented with PyTorch 1.11.0 in Python 3.9.12 and trained for 2,000 epochs (computing infrastructure: Ubuntu 18.04 operating system, NVIDIA GA102GL RTX A6000 GPU with 48 GB GRAM)2.
Footnote 2: The source code is available at [https://github.com/linux1011/TodayNet](https://github.com/linux1011/TodayNet)
**Evaluation metrics.** We evaluate the performance of multiple classifiers over multiple test datasets by computing accuracy, average accuracy, and the number of Wins/Draws/Losses. In addition, we construct an adaptation of the critical difference diagram [35], replacing the post-hoc Nemenyi test with pairwise Wilcoxon signed-rank tests, with cliques formed using the Holm correction, as recommended in [36, 37].
### _Hyperparameter Stability_
We conducted experiments on 12 randomly selected datasets of the UEA benchmark, varying the number of dynamic graphs \(n_{G}\), the pooled ratio \(\eta\), and the learning rate \(lr\), to evaluate the hyperparameter stability of TodyNet. Note that the number of dynamic graphs corresponds to the number of time series slices. _AWR_, _AF_, _BM_, _CR_, _ER_, _FM_, _HMD_, _HB_, _NATO_, _SRS1_, _SRS2_, and _SWJ_ are chosen as the datasets for the experiments in this section. In Figure 4, standard boxplots show the classification accuracies over all selected datasets with respect to changes in \(n_{G}:\{2,4,6,8,10,12,14,16\}\), \(\eta:\{0.05,0.1,0.15,0.2,0.25,0.3,0.5\}\), and \(lr:\{10^{-1},10^{-2},10^{-3},10^{-4},10^{-5}\}\). The blue curve in the figure is the average accuracy of TodyNet over the 12 selected datasets as each parameter varies.
Overall, the performance of the model remains at a high level. The average accuracy fluctuates slightly as \(n_{G}\) increases, since this coarse-grained approach splits datasets with large length differences into the same number of time slots. In terms of overall performance, however, the implicit dependencies extracted by TodyNet are clearly effective for the time series classification task. The performance first grows and then decreases to varying degrees as \(\eta\) changes, because the pooling structure becomes flat if too many or too few nodes are removed at one layer of our pooling approach. In addition, the results show that a suitable value of \(lr\) is beneficial for higher classification accuracy. We therefore set appropriate parameters for each dataset to achieve the best performance of the model.
### _Classification Performance Evaluation_
To validate the performance of TodyNet, we compare it with the baselines on all benchmark datasets; the main results are shown in Table II. The accuracies of the baselines are taken from the original papers or from [10]. A result marked "N/A" means that the corresponding method could not produce a result due to memory or computational restrictions. The best and second-best results are highlighted in bold and underlined, respectively.
Table II indicates that TodyNet achieves the highest classification accuracy on 13 datasets. In terms of average accuracy, TodyNet shows the best performance and stability, at 0.726, compared with the state-of-the-art baselines. In the 1-to-1 Wins/Draws/Losses comparison, TodyNet performs much better than all baselines. In particular, the dynamic graph mechanism of TodyNet yields a significant improvement over state-of-the-art methods on some datasets, such as _FD_, _FM_, and _HMD_. Notably, these datasets are of the "EEG" type, recorded from magnetoencephalography (MEG) and labeled with human behavior. The hidden dependencies between brain signals at different locations jointly determine human behavior, and TodyNet captures such spatio-temporal features well, showing great potential. This also indicates that the dynamic characteristics of time series positively influence the classification results.
On the other hand, we also conduct the Wilcoxon signed-rank test to evaluate the performance of all approaches. Figure 5 shows the critical difference diagram with \(\alpha=0.05\), plotted from the results in Table II. The values in Figure 5 also reflect the average performance rank
Fig. 4: Boxplot showing the accuracies on 12 UEA datasets vs. changes in the number of graphs \(n_{G}\), pool ratio \(\eta\), and learning rate.
Fig. 5: Critical difference diagram on the 26 UEA datasets with \(\alpha=0.05\).
of the classifiers. TodyNet achieves the best overall average rank of 3.2692, which is lower than the ranks of the existing state-of-the-art deep learning-based approaches, such as MOS-CNN, WEASEL+MUSE, and TapNet. Moreover, TodyNet is significantly better than other classifiers, e.g., ShapeNet, MLSTM-FCN, and DTW, because the extraction of hidden dynamic dependencies significantly improves classification performance.
### _Ablation Study_
To validate the effectiveness of the key components that contribute to the improved outcomes of TodyNet, we perform ablation studies on the 26 datasets of the UEA multivariate time series archive. We denote TodyNet without different components as follows:
* **w/o Graph**: TodyNet without temporal graph and graph neural networks. We pass the outputs of the temporal convolution to the output layer directly.
* **w/o DyGraph**: TodyNet without the temporal dynamic graph. We only construct one static graph as one of the inputs of graph neural networks by using the mentioned graph construction method and do not slice the time series.
* **w/o GPool**: TodyNet without the temporal graph pooling. We directly concatenate the outputs of the graph neural networks and pass them to the output module.
The scatter plots in Figure 6 demonstrate the
Fig. 6: Scatter plot for TodyNet, TodyNet without graph mechanism, dynamic graphs, and TodyNet without temporal graph pooling on the 26 UEA MTSC problems.
experimental comparison of removing different components of TodyNet. The horizontal and vertical coordinates of Figure 6 indicate classification accuracy, and points falling in the pale turquoise region indicate that TodyNet is more accurate on the corresponding dataset.
From Figure 6, we can conclude that graph information significantly improves the outcomes of the convolution-based classifier, relying on TodyNet's excellent capability to capture the hidden dependencies among variables and the dynamic associations between time slots. We also observe that the dynamic graph mechanism achieves better model performance than a static graph, because it enables information to flow between isolated but interdependent time slots. In addition, the effect of temporal graph pooling is evident as well: it validates that temporal graph pooling helps aggregate hub data in a hierarchical way, which enhances the performance of the graph neural networks to a great extent. In summary, all ablation results demonstrate that the proposed components of TodyNet are indeed effective for multivariate time series classification tasks.
### _Inspection of Class Prototype_
In this experiment, we show the effectiveness of our trained time series embedding by visualizing class prototypes and their corresponding embeddings. To begin with, we analyze the representations learned by TodyNet over epochs with a heatmap. _SWJ_ is a UEA dataset that records short-duration ECG signals from 4 pairs of electrodes on a healthy male performing 3 different physical activities: standing, walking, and a single jump. We randomly selected 3 samples, each corresponding to one category, from the 15 test samples of _SWJ_ for heatmap visualization. In Figure 7, each row of subplots shows the original signals of all sensors and the learned embedding of each sample under the corresponding class, respectively. The outcomes indicate significant differences between the embeddings learned for different categories. Thus, an obvious conclusion is that TodyNet can clearly distinguish the classes of different time series samples by learning efficient representations.
To further demonstrate the ability of TodyNet to represent time series, we use the t-SNE algorithm [38] to embed the representations into a two-dimensional image for visualization. Figure 8 shows the results on the test sets of _HB_, _HMD_, and _NATO_. For convenience, all class labels are unified as Arabic numerals. As shown in Figure 8, each row corresponds to a different dataset, and the columns show the visualization of the original data and of the embeddings learned by TodyNet, respectively. Samples from the same class are clearly closer to one another, which implies that TodyNet effectively characterizes class prototypes and enables highly accurate classification for different data.
## VI Conclusion
In this paper, a novel framework named the temporal dynamic graph neural network was proposed. To the best of our knowledge, this is the first dynamic graph-based deep learning method to address multivariate time series classification problems. We propose an effective method to extract the hidden dependencies among multiple time series and the dynamic temporal features across different time slots. Meanwhile, a novel temporal graph pooling method is designed to overcome the flatness of pooling in graph neural networks. Our method has shown superb performance compared with other state-of-the-art methods on
Fig. 8: The t-SNE visualization of the representation space for the datasets Heartbeat, HandMovementDirection, and NATOPS.
Fig. 7: Heatmap visualization of representations learned by TodyNet on the StandWalkJump
UEA benchmarks. Two future directions are possible: on the one hand, better classification performance may be achieved by transplanting other temporal convolution methods; on the other hand, reducing the complexity of temporal graph pooling can further improve the efficiency of the model.
| 2302.03787 | Deep Neural Network Uncertainty Quantification for LArTPC Reconstruction | We evaluate uncertainty quantification (UQ) methods for deep learning applied to liquid argon time projection chamber (LArTPC) physics analysis tasks. As deep learning applications enter widespread usage among physics data analysis, neural networks with reliable estimates of prediction uncertainty and robust performance against overconfidence and out-of-distribution (OOD) samples are critical for their full deployment in analyzing experimental data. While numerous UQ methods have been tested on simple datasets, performance evaluations for more complex tasks and datasets are scarce. We assess the application of selected deep learning UQ methods on the task of particle classification using the PiLArNet [1] monte carlo 3D LArTPC point cloud dataset. We observe that UQ methods not only allow for better rejection of prediction mistakes and OOD detection, but also generally achieve higher overall accuracy across different task settings. We assess the precision of uncertainty quantification using different evaluation metrics, such as distributional separation of prediction entropy across correctly and incorrectly identified samples, receiver operating characteristic curves (ROCs), and expected calibration error from observed empirical accuracy. We conclude that ensembling methods can obtain well calibrated classification probabilities and generally perform better than other existing methods in deep learning UQ literature. | Dae Heun Koh, Aashwin Mishra, Kazuhiro Terao | 2023-02-07T22:56:09Z | http://arxiv.org/abs/2302.03787v4 | # Deep Neural Network Uncertainty Quantification for LArTPC Reconstruction.
###### Abstract
We evaluate uncertainty quantification (UQ) methods for deep learning applied to liquid argon time projection chamber (LArTPC) physics analysis tasks. As deep learning applications enter widespread usage among physics data analysis, neural networks with reliable estimates of prediction uncertainty and robust performance against overconfidence and out-of-distribution (OOD) samples are critical for their full deployment in analyzing experimental data. While numerous UQ methods have been tested on simple datasets, performance evaluations for more complex tasks and datasets are scarce. We assess the application of selected deep learning UQ methods on the task of particle classification using the PiLArNet [1] monte carlo 3D LArTPC point cloud dataset. We observe that UQ methods not only allow for better rejection of prediction mistakes and OOD detection, but also generally achieve higher overall accuracy across different task settings. We assess the precision of uncertainty quantification using different evaluation metrics, such as distributional separation of prediction entropy across correctly and incorrectly identified samples, receiver operating characteristic curves (ROCs), and expected calibration error from observed empirical accuracy. We conclude that ensembling methods can obtain well calibrated classification probabilities and generally perform better than other existing methods in deep learning UQ literature.
+
Footnote †: Corresponding author.
## 1 Introduction
Deep learning has largely established itself as a dominant method for machine learning applications, in part due to its competence in a variety of well-known tasks such as image recognition, natural language processing, and automated control applications. As such, scientists in both artificial intelligence and the physical sciences have been investigating ways to realize deep learning's success in more complex domains of fundamental research. The trend for integrating deep learning for physics data reconstruction has been particularly notable in experimental particle physics, where large data generation from particle detectors such as liquid argon time projection chambers (LArTPCs) and the Large Hadron Collider (LHC) naturally prepare fertile grounds for deep learning models.
Using deep learning for fundamental research, however, presents complications that are often omitted in many common industrial use cases, where practitioners generally attend to achieving state-of-the-art with respect to a family of conventional performance metrics. In particular, one of the most pressing issues with using deep neural networks for fundamental research is developing robust and consistent methods for quantifying uncertainties of its predictions. Deep neural networks are unable to recognize out-of-distribution examples and habitually make incorrect predictions with high confidence for such cases [2; 3]. Uncertainty in predictions has had serious consequences while applying deep learning to high-regret and safety-critical applications such as automated driving [4; 5; 6], law enforcement [7], medical sciences[8], etc. Overconfidence for out-of-distribution examples also
demonstrates the need for deep learning models to acknowledge whether a given prediction is to be trusted or not. Undoubtedly, for deep neural nets to be integrated into the physics measurement process, such characteristics of deterministic neural networks must be addressed by an effective method for uncertainty quantification (UQ).
As demand for UQ gradually escalated in domains such as autonomous driving and medicine, UQ methods diversified into a variety of approaches under the name of Bayesian Deep Learning (BDL), but with scarce substantial application in the physical sciences. Moreover, most BDL methods have been benchmarked on simplified datasets (MNIST, CIFAR10), which are not representative of the complexity of the physics data reconstruction process. Modern accelerator neutrino experiments such as ICARUS and DUNE offer ideal grounds for testing the efficacy of BDL for UQ, owing to their recent adoption, with moderate success, of deep learning based reconstruction techniques. The benefit derived from a detailed assessment of different UQ algorithms on a complex, multi-objective task such as LArTPC data reconstruction is two-fold: it allows practitioners in machine learning to evaluate BDL's applicability in a real-world setting, and it enables physicists to design neural networks that produce well-justified uncertainty estimates for rejecting erroneous predictions and detecting out-of-distribution instances.
Practitioners of deep learning in LArTPC reconstruction agree on the need for calibrated uncertainty bounds on deep learning model predictions, along with OOD robustness. However, numerous different uncertainty quantification algorithms have been proposed for deep learning. These range from empirical approaches (such as bootstrapped ensembles) and Bayesian approaches (such as EDL and HMC) to hybrid approaches (such as MC Dropout). None of these have been tested for complex applications such as LArTPC reconstruction. In this investigation, we select the most promising uncertainty quantification approaches from each of these categories and test and evaluate them on critical intermediate reconstruction tasks: particle classification and semantic segmentation. We first briefly summarize the different methodologies and discuss the apparent advantages and disadvantages of each of the proposed models in the following section. We then describe in detail the Monte Carlo generated 3D LArTPC particle image dataset and state any assumptions or additional information used to train and evaluate each model. In Section IV, we present a quantitative performance evaluation of the different UQ models in three settings — single particle classification, multi-particle classification, and semantic segmentation — using a variety of quantitative metrics to measure UQ fidelity.
## 2 Methods of Uncertainty Quantification in Deep Learning
Among numerous models and studies on uncertainty-quantifying neural networks [9; 10], we focus on methods designed for multi-class classification tasks that require minimal changes to popular neural network architectures. In this paper, we consider three class of UQ methods: model ensembling [11], Monte Carlo Dropout (MCD) [12], and Evidential Deep Learning (EDL) [13; 14].
### Notation
Let \(X=\{x^{(1)},x^{(2)},...,x^{(N)}\}\) and \(Y=\{y^{(1)},y^{(2)},...,y^{(N)}\}\) be data and labels in the training set, and let \(\tilde{X}=\{\tilde{x}^{(1)},\tilde{x}^{(2)},...,\tilde{x}^{(M)}\}\) and \(\tilde{Y}=\{\tilde{y}^{(1)},\tilde{y}^{(2)},...,\tilde{y}^{(M)}\}\) denote the test set. A neural network \(f_{\theta}\), parametrized by weights \(\theta\), is trained on \(D_{train}=\{(x^{(1)},y^{(1)}),...,(x^{(N)},y^{(N)})\}\), with logits
given by \(z^{*}=f_{\theta}(x^{*};X,Y)\) and labels \(\hat{y}^{*}=\operatorname*{argmax}_{c}(f_{\theta}(x^{*};X,Y)_{1},...,f_{\theta}(x ^{*};X,Y)_{c})\), for some \(x^{*}\in X^{*}\subset\tilde{X}\).
### Ensembling Methods
Model ensembling in the context of deep learning models refers to the method of training multiple instances of the same architecture with different random initialization seeds. In Naive Ensembling (NE), one trains each member of the ensemble on the same training dataset, resulting in \(N\) networks with identical architecture but different parameter values. Often, to achieve better generalization and stability, Bootstrapped Ensembling (BE) (or bagging) is preferred over naive ensembling. This is done by training each ensemble member on a dataset reorganized by sampling \(N\) examples from the full training set with replacement. If the size of the resampled dataset is equal to that of the original training set, each ensemble member is expected to see approximately 63% of the original training set. For classification, it is standard to use the most common label among the ensemble members as the final prediction, while for regression one usually computes the empirical mean. When an ensemble consists of a collection of neural networks trained with respect to a _proper scoring rule_[15] and often coupled with an optional adversarial training routine, the ensemble is termed _deep ensembles_[11].
Ensemble methods are among the simplest UQ methods and require no changes to the underlying model architecture, although the high computational cost of training \(N\) architecturally identical models and performing \(N\) forward passes for one prediction often renders them inapplicable for memory- or time-intensive tasks.
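A minimal sketch of ensemble prediction is shown below: the majority-vote label for classification, together with the mean softmax probabilities as a posterior approximation. The function name and tensor shapes are illustrative assumptions.

```python
import torch

@torch.no_grad()
def ensemble_predict(models, x):
    """Combine N independently trained classifiers on a batch x.

    `models` is a list of trained networks with identical output
    dimensionality; returns the majority-vote labels and mean probabilities.
    """
    probs = torch.stack([m(x).softmax(dim=-1) for m in models])  # (N, B, C)
    mean_probs = probs.mean(0)                    # averaged posterior estimate
    majority = probs.argmax(-1).mode(dim=0).values  # per-sample majority vote
    return majority, mean_probs
```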
### Monte Carlo Dropout
Monte-Carlo Dropout is a Bayesian technique introduced in [12], in which one approximates the network's posterior distribution over class predictions by collecting samples from multiple forward passes of a dropout-regularized network. _Dropout regularization_[16] involves randomly omitting feature vector dimensions during training, which is equivalent to masking rows of the weight matrices. Including dropout layers mitigates model overfitting and is empirically known to improve model accuracy [16]. A key observation of [12] is that, under suitable assumptions on the Bayesian neural network prior and the training procedure, sampling \(N\) predictions from the BNN's posterior is equivalent to performing \(N\) stochastic forward passes with the dropout layers fully activated. This way, the full posterior distribution may be approximated by Monte Carlo integration of the posterior softmax probability vector \(p(\hat{y}^{*}\mid x^{*};X,Y)\):
\[p(\hat{y}^{*}\mid x^{*};X,Y)\approx\frac{1}{T}\sum_{t=1}^{T}\text{Softmax}( \mathbf{f}_{\theta_{t}}(x^{*};X,Y)), \tag{1}\]
where \(T\) denotes the number of stochastic forward passes. As with ensembling methods, the final prediction of MCDropout for classification is given by the majority vote among all stochastic forward passes. For regression, we again compute the empirical mean. As evident from the apparent similarities, MCDropout networks may also be interpreted as a form of ensemble learning [16], where each stochastic forward pass corresponds to a different realization of a trained neural network.
Implementing MCDropout requires one to modify the underlying neural network architecture to include dropout layers and to configure them to behave stochastically during test time. The location of the dropout layers can critically affect prediction performance, and for convolutional neural networks the decision is often made via trial-and-error [17]. Also, for memory-intensive tasks such as semantic segmentation, sample collection by multiple forward passes can rapidly accumulate towards a high computational cost, similar to ensembling methods.
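A sketch of MCDropout inference is given below; the number of passes \(T=30\) is an arbitrary illustrative choice, and switching only the dropout modules back into training mode is a common implementation pattern rather than a detail stated here.

```python
import torch

def mc_dropout_predict(model, x, T: int = 30):
    """Approximate the posterior with T stochastic forward passes.

    The model is put in eval mode (freezing batch-norm statistics, if any)
    while the dropout layers alone are re-enabled for stochastic sampling.
    """
    model.eval()
    for m in model.modules():
        if isinstance(m, torch.nn.Dropout):
            m.train()                                  # keep dropout active
    with torch.no_grad():
        probs = torch.stack([model(x).softmax(-1) for _ in range(T)])
    return probs.mean(0), probs                        # posterior mean + samples
```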
### Evidential Deep Learning
Evidential Deep Learning (EDL) [13, 14] refers to a class of deep neural networks that exploit conjugate prior relationships to model the posterior distribution analytically. For multi-class classification, the distribution over the space of all probability vectors \(\mathbf{p}=(p_{1},...,p_{c})\) is modeled by a Dirichlet distribution with \(c\) concentration parameters \(\alpha=(\alpha_{1},...,\alpha_{c})\):
\[D(\mathbf{p}\mid\alpha)=\frac{1}{B(\alpha)}\prod_{i=1}^{c}p_{i}^{\alpha_{i}-1}, \tag{2}\]
where \(\alpha_{i}\geq 1\) for all \(i\), \(B(\cdot)\) denotes the \(c\)-dimensional multinomial Beta function, and \(\mathbf{p}\) is in the \(c\)-unit simplex \(\mathcal{S}_{c}\):
\[\mathcal{S}_{c}=\{\mathbf{v}\in\mathbb{R}^{c}:\sum_{i=1}^{c}v_{i}=1,\ v_{i}\geq 0\}. \tag{3}\]
In contrast to deterministic classification neural networks that minimize the cross-entropy loss by predicting the class logits, evidential classification networks predict the concentration parameters \(\alpha=(\alpha_{1},...,\alpha_{c})\). The expected value of the \(k\)-th class probability under the distribution \(D(\mathbf{p}\mid\alpha)\) is then given analytically as
\[\hat{p}_{k}=\frac{\alpha_{k}}{S},\quad S=\sum_{i=1}^{c}\alpha_{i}. \tag{4}\]
To estimate the concentration parameters, several distinct loss functions are available as training criteria. The _marginal likelihood loss_ (MLL) is given by:
\[\mathcal{L}_{MLL}(\theta)=-\log\left(\int\prod_{i=1}^{c}p_{i}^{y_{i}}D( \mathbf{p}\mid\alpha)\ d\mathbf{p}\right). \tag{5}\]
The _Bayes risk_ (posterior expectation of the risk) of the _log-likelihood_ (BR-L) formulation yields:
\[\mathcal{L}_{BR}(\theta)=\int\left[\sum_{i=1}^{c}-y_{i}\log\left(p_{i}\right)\right]D(\mathbf{p}\mid\alpha)\ d\mathbf{p}. \tag{6}\]
The _Bayes risk_ of the _Brier score_ (BR-B) may also be used as an alternative optimization objective:
\[\mathcal{L}_{BS}(\theta)=\int\left\|\mathbf{y}-\mathbf{p}\right\|_{2}^{2}\ D( \mathbf{p}\mid\alpha)\ d\mathbf{p}. \tag{7}\]
From Sensoy et al. [13], analytic integration of the aforementioned loss functions gives closed-form expressions suited for gradient-based optimization of the parameters \(\theta\).
EDL methods have the immediate advantage of requiring only a single forward pass to access the full posterior distribution, at the price of restricting the space of posterior functions to the appropriate conjugate prior forms. Also, EDL methods only require one to modify the loss function and the final
layer of its deterministic baseline (if necessary), which allows flexible integration with complex, hierarchical deep neural architectures similar to the full LArTPC reconstruction chain. However, due to the strong assumptions made on the posterior analytical form, EDL methods are limited to classification and regression tasks as of now. As we later observe, EDL methods generally fall short on various UQ evaluation metrics compared to ensembling and MCDropout, depending on task specifics.
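As an illustration, the closed form of the Brier-score Bayes risk (Eq. 7) reported in Sensoy et al. [13] decomposes into a squared-error term plus a Dirichlet variance term; a sketch is given below, assuming the network outputs evidence through a Softplus so that \(\alpha=\text{evidence}+1\).

```python
import torch
import torch.nn.functional as F

def edl_brier_loss(logits, targets, num_classes):
    """Closed-form Bayes risk of the Brier score for an evidential classifier.

    Follows the analytic integration in Sensoy et al. [13]:
    sum_j (y_j - p_j)^2 + p_j (1 - p_j) / (S + 1), with p = alpha / S.
    """
    evidence = F.softplus(logits)
    alpha = evidence + 1.0                       # Dirichlet concentration >= 1
    S = alpha.sum(-1, keepdim=True)              # Dirichlet strength
    p_hat = alpha / S                            # Eq. (4): expected probabilities
    y = F.one_hot(targets, num_classes).float()
    err = ((y - p_hat) ** 2).sum(-1)             # squared-error term
    var = (p_hat * (1.0 - p_hat) / (S + 1.0)).sum(-1)  # Dirichlet variance term
    return (err + var).mean()
```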
## 3 Evaluating Uncertainty Quantification Methods
### Evaluation Metrics
As stated in [11], the goal of uncertainty quantification for deep learning models is two-fold: to achieve better alignment of the predicted confidence probability with its long-run empirical accuracy, and to serve as a mis-classification or out-of-distribution alarm that can be used for rejecting unconfident predictions. The first condition, which we term _calibration fidelity_, may be evaluated by plotting _reliability diagrams_[18], constructed by binning the predicted probabilities (often termed _confidence_) into equal-sized bins and plotting the bin centers on the \(x\)-axis and the empirical accuracy of the bin members on the \(y\)-axis. The closer the reliability diagram is to the diagonal, the more desirable a given classifier is, in the sense of calibration fidelity. The deviation of a given classifier from the diagonal can be summarized by computing the _adaptive calibration error_ (ACE) [19]:
\[ACE=\frac{1}{K}\frac{1}{R}\sum_{k=1}^{K}\sum_{r=1}^{R}|acc(r,k)-conf(r,k)|. \tag{1}\]
Here, \(K\) denotes the number of unique classes and \(R\) denotes the number of equal-sample bins used to plot the reliability diagram for class \(k\), with confidence \(conf(r,k)\) and corresponding empirical accuracy \(acc(r,k)\). Although the _expected calibration error_ (ECE) [20] is more widely known, we observed in practice that static binning schemes such as ECE are suboptimal for the highly skewed predictive probability distributions common to accurate models.
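A per-class ACE computation with equal-sample bins can be sketched as follows; the bin count of 15 and the naive handling of ties are our assumptions. The multi-class ACE defined above is the average of this quantity over classes.

```python
import numpy as np

def adaptive_calibration_error(confidences, correct, num_bins=15):
    """ACE for a single class: equal-sample bins over sorted confidences,
    averaging |accuracy - confidence| per bin.

    `confidences` and `correct` are 1D arrays of predicted probabilities
    and boolean correctness indicators for the class in question.
    """
    order = np.argsort(confidences)
    conf, corr = confidences[order], correct[order].astype(float)
    # Adaptive (equal-sample) binning, unlike the static bins of ECE.
    bins = np.array_split(np.arange(len(conf)), num_bins)
    gaps = [abs(corr[b].mean() - conf[b].mean()) for b in bins if len(b)]
    return float(np.mean(gaps))
```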
As calibration fidelity measurements using reliability diagrams were originally designed for binary classifiers, there have been numerous proposals for their extension to multi-class classifiers [21; 22; 23]. We consider two relatively simple methods; the first is the standard used in Guo et al. [21], where only the predicted probability of the most confident prediction for each sample is used to plot the reliability diagram. We refer to this mode of assessment as _max-confidence_ calibration fidelity. An alternative method is to evaluate calibration for each of the \(K\) classes separately, as in B. Zadrozny and C. Elkan [23]. We refer to this mode as _marginal_ calibration fidelity.
Another metric of uncertainty quantification measures the model's _discriminative capacity_ for mis-classified or out-of-distribution samples. In practice, uncertainty quantification models can reject predictions based on a numerical estimate of the trustworthiness of the prediction in question. For example, in a classification setting the entropy of the predicted softmax probability distribution (_predictive entropy_) can be used as a measure of confusion, as entropy is maximized when the predictive distribution reduces to a uniform distribution over \(K\) classes. In this construction, it is desirable for the predictive entropy distributions of correctly and incorrectly classified samples to be as separated as possible. To compute the extent of distributional separation, we may use the
first Wasserstein distance [24] between the predictive entropy distributions:
\[W_{1}(u,v)=\inf_{\pi\in\Gamma(u,v)}\int_{\mathbb{R}\times\mathbb{R}}|x-y|\;d\pi(x,y). \tag{10}\]
where \(u\) and \(v\) are two probability distributions and \(\Gamma(u,v)\) is the set of all joint probability measures on \(\mathbb{R}^{2}\) with marginals \(u\) and \(v\). We use the Wasserstein distance with the \(L_{1}\) metric due to its simple computational implementation [24].
Sensitivity may also be measured by computing the area under the receiver operating characteristic curve (AUROC), also known as the concordance statistic (\(c\)-statistic) [25]. Using predictive entropy as the thresholding value, the ROC curve is constructed by plotting the false positive rate (incorrect predictions) on the \(x\)-axis and the true positive rate (correct predictions) on the \(y\)-axis at different threshold levels. In this setting, the AUROC is the probability that a randomly chosen correct prediction has lower predictive entropy than a randomly chosen incorrect prediction [26].
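Both discrimination metrics can be computed directly from the predicted probability matrix; a sketch using SciPy and scikit-learn is shown below, where the negative entropy serves as the score so that confident (low-entropy) predictions rank as positives. The function name and array shapes are illustrative.

```python
import numpy as np
from scipy.stats import entropy, wasserstein_distance
from sklearn.metrics import roc_auc_score

def uq_discrimination_metrics(probs, labels):
    """First Wasserstein distance between predictive-entropy distributions
    of correct vs. incorrect predictions, and the entropy-based AUROC.

    `probs` is an (N, C) array of predicted class probabilities and
    `labels` the (N,) array of true class indices.
    """
    preds = probs.argmax(-1)
    H = entropy(probs, axis=-1)                      # predictive entropy
    correct = preds == labels
    w1 = wasserstein_distance(H[correct], H[~correct])
    auroc = roc_auc_score(correct.astype(int), -H)   # low entropy => correct
    return w1, auroc
```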
## 4 Datasets and Network Architectures
**Single Particle Classification**: We first implement and assess the different UQ models on the simpler task of single particle classification. The single particle dataset consists of
Figure 1: Sparse-CNN layer definitions for architecture reference.
Figure 2: Sparse-CNN block definitions for architecture reference.
Figure 4: Sparse-CNN architecture for semantic segmentation networks.
Figure 5: Architecture outline of multi-particle classification network. The geometric node encoder extracts hand-engineered features relevant to particle classification, such as orientation matrix and major PCA axes.
Figure 3: Sparse-CNN architecture for single particle classifiers.
1024 3D images, each containing a single particle, where all voxels in a given image belong to the same particle ID. The 3D images have one feature dimension corresponding to the amount of energy deposited in a one-voxel-equivalent region of the detector. We use a ResNet [27] type encoder with dropout [16] regularization, where convolution operations are replaced by sparse convolutions implemented in the _MinkowskiEngine_ library [28]. For standard deterministic models, ensembles, and MCDropout, the final prediction probabilities are given by softmax activations, whereas for evidential models the concentration parameters \(\alpha\) are computed from Softplus [29] activations. The single particle dataset contains five particle classes: photon showers (\(\gamma\)), electron showers (\(e\)), muons (\(\mu\)), pions (\(\pi\)), and protons (\(p\)).
**Semantic Segmentation** As segmentation is a classification task on individual pixels, the details of the implementation are mostly identical to those of single particle classification. We employ _Sparse-UResNet_[30] with dropout layers in the deeper half of the network as the base architecture for semantic segmentation and use the 768px resolution PiLArNet [1] MultiPartRain (MPR) and MultiPartVertex (MPV) datasets as the multi-particle datasets. The five semantic labels provided by PiLArNet are the following:
* Shower Fragments: connected components of electromagnetic showers that are above a set voxel count and energy deposition threshold.
* Tracks: particle trajectories that resemble straight lines, mostly originating from muons, pions, and protons.
* Michel Electrons: an electron produced by muon decay at rest.
* Delta Rays: electrons produced from muon tracks via hard scattering
* Low Energy Depositions: cloud of low energy depositions of electromagnetic showers which are not labeled as shower fragments.
**Multi Particle Reconstruction** The MPV/MPR dataset also contains particle type labels for each particle instance in a given image. For multi-particle classification, we take each cluster of voxels belonging to the same particle and reduce the resulting point cloud groups to 1-dimensional feature vectors. The node embedding of each particle consists of geometric features such as its principal component vectors. These feature vectors are then given as input node features to a graph neural network, which performs three message passing operations to incorporate inter-particle relational information.
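A sketch of one plausible geometric node embedding is given below; the exact feature set is not fully specified here, so the choice of centroid, PCA axes, and explained-variance ratios is our illustrative assumption.

```python
import numpy as np

def geometric_node_features(points: np.ndarray) -> np.ndarray:
    """Reduce one particle's voxel cloud (N, 3) to a fixed-length vector.

    Concatenates the cluster centroid, the principal axes (descending
    eigenvalue order), and the explained-variance ratios: a hypothetical
    instance of the hand-engineered features described above.
    """
    center = points.mean(0)
    cov = np.cov((points - center).T)            # 3x3 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)       # ascending eigenvalues
    ratios = eigvals / max(eigvals.sum(), 1e-9)
    return np.concatenate([center,
                           eigvecs[:, ::-1].ravel(),   # major axes first
                           ratios[::-1]])
```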
## 5 Results
### Training Details
The training set consists of 80k images, and the remaining data were split into a 2k validation set used for model selection and an 18k test set used to evaluate the selected models with high statistics. All models were trained until the validation accuracy plateaued, and the network weights achieving the highest validation accuracy were selected for further evaluation on the separate test set. To fully account for possible variations in model accuracy and uncertainty quantification quality due to randomized factors such as parameter initialization, the model selection procedure was repeated with five different random seeds for each model. This results in five independently trained models that share the same architecture but differ in parameter values. We used the Adam optimizer [31] with decoupled weight decay [32].
**Single Particle Classification**: Figure 6 shows the predictive entropy distribution, accuracy, and \(W_{1}\) distance for the single particle classification models. We observe that the distributional separation, as measured by \(W_{1}\), is largest for the ensemble methods, while the evidential model trained on the Brier score is also competitive. In general, ensemble methods achieve the highest accuracy with better distributional separation than Monte Carlo dropout and evidential models. The AUROC values in Figure 8 also reflect the superior discriminative capacity of ensembling.
The calibration curves for single particle classification are shown in the top row of figure 15, and figure 7 illustrates the adaptive calibration error (ACE) values across different subsets of the test set partitioned by true particle id labels. While all UQ models, with the possible exception of EDL-BR-B, achieve better calibration compared to standard deterministic neural networks, ensembling methods have the lowest max-confidence and marginal ACE values.
**Semantic Segmentation**: For segmentation, the best distributional separation is achieved by evidential models, as is evident in Figure 9. The ensemble methods have the highest accuracy and AUROC scores, as shown in figure 10. It is interesting to note that while the distributional separation measured in \(W_{1}\) is greatest for evidential models, their calibration fidelity falls short even with respect to standard deterministic models. As with single particle classification, the best calibration fidelity is realized by ensemble methods.
**Multi Particle Reconstruction**: Since contextual information that is useful in determining a given particle's ID can only be used in a multi-particle setting, we expect a gain in accuracy over the single particle datasets. This approach leads to an overall increase of approximately 5% in classification accuracy for all models. Again, ensemble methods provide the highest \(W_{1}\) distance, overall accuracy, and AUROC values (figures 12, 14) and the best calibration fidelity (figure 13).

Figure 6: Predictive entropy distribution, accuracy, and 1-Wasserstein distance for single particle classification.

Figure 7: Single particle classification adaptive calibration errors (ACEs) for each model and class.

Figure 8: Single particle ROC and percentage rejection curves.

Figure 9: Predictive entropy distribution, accuracy, and 1-Wasserstein distance for semantic segmentation.
The full reliability plots used to calculate ACE values are provided in figures 15 and 16. A tabular summary of results is available in table 1.
Figure 12: Predictive entropy distribution, accuracy, and 1-Wasserstein distance for multi particle classification.

Figure 13: Multi particle classification adaptive calibration errors (ACEs) for each model and class.

Figure 14: Multi particle ROC and percentage rejection curves.

Figure 15: Reliability plots for single and multi-particle classification.

Figure 16: Reliability plots for semantic segmentation.
We include a brief summary of the required GPU memory and time complexity of each model in tables 3 and 2. The batch size is denoted in the Train/Test subcolumn. The time information corresponds to the CPU time it takes for the model to run one iteration of the training or evaluation routine with the denoted batch size. For training, this includes the time required for both model forwarding and gradient backpropagation, while for inference we compute the sum of the evaluation-mode model forwarding time and other post-processing operations (for example, in MC Dropout there is a sample-averaging procedure needed to obtain probability values). The memory value is computed by taking the average of the maximum required GPU memory across 5 different samples. Note that the values for deterministic and naive ensembles are identical, since naive ensembles were constructed from trained deterministic models.
## 6 Categorization of Error Types
Calibration fidelity cannot be examined on a single-image basis, as calibration is a collective property that must be measured by appropriate binning of the test set predictions. However, it is possible to assess the discriminative capacity by observing samples with antipodal entropy values. With predictive entropy values in hand, the class predictions may be divided into four categories: 1) confident (low entropy) correct predictions, 2) uncertain (high entropy) correct predictions, 3) confident errors, and 4) uncertain errors. Among the four groups, confident errors are the most problematic for robust design of deep learning models. Some representative examples are shown in figures 17, 18, and 19. Figure 17 is a high-entropy misclassification example, in which the network cannot confidently decide whether the set of voxels circled in red is an electron, muon, or a pion, in contrast with the confident predictions it gives for the pair of photons and the vertex-attached proton. In figures 18 and 19, the network predicts the vertex-attached shower as an electron with high probability, while for the \(\mu\) and \(\pi\) pair it retains some level of uncertainty. Hence, we observe that the assessment of the network in mis-identifying the shower as an electron is partly justified, as it is difficult to distinguish a photon shower attached to an interaction vertex from an electron shower.
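The categorization above can be reproduced with a short sketch of our own (the entropy threshold `tau` and variable names are illustrative):

```python
import numpy as np

def categorize_predictions(probs, labels, tau):
    """probs: [N, C] predictive probabilities; tau: entropy threshold (nats)."""
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    correct = probs.argmax(axis=1) == labels
    confident = entropy < tau
    return {
        "confident_correct": np.flatnonzero(confident & correct),
        "uncertain_correct": np.flatnonzero(~confident & correct),
        "confident_errors":  np.flatnonzero(confident & ~correct),
        "uncertain_errors":  np.flatnonzero(~confident & ~correct),
    }
```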
## 7 Discussion
We evaluated three different uncertainty quantification methods for deep neural networks on the tasks of single particle classification, multi-particle classification, and semantic segmentation using high resolution 3D LArTPC energy deposition images. The various metrics evaluating calibration fidelity and discriminative capacity lead to a notable conclusion: simple ensembling of a few independently trained neural networks generally achieves the highest accuracy and the best calibration of output probability values. Also, we observe that the quality of uncertainty quantification depends greatly on the type of the classifier's task, and it is often possible for Bayesian models to perform worse than deterministic networks in calibration.
Often, the choices made in hyperparameters and neural network architecture significantly affect the classifier's capacity to achieve the desired performance. It is important to note that the UQ methods presented in this paper do not account for the _structural_ and _hyperparameter_ uncertainty of our models. Extant deep learning uncertainty quantification approaches can only account for aleatoric uncertainty and the parameter-uncertainty component of epistemic uncertainty. Thus, these methods are unable to account for structural (or model-form) uncertainty, which is a component of epistemic uncertainty. While a complete description of epistemic uncertainty is often intractable in practice, it is desirable to assess how much of the variability in a deep classifier's predictions could be attributed to hyperparameter and structural diversity.
Figure 17: Example high-entropy error from a multi-particle evidential GNN (Bayes Risk). The particles that do not originate from a vertex are omitted and are colored in dark navy.

Figure 18: Example low-entropy prediction from a multi-particle evidential GNN (Bayes Risk).

While the out-of-distribution and misclassification resilience of uncertainty-quantifying neural nets may be used for rejecting unreliable predictions, obtaining calibrated probability estimates would provide further credibility in using deep learning techniques for the physical sciences. Post-hoc calibration methods such as temperature scaling [21] train a calibration model (for temperature scaling, a single parameter) after training to obtain calibrated probabilities for a deterministic neural network. As post-hoc methods do not require the classifier to be re-modeled and trained from its initial state, such methods may be better suited for ensuring proper calibration of classifiers under a lower computational budget. Future work will include evaluation of uncertainty-quantifying neural nets and post-hoc calibration methods for a full neutrino physics signal/background classifier, which is built on top of the separate tasks of particle classification and segmentation.
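As an illustration of how little machinery post-hoc calibration requires, the following sketch (ours; not part of the evaluated models) fits the single temperature parameter on held-out validation logits by minimizing the NLL:

```python
import torch

def fit_temperature(val_logits, val_labels):
    """Learn a scalar T > 0 so that softmax(logits / T) is better calibrated."""
    log_t = torch.zeros(1, requires_grad=True)   # optimize log T to keep T positive
    optimizer = torch.optim.LBFGS([log_t], lr=0.1, max_iter=100)
    nll = torch.nn.CrossEntropyLoss()
    def closure():
        optimizer.zero_grad()
        loss = nll(val_logits / log_t.exp(), val_labels)
        loss.backward()
        return loss
    optimizer.step(closure)
    return log_t.exp().item()  # apply as probs = softmax(test_logits / T)
```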
## Acknowledgment
This work was supported in part by funding from Zoox, Inc., and by the U.S. Department of Energy, Office of Science, Office of High Energy Physics, and Early Career Research Program under Contract DE-AC02-76SF00515.
|
2305.12639 | Accelerating Graph Neural Networks via Edge Pruning for Power Allocation
in Wireless Networks | Graph Neural Networks (GNNs) have recently emerged as a promising approach to
tackling power allocation problems in wireless networks. Since unpaired
transmitters and receivers are often spatially distant, the distance-based
threshold is proposed to reduce the computation time by excluding or including
the channel state information in GNNs. In this paper, we are the first to
introduce a neighbour-based threshold approach to GNNs to reduce the time
complexity. Furthermore, we conduct a comprehensive analysis of both
distance-based and neighbour-based thresholds and provide recommendations for
selecting the appropriate value in different communication channel scenarios.
We design the corresponding neighbour-based Graph Neural Networks (N-GNN) with
the aim of allocating transmit powers to maximise the network throughput. Our
results show that our proposed N-GNN offer significant advantages in terms of
reducing time complexity while preserving strong performance and generalisation
capacity. Besides, we show that by choosing a suitable threshold, the time
complexity is reduced from O(|V|^2) to O(|V|), where |V| is the total number of
transceiver pairs. | Lili Chen, Jingge Zhu, Jamie Evans | 2023-05-22T02:22:14Z | http://arxiv.org/abs/2305.12639v2 | # Accelerating Graph Neural Networks via Edge Pruning for Power Allocation in Wireless Networks
###### Abstract
Graph Neural Networks (GNNs) have recently emerged as a promising approach to tackling power allocation problems in wireless networks. Since unpaired transmitters and receivers are often spatially distant, the distance-based threshold is proposed to reduce the computation time by excluding or including the channel state information in GNNs. In this paper, we are the first to introduce a neighbour-based threshold approach to GNNs to reduce the time complexity. Furthermore, we conduct a comprehensive analysis of both distance-based and neighbour-based thresholds and provide recommendations for selecting the appropriate value in different communication channel scenarios. We design the corresponding distance-based and neighbour-based Graph Neural Networks with the aim of allocating transmit powers to maximise the network throughput. Our results show that our proposed GNNs offer significant advantages in terms of reducing time complexity while preserving strong performance. Besides, we show that by choosing a suitable threshold, the time complexity is reduced from \(\mathcal{O}(|\mathcal{V}|^{2})\) to \(\mathcal{O}(|\mathcal{V}|)\), where \(|\mathcal{V}|\) is the total number of transceiver pairs.
Power allocation, Graph neural networks, Threshold, Low complexity
## I Introduction
The proliferation of fifth-generation (5G) communication technology has resulted in an escalating need for high-rate wireless access services. This trend has engendered formidable challenges to the availability of spectrum resources due to their finite nature. As an auspicious technique to alleviate the predicament of spectrum scarcity, device-to-device (D2D) communications have recently garnered considerable attention [1].
In D2D communication, two devices can communicate directly without relying on the involvement of the base station (BS) or access point (AP). The short-range D2D links facilitate high data rates for local devices, reduce power consumption in mobile devices, and relieve the traffic of BSs. However, power allocation in D2D networks is often a non-convex problem and computationally hard. Inspired by recent success in computer science, graph neural networks (GNNs) have been applied to power allocation problems in wireless networks.
In D2D networks, unpaired transmitters and receivers are often spatially distant. In light of the distance-dependent nature of interference decay, eliminating connections between distant nodes appears to be a logical approach towards reducing computational complexity, while not significantly compromising performance.
In [2], the authors proposed the distance-based threshold to reduce computation time by excluding the channel state information (CSI) in GNN when the distance is over a specific threshold. Similarly, the authors in [3] removed the edges between the transmitter to the non-paired receivers if the CSI between them is smaller than a specific threshold.
However, the threshold was chosen arbitrarily, lacking further elaboration. In response to this issue, a recent study explored the delicate balance between performance and time complexity by implementing the threshold [4]. The authors demonstrated that selecting an appropriate threshold can significantly reduce the anticipated time complexity while concurrently preserving favorable performance outcomes. However, the results presented in [4] are restricted to a single path loss exponent; therefore, their generalisability to alternative path loss exponents is not assured. To address this limitation, the present investigation undertakes a comprehensive analysis of thresholds and provides recommendations for selecting the appropriate threshold value corresponding to different path loss exponents. To accomplish this objective, we evaluate various thresholds in terms of their potential to capture the expected interference. Furthermore, we introduce a neighbour-based threshold that enables the reduction of time complexity from quadratic to linear in the number of transceiver pairs.
The contributions of this paper are summarised as follows:
* This research is the first to introduce a neighbour-based threshold approach to GNNs that offers significant advantages in terms of reducing time complexity while preserving strong performance.
* This study is the first to systematically investigate appropriate threshold selection from a stochastic geometry perspective. We provide recommendations for selecting the appropriate threshold value in terms of their potential to catch the expected interference. Our findings highlight the importance of carefully considering these thresholds and the potential implications of their selection, which can vary based on the specific wireless networks in which they are applied.
* We conduct extensive experiments to verify the effectiveness of our proposed guideline for selecting the appropriate threshold. We demonstrate the neighbour-based threshold is preferable under the network density of interest. We show that by choosing a suitable neighbour-based threshold, the time complexity is reduced from \(\mathcal{O}(|\mathcal{V}|^{2})\) to \(\mathcal{O}(|\mathcal{V}|)\), where \(|\mathcal{V}|\) is the total number of transceiver pairs.
## II Preliminaries
### _System Model_
We consider a wireless communication network containing \(T\) transmitters, all of which share the same channel spectrum. We denote the index set for transmitters by \(\mathcal{T}=\{1,2,...,T\}\). For each \(t\) in \(\mathcal{T}\), we define \(\mathrm{D}(t)\) to be the index of the paired receiver. The received signal at the \(\mathrm{D}(t)\)-th receiver for any \(t\in\mathcal{T}\) is given by
\[y_{\mathrm{D}(t)}=h_{t,\mathrm{D}(t)}s_{t}+\sum_{j\in\mathcal{T}\setminus\{t \}}h_{j,\mathrm{D}(t)}s_{j}+n_{\mathrm{D}(t)},\quad t\in\mathcal{T}, \tag{1}\]
where \(h_{t,\mathrm{D}(t)}\in\mathbb{C}\) represents the communication channel between \(t\)-th transmitter and its intended \(\mathrm{D}(t)\)-th receiver, \(h_{j,\mathrm{D}(t)}\in\mathbb{C}\) represents the interference channel between \(j\)-th transmitter and \(\mathrm{D}(t)\)-th receiver. The transmitted data symbol for the \(t\)-th transmitter is represented by \(s_{t}\in\mathbb{C}\), while the additive Gaussian noise at the \(\mathrm{D}(t)\)-th receiver is modeled as \(n_{\mathrm{D}(t)}\sim\mathcal{CN}\left(0,\sigma_{\mathrm{D}(t)}^{2}\right)\). The signal-to-interference-plus-noise ratio (SINR) of the \(\mathrm{D}(t)\)-th receiver is expressed as follows:
\[\mathrm{SINR}_{\mathrm{D}(t)}=\frac{\left|h_{t,\mathrm{D}(t)}\right|^{2}p_{t} }{\sum_{j\in\mathcal{T}\setminus\{t\}}\left|h_{j,\mathrm{D}(t)}\right|^{2}p_{ j}+\sigma_{\mathrm{D}(t)}^{2}},\quad t\in\mathcal{T}, \tag{2}\]
where \(p_{t}=\mathbb{E}\left[|s_{t}|^{2}\right]\) is the power of the \(t\)-th transmitter, and we have the constraint \(0\leq p_{t}\leq P_{\max}\), where \(P_{\max}\) is the maximum power constraint for transmitters. We denote \(\mathbf{p}=[p_{1},\cdots,p_{T}]\) as the power allocation vector. For a given power allocation vector \(\mathbf{p}\) and channel information \(\left\{h_{ij}\right\}_{i\in\mathcal{T},j\in\mathrm{D}(i)}\), the achievable rate \(\mathcal{R}_{\mathrm{D}(t)}\) of the \(\mathrm{D}(t)\)-th receiver is given by
\[\mathcal{R}_{\mathrm{D}(t)}(\mathbf{p})=\log_{2}\bigl{(}1+\mathrm{SINR}_{ \mathrm{D}(t)}\bigr{)},\quad t\in\mathcal{T}. \tag{3}\]
### _Optimisation Problem_
In this study, we address the problem of maximising the weighted sum-rate, a widely studied optimisation problem in the literature [2, 4, 5, 6]. The objective is to maximise the performance under maximum power constraints, which is formulated as,
\[\begin{array}{ll}\underset{\mathbf{p}}{\mathrm{maximise}}&\sum_{t}w_{t} \mathcal{R}_{\mathrm{D}(t)}(\mathbf{p}),\\ \text{subject to}&0\leq p_{t}\leq P_{\max},\quad\forall t\in\mathcal{T},\end{array} \tag{4}\]
where \(w_{t}\in[0,1]\) is the weight for the \(t\)-th transmitter.
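For concreteness, the objective in (4) can be evaluated numerically as in the following sketch (ours; the gain-matrix convention \(G[t,j]=|h_{j,\mathrm{D}(t)}|^{2}\) is an assumption made for illustration):

```python
import numpy as np

def weighted_sum_rate(G, p, w, sigma2):
    """G[t, j] = |h_{j, D(t)}|^2; p, w: length-T power and weight vectors."""
    signal = np.diag(G) * p                  # |h_{t,D(t)}|^2 p_t
    interference = G @ p - signal            # sum over j != t of |h_{j,D(t)}|^2 p_j
    sinr = signal / (interference + sigma2)
    return float(np.sum(w * np.log2(1.0 + sinr)))
```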
## III Threshold-Based Graph Neural Networks for Power Allocation
In this section, we propose a general guideline for selecting appropriate thresholds that achieves reasonably good performance while minimising complexity. Specifically, we provide recommendations in terms of their potential to catch the expected interference. We then formulate the proposed distance-based and neighbour-based graph representations and apply them to the GNNs for power allocation problems.
### _Traditional Graph Representation_
Generally, wireless networks need to be transformed into suitable graph representations for graph neural networks. The most common graph representation for power allocation in D2D networks treats each transmitter-receiver pair as a vertex [2]. The edge between two transceiver pairs can represent the interference between them (see Figure 1 as an example). Normally, this graph representation is a complete graph due to the interference existing between every pair of transceiver pairs. However, this representation method has several drawbacks. First, since unpaired transmitters and receivers are often spatially distant, such an edge might have a negligible effect on power allocation due to interference decay with distance. Besides, the complexity of GNNs depends on the total number of edges [2]. Therefore, this fully-connected graph induces relatively high complexity compared to other graph models.
### _Motivation_
To address these issues, the distance-based threshold was first proposed in [2] to reduce the training workload of GNNs. The authors applied the threshold in the GNN based on the distance between the transmitter and the non-paired receivers. Similarly, the authors in [3] used a channel-based threshold to reduce the complexity by removing a neighbour of a transceiver pair if the CSI is smaller than the threshold. To explain the effect of the threshold, an investigation of the trade-off between performance and complexity when thresholds are used was conducted in [4]. Their simulation results showed that an appropriately chosen threshold reduces the required training time by roughly \(20\%\) while preserving the performance. However, neither of them justifies their choice with theoretical analysis. To overcome this limitation, we systematically study appropriate threshold selection from a stochastic geometry perspective.
Alternatively, we also consider a neighbour-based threshold in GNNs to reduce the complexity while maintaining the good performance of the algorithm. The idea is that instead of fixing the distance, we fix the total number of neighbours: a specific transmitter is only allowed to connect to its \(n\) nearest neighbours.
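A minimal sketch of this neighbour-based pruning (our illustration, assuming 2-D pair locations) is:

```python
import numpy as np

def knn_edges(positions: np.ndarray, n: int) -> np.ndarray:
    """positions: [V, 2] pair locations -> [2, n*V] directed edge list (u -> v)."""
    d = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)             # a vertex is not its own neighbour
    nbrs = np.argsort(d, axis=1)[:, :n]     # the n closest neighbours per vertex
    src = nbrs.reshape(-1)
    dst = np.repeat(np.arange(len(positions)), n)
    return np.stack([src, dst])             # |E| = n|V|, hence O(|V|) edges
```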
### _The Proposed Guideline for Choosing Appropriate Threshold_
Fig. 1: D2D Communication Network.

Appropriate threshold selection is typically achieved by running algorithms for various densities and path-loss exponents. However, this approach is time-consuming due to the dynamic nature of wireless network environments. To address this challenge, we aim to find a good proxy for the appropriate threshold. We first derive a mathematical expression that models the relationship between the threshold and interference. Generally, we expect that the performance of the GNN will improve if it can capture more information about the interference in the network. However, increasing the amount of information captured also results in higher time complexity for the algorithm. To strike a balance between performance and computational efficiency, we hypothesise that if the algorithm can capture a significant proportion of the interference (e.g., \(95\%\)), then the resulting performance will be satisfactory for most practical applications.
Let us consider an infinite network in which transceiver pairs \(i\) are placed at the points of a stationary Poisson process \(\Phi\subset\mathbb{R}^{2}\) of intensity \(\lambda\) [7]. Denote by \(r_{i}\) the distance between the \(i\)-th transceiver pair and the target pair. Assume each transceiver pair transmits with unit power and the path loss is \(g(r)=\min\{1,(\frac{r}{d_{0}})^{-\alpha}\}\) with path loss exponent \(\alpha\) and reference distance \(d_{0}\). Here, we assume \(\alpha>2\) to ensure the expected interference is finite [8]. According to Campbell's theorem [7], the expected interference at the target pair is
\[\begin{split} E[I]=& E[\sum_{i\in\Phi}g(r_{i})]= \lambda\int_{0}^{2\pi}\int_{0}^{\infty}g(r)rdrd\beta\\ =&\pi\lambda d_{0}^{2}+2\pi\lambda\cdot\frac{d_{0}^ {2}}{\alpha-2}=\pi\lambda d_{0}^{2}(1+\frac{2}{\alpha-2})\end{split} \tag{5}\]
where \(dr\) and \(d\beta\) represent differential elements pertaining to the radial and angular directions, respectively.
#### III-C1 Distance-based Threshold
Consider removing all transceiver pairs \(i\) with distance \(r_{i}\geq t\), where \(t\) is the distance-based threshold. Assuming the threshold \(t\geq d_{0}\), the expected interference \(E[I_{d}(t)]\) captured within this area is
\[E[I_{d}(t)]=\pi\lambda d_{0}^{2}(1+2\frac{1-(\frac{t}{d_{0}})^{2-\alpha}}{ \alpha-2}) \tag{6}\]
The interference ratio \(A_{t}\) is defined as the ratio of the expected interference captured when applying a distance-based threshold \(t\) to the expected total interference. The expression is given by,
\[A_{t}=\frac{E[I_{d}(t)]}{E[I]}=\frac{\alpha-2(\frac{t}{d_{0}})^{2-\alpha}}{\alpha} \tag{7}\]
#### III-C2 Neighbour-based Threshold
For a specific transceiver pair, the probability density function (pdf) of the distance \(r\) to its nearest neighbour is given by [8]
\[f_{R_{1}}(r)=2\lambda\pi re^{-\lambda\pi r^{2}}. \tag{8}\]
Therefore, the expected interference from only the closest neighbour is
\[\begin{split} E[I_{n}(1)]=&\int_{0}^{\infty}g(r)f_{R_{1}}(r)dr=\int_{0}^{\infty}g(r)\,2\lambda\pi re^{-\lambda\pi r^{2}}dr\\ =&1-e^{-\lambda\pi d_{0}^{2}}+d_{0}^{\alpha}(\lambda\pi)^{\frac{\alpha}{2}}\int_{\lambda\pi d_{0}^{2}}^{\infty}t^{-\frac{\alpha}{2}}e^{-t}dt\\ =&1-e^{-\lambda\pi d_{0}^{2}}+d_{0}^{\alpha}(\lambda\pi)^{\frac{\alpha}{2}}\Gamma\Big{(}1-\frac{\alpha}{2},\lambda\pi d_{0}^{2}\Big{)}\end{split} \tag{9}\]
where \(\Gamma(s,\,x)=\int_{x}^{\infty}t^{s-1}e^{-t}dt\) is the upper incomplete gamma function. Similarly, the pdf of the distance to the \(n\)-th nearest neighbour (\(n\geq 1\)) is given by [8]
\[f_{R_{n}}(r)=e^{-\lambda\pi r^{2}}\cdot\frac{2\big{(}\lambda\pi r^{2}\big{)}^ {n}}{r(n-1)!} \tag{10}\]
Therefore, the expected interference from only the \(n\)-th closest neighbour is given by
\[\begin{split} E[I_{n}(n)]=&\int_{0}^{\infty}g(r)\cdot f_{R_{n}}(r)dr\\ =&\int_{0}^{d_{0}}\!\!e^{-\lambda\pi r^{2}}\frac{2\big{(}\lambda\pi r^{2}\big{)}^{n}}{r\cdot\Gamma(n)}dr+\int_{d_{0}}^{\infty}(\frac{r}{d_{0}})^{-\alpha}e^{-\lambda\pi r^{2}}\cdot\frac{2\big{(}\lambda\pi r^{2}\big{)}^{n}}{r\cdot\Gamma(n)}dr\\ =&1-\left(\sum_{i=0}^{n-1}\frac{(\lambda\pi d_{0}^{2})^{i}}{i!}\right)\!\!e^{-\lambda\pi d_{0}^{2}}+\frac{d_{0}^{\alpha}(\lambda\pi)^{\frac{\alpha}{2}}}{\Gamma(n)}\Gamma\Big{(}n-\frac{\alpha}{2},\lambda\pi d_{0}^{2}\Big{)}\end{split} \tag{11}\]
We define the interference ratio \(O_{n}\) as the ratio of the expected interference from the \(n\) closest neighbours to the expected total interference. The expression is given by,
\[O_{n}=\frac{\sum_{i=1}^{n}E[I_{n}(i)]}{\pi\lambda d_{0}^{2}(1+\frac{2}{\alpha -2})} \tag{12}\]
By systematically varying the path-loss exponents and intensities, we are able to determine the corresponding distances and numbers of neighbours required to capture \(95\%\) of the expected interference, as detailed in Table I and Table II. Here, we consider practical intensities up to 0.03 [9] and set 1 meter as the reference distance [10].
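The ratios in (7) and (12) can be evaluated numerically to reproduce such tables; the following sketch (ours) integrates (10)-(11) directly rather than relying on incomplete gamma functions with negative first arguments:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gammaln

def ratio_distance(t, alpha, d0=1.0):
    # Eq. (7): fraction of E[I] captured within distance t (t >= d0)
    return (alpha - 2.0 * (t / d0) ** (2.0 - alpha)) / alpha

def expected_interference_nth(n, lam, alpha, d0=1.0):
    # E[I_n(n)] in Eq. (11), evaluated by direct numerical integration
    def integrand(r):
        if r <= 0.0:
            return 0.0
        g = 1.0 if r <= d0 else (r / d0) ** (-alpha)
        log_pdf = (-lam * np.pi * r**2 + n * np.log(lam * np.pi * r**2)
                   + np.log(2.0 / r) - gammaln(n))   # Eq. (10)
        return g * np.exp(log_pdf)
    val, _ = quad(integrand, 0.0, np.inf, limit=200)
    return val

def ratio_neighbours(n, lam, alpha, d0=1.0):
    # Eq. (12): fraction of E[I] captured by the n closest neighbours
    total = np.pi * lam * d0**2 * (1.0 + 2.0 / (alpha - 2.0))
    return sum(expected_interference_nth(i, lam, alpha, d0)
               for i in range(1, n + 1)) / total

# e.g. smallest n capturing 95% of the expected interference
lam, alpha, n = 0.01, 3.0, 1
while ratio_neighbours(n, lam, alpha) < 0.95:
    n += 1
print(n)   # should agree with Table II (5 for lam = 0.01, alpha = 3)
```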
#### III-C3 The Proposed Guideline
To evaluate the relative strengths and weaknesses of distance-based and neighbour-based thresholds, we conducted a simulation using a Poisson point process with varying densities. The simulation was performed on a square area with dimensions \(B=10000m^{2}\), where the number of transceiver pairs was modeled as a Poisson random variable with mean \(\lambda B\) [11]. Once the number of transceiver pairs was determined, they were randomly distributed within the area. In our simulations, we selected theoretical distance-based and neighbour-based thresholds that could achieve a \(95\%\) interference ratio from Tables I and II, and we recorded the variance of the total interference under both types of thresholds. The results for distance-based and neighbour-based thresholds are shown in Table III and Table IV, respectively. We observe that the distance-based threshold tends to perform worse at low intensity since it has a larger variance. This is because, when we fix the distance-based threshold value for every realisation, there exist some cases where none of the transceivers is within this area; the distance-based threshold then fails to capture the dominant interference. The large variance also affects sum-rate performance, which will be verified in Section IV. We also observed that the neighbour-based threshold performs relatively well at both low and high intensity. The neighbour-based threshold is preferable in wireless network optimisation because it guarantees that each target node will be able to connect with at least one neighbouring node. We found that the variance of the neighbour-based threshold is around 100 times smaller than that of the distance-based threshold at lower intensities and higher \(\alpha\). Therefore, the neighbour-based threshold is preferable at the intensities of interest.

\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|} \hline & \(\alpha=3\) & \(\alpha=3.5\) & \(\alpha=4\) & \(\alpha=4.5\) & \(\alpha=5\) & \(\alpha=5.5\) \\ \hline \(\lambda=0.002\) & 2 & 1 & 1 & 1 & 1 & 1 \\ \hline \(\lambda=0.004\) & 3 & 2 & 1 & 1 & 1 & 1 \\ \hline \(\lambda=0.01\) & 5 & 2 & 2 & 1 & 1 & 1 \\ \hline \(\lambda=0.02\) & 9 & 3 & 2 & 2 & 2 & 2 \\ \hline \(\lambda=0.03\) & 13 & 4 & 2 & 2 & 2 & 2 \\ \hline \end{tabular}
\end{table} TABLE II: The number of nearest neighbours required to capture \(95\%\) of the total interference on average.
### _Threshold-based Graph Neural Network_
In this subsection, we introduce the graph representations of both the distance-based and neighbour-based methods and the structure of the proposed GNNs.
#### III-D1 Graph Representation

We define the set of vertices and edges of a graph \(G\) as \(\mathcal{V}\) and \(\mathcal{E}\), respectively. For any given vertex \(v\!\in\!\mathcal{V}\), its set of neighbours is defined as \(\mathcal{N}(v)\).
Let \(V_{v}\) and \(E_{v,u}\) represent vertex features of vertex \(v\) and edge features between vertex \(v\) and vertex \(u\), respectively.
With definitions in place, we define the vertex features of the vertices to be
\[V_{v}\!=\!\{h_{v,D(v)},\!w_{v},\!d_{v,D(v)}\} \tag{13}\]
where \(h_{v,u}\!\in\!\mathbb{C}\) and \(d_{v,u}\!\in\!\mathbb{R}\) are the channel coefficient and distance between \(v\)-th transmitter and \(u\)-th receiver, \(w_{v}\) is the weight for \(v\)-th transmitter. We define the edge features to be
\[E_{v,u}\!=\!\{h_{v,u},\!d_{v,u}\} \tag{14}\]
In the distance-based threshold method, \(v\) is only connected to \(u\) when the distance \(d_{v,u}\) between them is smaller than \(t(\alpha)\), where \(t(\alpha)\) is the appropriate distance-based threshold for path-loss exponent \(\alpha\). For example, as in Figure 2, the edge between vertex \(V_{2}\) and \(V_{3}\) indicates that the distance between receiver \(\mathrm{D}(2)\) and transmitter \(T_{3}\) is within the threshold. In the neighbour-based threshold method, \(v\) is only connected to its \(n\) closest neighbours, where \(n(\alpha,\lambda)\) is the appropriate neighbour-based threshold for path-loss exponent \(\alpha\) and intensity \(\lambda\).
#### III-D2 The Structure of the Graph Neural Network

An illustration of the distance-threshold-based GNN structure is shown in Figure 3. Our proposed GNNs consist of three steps: Pruning, Aggregation, and Combination. First, the edges of a target vertex are pruned based on either the distance-based or the neighbour-based threshold. Then, the target vertex collects information from its remaining neighbours. We adopt MLPs both for aggregating information from the local graph-structured neighbourhood and for combining a vertex's own features with the aggregated information. Besides, we use the SUM operation to retain the permutation-invariance property of the GNN.
The updating rule of the proposed threshold-based GNN at the \(l\)-th layer is given by
For distance-based threshold
\[\mathcal{N}(v)\!=\!\{u\!\in\!\mathcal{V}\!:\!d_{v,u}\!\leq\!t( \alpha)\} \tag{15}\]
For neighbour-based threshold
\[\mathcal{N}(v)\!=\!\{u\!\in\!\mathcal{V}\!:\!u\text{ is one of the }n(\alpha,\!\lambda)\text{ closest neighbours of }v\}\] \[\alpha_{v}^{(l)}\!=\!\mathrm{SUM}\Big{(}\Big{\{}f_{A}\Big{(}m_{u}^{(l-1)},\!E_{vu}\Big{)},\!\forall u\!\in\!\mathcal{N}(v)\Big{\}}\Big{)},\] \[p_{v}^{(l)}\!=\!f_{C}\Big{(}\alpha_{v}^{(l)},\!m_{v}^{(l-1)}\Big{)},\]
where \(\alpha_{v}^{(l)}\) and \(m_{v}^{(l)}\!=\!\{V_{v},p_{v}^{(l)}\}\) represent the aggregated information from the neighbours and embedding feature vector of vertex \(v\), respectively.
The model includes two 3-layer fully connected neural networks, denoted by \(f_{A}\) and \(f_{C}\), with hidden sizes of \(\{6,16,32\}\) and \(\{36,16,1\}\), respectively. Here, \(p_{v}^{(l)}\) represents the allocated power for vertex \(v\), and we initialise the power as \(p_{v}^{(0)}\!=\!P_{\max}\).
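A compact PyTorch sketch of one such layer is given below (ours; the sigmoid output used to enforce \(0\leq p_{v}\leq P_{\max}\) and the use of real-valued channel magnitudes are assumptions made for illustration):

```python
import torch
import torch.nn as nn

class ThresholdGNNLayer(nn.Module):
    """One layer of Eq. (15): SUM-aggregate over pruned neighbours, then combine."""
    def __init__(self, p_max: float = 1.0):
        super().__init__()
        # hidden sizes follow the paper: f_A {6,16,32}, f_C {36,16,1}
        self.f_A = nn.Sequential(nn.Linear(6, 16), nn.ReLU(), nn.Linear(16, 32))
        self.f_C = nn.Sequential(nn.Linear(36, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
        self.p_max = p_max

    def forward(self, m, edge_index, edge_feat):
        # m: [N, 4] embeddings {V_v, p_v} (real-valued, e.g. |h| for channels)
        # edge_index: [2, E] pruned edges (u -> v); edge_feat: [E, 2]
        src, dst = edge_index
        msg = self.f_A(torch.cat([m[src], edge_feat], dim=-1))    # f_A(m_u, E_vu)
        agg = torch.zeros(m.size(0), 32).index_add_(0, dst, msg)  # SUM aggregation
        p = self.p_max * self.f_C(torch.cat([agg, m], dim=-1))    # 0 <= p_v <= P_max
        return p.squeeze(-1)
```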
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|} \hline & \(\alpha\!=\!3\) & \(\alpha\!=\!3.5\) & \(\alpha\!=\!4\) & \(\alpha\!=\!4.5\) & \(\alpha\!=\!5\) & \(\alpha\!=\!5.5\) \\ \hline \(\lambda\!=\!0.002\) & 2.71 & 4.81 & 2.74 & 2.41 & 1.63 & 1.55 \\ \hline \(\lambda\!=\!0.004\) & 1.02 & 0.42 & 2.21 & 2.78 & 2.33 & 2.29 \\ \hline \(\lambda\!=\!0.01\) & 0.37 & 0.41 & 0.04 & 3.34 & 3.01 & 2.97 \\ \hline \(\lambda\!=\!0.02\) & 0.15 & 0.18 & 0.08 & 0.13 & 0.06 & 0.06 \\ \hline \(\lambda\!=\!0.03\) & 0.09 & 0.12 & 0.09 & 0.21 & 0.11 & 0.11 \\ \hline \end{tabular}
\end{table} TABLE IV: The variance of the interference for neighbour-based threshold.
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|} \hline & \(\alpha\!=\!3\) & \(\alpha\!=\!3.5\) & \(\alpha\!=\!4\) & \(\alpha\!=\!4.5\) & \(\alpha\!=\!5\) & \(\alpha\!=\!5.5\) \\ \hline \(\lambda\!=\!0.004\) & 0.97 & 1.90 & 2.83 & 5.52 & 21.05 & 23.12 \\ \hline \(\lambda\!=\!0.01\) & 0.25 & 0.39 & 0.50 & 0.34 & 1.48 & 2.47 \\ \hline \(\lambda\!=\!0.02\) & 0.10 & 0.19 & 0.19 & 0.22 & 0.46 & 0.98 \\ \hline \(\lambda\!=\!0.03\) & 0.06 & 0.12 & 0.12 & 0.14 & 0.28 & 0.58 \\ \hline \end{tabular}
\end{table} TABLE III: The variance of the interference for distance-based threshold.
Fig. 2: D2D Communication Network with distance-based threshold.
## IV Simulations and results
### _Simulation Setup_
We consider channels with large-scale fading and Rayleigh fading, in which the CSI is formulated as \(h_{v,u}=\sqrt{g_{v,u}}r_{v,u}\), where \(g_{v,u}\!=\!\min\{1,\!(\frac{d_{v,u}}{d_{0}})^{-\alpha}\}\), \(r_{v,u}\!\sim\!\mathcal{CN}(0,1)\), \(\alpha\) is the path-loss exponent, and \(d_{0}\) is the reference distance. Here, we consider different intensities within a \(100\!\times\!100\ m^{2}\) area and use 1 \(m\) as the reference distance [10]. Following simulation steps similar to those in Section III-C3, we randomly placed a number of transmitters chosen from \(\{20,\!40,\!100,\!200,\!300\}\) within the designated area, so that the expected intensity \(\lambda\) ranges over \(\{0.002,\!0.004,\!0.01,\!0.02,\!0.03\}\). Here, we consider practical intensities up to 0.03 [9]. Then each receiver was placed at a random location between \(2m\) and \(10m\) away from the corresponding transmitter. To develop an effective power allocation strategy for our wireless network, we adopted the negative sum rate as our objective loss function, as expressed in (16).
\[L(\theta)\!=\!-\hat{\mathbb{E}}_{\mathbf{H}}\!\left\{\!\sum_{t\in\mathcal{T}}\!w _{t}\!\log_{2}(1\!+\!\frac{\left|h_{t,\mathrm{D}(t)}\right|^{2}\!p_{t}}{\sum_ {j\in\mathcal{T}\backslash\{t\}}\left|h_{j,\mathrm{D}(t)}\right|^{2}\!p_{j}+ \sigma_{\mathrm{D}(t)}^{2}})\right\} \tag{16}\]
In practice, we generate 10000 training samples for calculating the empirical loss, and we also generate 2000 testing samples for evaluation. We assumed that only partial CSI (a subset of full CSI) is available to the algorithms.
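The CSI generation described above can be sketched as follows (ours; transmitter and receiver coordinates are placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_csi(tx_pos, rx_pos, alpha, d0=1.0):
    """h_{v,u} = sqrt(g_{v,u}) r_{v,u}, with g = min(1, (d/d0)^-alpha), r ~ CN(0,1)."""
    d = np.linalg.norm(tx_pos[:, None, :] - rx_pos[None, :, :], axis=-1)
    g = np.minimum(1.0, (np.maximum(d, 1e-9) / d0) ** (-alpha))  # clip to avoid d = 0
    r = (rng.standard_normal(d.shape) + 1j * rng.standard_normal(d.shape)) / np.sqrt(2.0)
    return np.sqrt(g) * r
```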
To validate the effectiveness of our proposed guideline, we conduct the experiments among the following four algorithms:
* Full-GNN [2]: This is the state-of-the-art GNN algorithm for the power allocation problem. It should be noted that a fully connected graph is used in this approach.
* D-GNN: Our proposed GNN when the distance-based threshold is applied.
* N-GNN: Our proposed GNN when the neighbour-based threshold is applied.
* WMMSE [5]: This is an advanced optimisation-based algorithm for power allocation in wireless networks; see also [2, 6, 12] for references.
### _Performance_
The distance-based and neighbour-based thresholds required to achieve \(95\%\) normalised performance are shown in Tables V and VI. Comparing the simulation results with Tables I and II, the theoretical guideline provides useful guidance. For the distance-based threshold, our simulations show that the D-GNN requires \(14\) unit distance to achieve good performance. Notwithstanding, as outlined in Section III-C3, there is a significant variance in performance with respect to density. Our empirical findings also indicate that the proposed N-GNN approach yields satisfactory performance even when restricted to the use of only six neighbours. This outcome is primarily attributed to the fact that the six nearest neighbours are typically responsible for the majority of interference under practical densities [9]. Consequently, a limited number of edges suffices to provide the requisite information for the GNN to allocate power effectively.
We conducted a further investigation into the effectiveness of the proposed guideline by setting the threshold to the values that capture \(95\%\) of the interference from Tables I and II for both D-GNN and N-GNN. The sum-rate performance of all the algorithms is presented in Figure 4. The results indicate that N-GNN tends to perform better than D-GNN in all scenarios due to the flexibility of the neighbour-based threshold. It is noteworthy that Full-GNN achieves better performance by using all the available information, but this comes at the cost of increased time complexity, as shown in Figure 5.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline & \(\alpha\!=\!3\) & \(\alpha\!=\!3.5\) & \(\alpha\!=\!4\) & \(\alpha\!=\!4.5\) & \(\alpha\!=\!5\) & \(\alpha\!=\!5.5\) \\ \hline \(90\%\) & 7 & 7 & 7 & 7 & 7 & 7 \\ \hline \(95\%\) & 11 & 11 & 10 & 10 & 10 & 9 \\ \hline \(98\%\) & 14 & 13 & 13 & 13 & 12 & 12 \\ \hline \end{tabular}
\end{table} TABLE V: The unit distance required for GNN to achieve \(90\%\), \(95\%\) and \(98\%\) performance of WMMSE (\(d_{0}\!=\!1\)).
Fig. 3: The structure of distance-threshold-based GNN.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline & \(\alpha\!=\!3\) & \(\alpha\!=\!3.5\) & \(\alpha\!=\!4\) & \(\alpha\!=\!4.5\) & \(\alpha\!=\!5\) & \(\alpha\!=\!5.5\) \\ \hline \(\lambda\!=\!0.002\) & 1 & 1 & 1 & 1 & 1 & 1 \\ \hline \(\lambda\!=\!0.004\) & 2 & 2 & 2 & 2 & 2 & 2 \\ \hline \(\lambda\!=\!0.01\) & 3 & 3 & 2 & 2 & 2 & 2 \\ \hline \(\lambda\!=\!0.02\) & 4 & 4 & 3 & 3 & 3 & 3 \\ \hline \(\lambda\!=\!0.03\) & 6 & 5 & 5 & 4 & 4 & 4 \\ \hline \end{tabular}
\end{table} TABLE VI: The number of nearest neighbours required for GNN to achieve \(95\%\) performance of WMMSE.
### _Time complexity_
The average running time for the algorithms under the same experimental setting is depicted in Figure 5. We observe that N-GNN has significantly lower complexity compared to Full-GNN. For instance, with \(|V|\!=\!200\), N-GNN achieves \(95\%\) of the optimal performance while reducing the required running time by roughly \(63\%\) compared to Full-GNN. This is due to the fact that the time complexity of each GNN layer is \(\mathcal{O}(|\mathcal{V}|+|\mathcal{E}|)\), which is dominated by \(|\mathcal{E}|\), so pruning the edges helps to reduce the complexity. To further investigate the time complexity of N-GNN and D-GNN, the running time for all algorithms when the performance achieves \(98\%\) of WMMSE is shown in Figure 6. We observe that D-GNN requires around \(40\%\) more time than N-GNN when \(T\!=\!250\). Furthermore, the observed time disparities are amplified with an increase in the number of transmitters.
In the neighbour-based GNN, the total number of edges for each transceiver pair is fixed by the threshold \(n(\alpha,\lambda)\). Since the total number of edges is \(|\mathcal{E}|\!=\!n(\alpha,\lambda)|\mathcal{V}|\), the time complexity for the neighbour-based GNN with threshold \(n(\alpha,\lambda)\) will be \(\mathcal{O}((1+n(\alpha,\,\lambda))|\mathcal{V}|)\). Therefore, we can reduce the time complexity from quadratic to linear in the number of transceiver pairs by introducing the neighbour-based threshold.
## V Conclusion
This research provides recommendations for selecting the appropriate threshold value in terms of its potential to capture the expected interference. We are the first to introduce a neighbour-based threshold approach to GNNs and show that, by choosing a suitable neighbour-based threshold, the time complexity for GNNs is reduced from \(\mathcal{O}(|\mathcal{V}|^{2})\) to \(\mathcal{O}(|\mathcal{V}|)\), where \(|\mathcal{V}|\) is the total number of transceiver pairs.
|
2305.05423 | High-throughput Cotton Phenotyping Big Data Pipeline Lambda Architecture
Computer Vision Deep Neural Networks | In this study, we propose a big data pipeline for cotton bloom detection
using a Lambda architecture, which enables real-time and batch processing of
data. Our proposed approach leverages Azure resources such as Data Factory,
Event Grids, Rest APIs, and Databricks. This work is the first to develop and
demonstrate the implementation of such a pipeline for plant phenotyping through
Azure's cloud computing service. The proposed pipeline consists of data
preprocessing, object detection using a YOLOv5 neural network model trained
through Azure AutoML, and visualization of object detection bounding boxes on
output images. The trained model achieves a mean Average Precision (mAP) score
of 0.96, demonstrating its high performance for cotton bloom classification. We
evaluate our Lambda architecture pipeline using 9000 images yielding an
optimized runtime of 34 minutes. The results illustrate the scalability of the
proposed pipeline as a solution for deep learning object detection, with the
potential for further expansion through additional Azure processing cores. This
work advances the scientific research field by providing a new method for
cotton bloom detection on a large dataset and demonstrates the potential of
utilizing cloud computing resources, specifically Azure, for efficient and
accurate big data processing in precision agriculture. | Amanda Issac, Alireza Ebrahimi, Javad Mohammadpour Velni, Glen Rains | 2023-05-09T13:15:19Z | http://arxiv.org/abs/2305.05423v1 | Development and Deployment of a Big Data Pipeline for Field-based High-throughput Cotton Phenotyping Data
###### Abstract
In this study, we propose a big data pipeline for cotton bloom detection using a Lambda architecture, which enables real-time and batch processing of data. Our proposed approach leverages Azure resources such as Data Factory, Event Grids, Rest APIs, and Databricks. This work is the first to develop and demonstrate the implementation of such a pipeline for plant phenotyping through Azure's cloud computing service. The proposed pipeline consists of data preprocessing, object detection using a YOLOv5 neural network model trained through Azure AutoML, and visualization of object detection bounding boxes on output images. The trained model achieves a mean Average Precision (mAP) score of 0.96, demonstrating its high performance for cotton bloom classification. We evaluate our Lambda architecture pipeline using 9,000 images, yielding an optimized runtime of 34 minutes. The results illustrate the scalability of the proposed pipeline as a solution for deep learning object detection, with the potential for further expansion through additional Azure processing cores. This work advances the scientific research field by providing a new method for cotton bloom detection on a large dataset and demonstrates the potential of utilizing cloud computing resources, specifically Azure, for efficient and accurate big data processing in precision agriculture.
## 1 Introduction
The demand for sustainable agriculture has put significant pressure on the agriculture sector due to the rapid growth of the global population. Precision farming techniques enabled by Computer Vision (CV) and Machine Learning (ML) have emerged as promising solutions, where crop health, soil properties, and yield can be monitored, leading to efficient decision-making for agricultural sustainability. Data is gathered through heterogeneous sensors and devices across the field, such as moisture sensors and cameras on rovers. However, the huge number of objects in farms connected to the Internet leads to the production of an immense volume of unstructured and structured data that must be stored, processed, and made available in a continuous and easy-to-analyze manner (Gilbertson and Van Niekerk, 2017). Such acquired data possesses the characteristics of high volume, value, variety, velocity, and veracity, which are all characteristics of big data. In order to leverage the data for informed decisions, a big data pipeline is needed.
One area of agriculture that faces particular challenges with regard to yield prediction is cotton production. The operation of cotton production is faced with numerous challenges, a major one being the timely harvesting of high-quality cotton fiber. Delayed harvesting can lead to the degradation of cotton fiber quality due to exposure to unfavorable environmental conditions. Therefore, to avoid degradation, it is vital to harvest cotton when at least 60% to 75% of bolls are fully opened, but prior to the 50-day benchmark, after which bolls begin to degrade in quality (UGA, 2019). In addition, cotton harvesting is costly, as the machines used for processing can weigh over 33 tons and can also cause soil compaction, hence reducing land productivity (Antille et al., 2016). Finally, a lack of skilled labor and external factors such as climate change, decreasing arable land, and shrinking water resources hinder sustainable agricultural production (FAO, 2009). In this context, heterogeneous and large-volume data is collected using various static and moving sensors. Therefore, it is imperative to develop a platform that can handle real-time streams and manage large datasets for High-Throughput Phenotyping (HTP) applications. However, most conventional storage frameworks adopted in previous studies support only batch query processing and on-premise servers for data processing. Rather than implementing on-premise processing, the adoption of cloud computing can help prevent over- or under-provisioning of computing resources, reducing costly waste in infrastructure for farmers, as shown in (Kiran et al., 2015), which introduced a cost-optimized architecture for data processing through AWS cloud computing resources. Therefore, leveraging cloud computing could be a viable option for developing an efficient and scalable platform for HTP applications.
In this paper, we aim to implement batch and real-time processing using cloud computing which can help prevent over- or under-provisioning of computing resources. For that, we propose a big data pipeline with a Lambda architecture through Azure which allows for the cohesive existence of both batch and real-time data processing at a large scale. This two-layer architecture allows for flexible scaling, automated high availability, and agility as it reacts in real time to changing needs and market scenarios. For testing this pipeline, we train and integrate a YOLOv5 model to detect cotton bolls using the gathered dataset.
### _Lambda Architecture_
Lambda architecture, first proposed in (Marz and Warren, 2013), is a data processing architecture that addresses the problem of handling both batch and real-time data processing by using a combination of a batch layer and a speed layer. In the context of agriculture, various research studies have implemented Lambda architecture pipelines to process and analyze large amounts of sensor data, such as weather data and crop yields, in order to improve crop forecasting and precision agriculture. Very recent works (Roukha et al., 2020; Ouafiq et al., 2022) have demonstrated the feasibility of using a Lambda architecture framework in smart farming.
### _Cloud Computing_
Previous research on big data pipelines has employed on-premise servers for data processing, while the use of cloud computing can substantially reduce the cost for farmers. Cloud providers, such as Microsoft Azure, offer various data centers to ensure availability and provide better security compared to on-premise servers. We propose the adoption of Microsoft Azure Big Data resources to implement a Lambda architecture pipeline in the agriculture industry. Azure Big Data Pipeline is a cloud-based processing service offered by Microsoft that can be utilized for analyzing, processing, and implementing predictive analysis and machine learning-based decisions for agricultural operations.
#### 1.2.1 Azure Data Factory
Azure Data Factory (ADF) allows for the creation of end-to-end complex ETL data workflows while ensuring security requirements. This environment enables the creation and scheduling of data-driven workflows and the ingestion of data from various data stores. It can integrate additional computing services such as HDInsight, Hadoop, Spark, and Azure Machine Learning. ADF is a serverless service, meaning that billing is based on the duration of data movement and the number of activities executed. The service allows for cloud-scale processing, enabling the addition of nodes to handle data in parallel at scales ranging from terabytes to petabytes. Moreover, one common challenge with cloud applications is the need for secure authentication. ADF addresses this issue by supporting Azure Key Vault, a service that stores security credentials (Rawar and Narain, 2018). Overall, the use of ADF in our pipeline allows for efficient and secure data processing at scale.
### _Related Work_
Previous studies have utilized traditional pixel-based CV methods, such as OpenCV, to identify cotton bolls based on their white pixel coloring (Kadeghe et al., 2018). Another study has explored the use of YOLOv4 to detect cotton blooms and bolls (Thesma et al., 2022). Moreover, the integration of big data architecture has been suggested in previous research to optimize agricultural operations (Wolfert et al., 2017). Parallel studies have explored the use of Lambda architecture pipelines as a viable approach to process and analyze large amounts of sensor data, such as weather data and crop yield, in order to improve forecasting for specific crops. For instance, Roukha et al. present a cloud-based solution, named WALLeSMART, aimed at mitigating the big data challenges facing smart farming operations (Roukha et al., 2020). The proposed system employs a server-based Lambda architecture on the data collected from 30 dairy farms and 45 weather stations. Similarly, Ouafiq et al. integrate a big data pipeline inspired by Lambda architecture for smart farming for the purposes of predicting drought status, crop distributions, and machine breakdowns (Ouafiq et al., 2022). The study suggests the benefits of flexibility and agility when utilizing a big data architecture. Furthermore, cloud-based solutions have become increasingly popular in agriculture due to their scalability and cost-effectiveness. Another study employs big data in the cloud related to weather (climate) and yield data (Chen et al., 2014).
### _Summary of Contributions and Organization of the Paper_
This paper focuses on the use of Microsoft Azure resources to implement and validate a Lambda architecture High-throughput Phenotyping Big Data pipeline for real-time and batch cotton bloom detection, counting, and visualization. We develop data reduction and processing to transfer useful data and separately train a YOLOv5 object detection model and integrate it into our big data pipeline. The pipeline was thoroughly tested and demonstrated through the analysis of a set of 9000 images.
Despite existing research work on the use of Lambda architecture and its benefits, there is still a lack of studies that elaborate on the development process and tools to construct this architecture. Moreover, there has been limited research on the application of Lambda architecture utilizing cloud computing resources, as most are server based. **To the best of our knowledge, there is no previous study that elaborates on the implementation of a big data Lambda architecture pipeline utilizing cloud computing resources, specifically Azure, while integrating advanced machine learning models for plant phenotyping applications**. While big data analytics and cloud computing have become increasingly popular in precision agriculture, the integration of these technologies with Lambda architecture for plant phenotyping (our case, cotton) remains an open research area. Our approach demonstrates the efficacy of utilizing cloud-based resources for the efficient and accurate analysis of large-scale agricultural datasets.
This paper makes several contributions to the research field, which are listed as follows:
1. Introducing a Lambda architecture pipeline that takes into account batch and real-time processing, providing an efficient and scalable solution for data analysis.
2. Utilizing cloud computing resources, specifically Microsoft Azure, to improve the performance and reliability of the proposed pipeline.
3. Demonstrating the actual implementation tools and processes used to build the proposed pipeline, enabling other researchers to replicate and build upon our work.
4. Integrating a big data pipeline for cotton plant phenotyping, which enables the efficient analysis of large volumes of data and provides new insights into the growth and development of cotton plants.
5. Contributing a new cotton field dataset to the research community, which is currently limited, enabling other researchers to validate and build upon our findings.
The remainder of the paper is organized as follows: Section 2 describes data retrieval; Section 3 summarizes the implemented Lambda architecture pipeline; Section 4 provides a summary of offline AI-based object detection model training; Section 5 discusses fine-tuning methods to optimize the continuous pipeline run-time and showcases final results, and lastly in Section 6, we discuss areas for future work and concluding remarks.
## 2 Dataset
In this study, we employed our own cotton field dataset to evaluate the proposed pipeline for phenotyping analysis. The cotton field dataset will be further elaborated in subsequent sections, including data collection procedures and data preprocessing steps.
### Cotton Research Farm
The cotton data was collected using a stereo camera that was installed on an autonomous ground vehicle deployed in a research farm at the University of Georgia's Tifton campus in Tifton, GA. Figure 1 illustrates an aerial view of the farm. The treatments described in Figure 1 are 4-row wide, but we collected data on the inner 2-rows for post-analysis as discussed later.
### Cotton Field Data Collection
In our data collection efforts, we employed a rover developed by West Texas Lee Corp. (Lubbock, Texas). As described in (Fue et al., 2020), this rover is a four-wheel hydrostatic machine with a length of 340 cm, front and rear axles 91 cm from the center, and a ground clearance of 91 cm. It was powered by a Predator 3500 Inverter generator and equipped with the Nvidia Jetson Xavier for remote control and vision and navigation systems. With a top speed of approximately 2 kilometers per hour, the rover was able to efficiently traverse the study area. To power its electronics, the rover utilized two 12-Volt car batteries, as well as a ZED RGB stereo camera.
The ZED stereo camera, with left and right sensors 120 cm apart and mounted 220 cm above the ground facing downward (Fue et al., 2020), was chosen for its ability to perform effectively in outdoor environments and provide real-time depth data. It captured 4-5 frames per second and recorded a video stream of each 4-row treatment from June 2021 to October 2021, 2-3 days per week, as a ROS bag file.
### Dataset Creation
In this study, a camera equipped with two lenses was utilized to capture images of cotton plants. The camera captured both left and right views of the plants, with a total of 765 image frames extracted from sixteen 4-row treatments on each data collection day between July 14, 2021 and August 6, 2021, where blooms began to appear. These frames were labeled in ascending numerical order to ensure proper correspondence with the video stream and prevent any overlapping. The 765 image frames were subsequently divided into separate sets for the left and right lens views, resulting in a total of 1,530 frames. An example of the image frames captured by the left and right lens can be seen in Figure 3.
Previous research has shown that the small proportion of blooms relative to the background in cotton field images can make it difficult for neural network models to accurately detect the blooms (Thesma et al., 2022). To address this issue, we pre-processed the images by dividing them into five equal slices. As noted above, the treatments are four rows wide, but only the inner two rows were retained when slicing. An example of the resulting images, used as input for the subsequent analysis pipeline, is shown in Figure 4.
Figure 1: Aerial view of our cotton farm in Tifton, GA displaying 40 rows of cotton plants, with treatments of two planting populations (2 and 4 seeds per foot), HD (Hildrop), and single-planted cotton seed. Two row spacings, 35 inches and 40 inches, were also used as treatments. Each treatment was 4 rows wide and 30 feet long. There were three repetitions per treatment.
Figure 2: Front view of the rover with the robotic arm, vacuum, and sensors mounted on the rover (see (Fue et al., 2020) for details) that was used to collect video streams of cotton plants in Tifton, GA.
We selected a dataset consisting of sliced images from 10 specific days in 2021: July 8, July 14, July 16, July 19, July 23, July 26, July 29, August 4, August 6, and September 9. This resulted in a total of 9,018 images with 3 color channels (RGB) with dimensions of 530 \(\times\) 144 for testing batch processing and creating the offline object detection model. The dataset in this study comprised diverse cotton plant data, locations, and treatments, as the video streams were collected from various rows on different days.
## 3 Development of Lambda Architecture Pipeline
In this work, we propose a Lambda architecture to enable real-time analytics on top of a distributed storage framework that traditionally supports only batch processing. The proposed architecture consists of three main layers: batch, speed, and serving. The batch layer is responsible for processing large amounts of historical data on a schedule, while the speed layer handles real-time streams of data. The serving layer serves the processed data to clients, applications, or users. This approach allows for the efficient handling of both historical and real-time data, enabling a wide range of analytical capabilities. We illustrate the Lambda architecture using Azure resources in Figure 5. In order to achieve real-time ingestion, we utilize Azure Data Factory's event-based trigger, which fires an event when an image is uploaded to the storage account; this event is handled by Azure Event Grid for real-time streams. In comparison, batch ingestion is triggered by a scheduled event. Once ingested into Azure Data Factory, the pipeline connects to Databricks for preprocessing of the image data. The processed data is then forwarded to a deployed AI object detection model, running on a Kubernetes cluster, to retrieve the designated bounding box coordinates for the image. Finally, Databricks draws the bounding boxes and outputs the image. The development process is elaborated in the following subsections.
To initiate our analysis, we established an Azure Data Factory workspace. The Azure Data Factory portal allows monitoring the pipelines' status in real time. In order to use the Data Factory, we had to create a resource group, a container for holding related resources for our Azure solution. For this work, we opted to ingest binary unstructured data from Azure Blob storage into Azure Data Lake. This allowed us to efficiently process and store large volumes of data for subsequent analysis.
Azure Blob storage is a highly scalable unstructured data object storage service. To use Blob storage and create an Azure Data Lake, we first had to initialize a storage account. Azure Storage is a Microsoft-managed service that provides cloud storage and a REST API for CRUD operations. For this project, we configured the storage account to use locally redundant storage (LRS) for data replication, as it is the least expensive option. We also set the blob access tier to 'hot' to optimize for frequently accessed and updated data. The storage account's data protection, advanced, and tags settings were left at their default values. Overall, the use of Azure Blob storage and the creation of an Azure Data Lake allowed us to efficiently store and process large volumes of unstructured data for our analysis.
Microsoft Azure Data Lake is a highly scalable data store for unstructured, semi-structured, and structured data (Rawar and Narain, 2018). It is compatible with Azure services and a variety of additional tools, making it capable of performing data transformation and handling large volumes of data for analytics tasks. To separate the stream and batch processing in our pipeline, we created two separate blob containers labeled 'batch' and 'stream'. Files ingested into the 'batch' folder are processed by a scheduled trigger designed for batch processing, while files ingested into the 'stream' folder trigger real-time processing. This allows us to efficiently handle both historical and real-time data in our analysis.
Figure 4: Example of cotton field pipeline input image after preprocessing prior to data ingestion into the pipeline.
Figure 5: Illustration of the proposed pipeline utilizing Azure resources
Figure 3: Example of cotton field dataset image after image extraction from original bag files.
### Speed Layer
The stream layer of the Lambda architecture is designed for real-time analysis of incoming data streams. It is generally not used for training machine learning models, but rather for applying pre-trained models to classify or predict outcomes for the incoming data. This allows the pipeline to provide real-time insights, which are crucial when timely action is required based on the data. For example, real-time analysis of cotton bloom location and density can enable farmers to take immediate action. Another benefit of the stream layer is its ability to handle high-volume data streams with low latency, which can be a challenge for traditional batch processing systems that may suffer from delays in the availability of insights.
#### 3.1.1 Ingestion
To enable real-time processing in our pipeline, we implemented a file storage trigger in the stream layer. This trigger initiates the pipeline in real time whenever a new image is added to the blob storage. This approach allows us to automate the data processing and analysis pipelines, reducing the need for manual intervention. Additionally, the file storage trigger is compatible with other services such as Azure IoT Hub, enabling us to process data ingested from IoT devices for scalability. This approach allows us to efficiently and effectively analyze data as it is generated, in near real time.
The creation of a real-time trigger in Azure Data Factory also generates an event grid in Azure. Event Grid is a messaging service in Azure that enables the creation of event-driven architectures. It can be used to trigger actions such as running a pipeline. In our case, the event grid listens for events in the input source (blob storage) and, upon detecting a new event, sends a message to the Data Factory service to trigger the execution of the pipeline. This allows for the automation of the pipeline process. To transfer data from blob storage into the data lake, we created a connection between the Data Factory and the Data Lake, using a Copy Activity in a Data Factory pipeline to copy data from the source store into the Data Lake sink.
In our pipeline, we use two separate folders as input sources, each with its own trigger (batch and stream). To facilitate this configuration, we parameterized the input file name to accommodate the separate cases of the stream and batch layers. By adopting the parameterization of the data folder input as dynamic, we were able to alter the folder used as the input source without modifying the pipeline itself. This approach allows us to flexibly configure the input sources for our pipeline without the need for additional maintenance or modification.
### Batch Layer
The batch layer of our pipeline serves as the primary repository for the master dataset and allows us to view a batch view of the data prior to computation. The layer plays a crucial role in managing and organizing the dataset, enabling efficient analysis and processing. We can divide this batch data into smaller batches to train machine learning models on large datasets more quickly, independently, in parallel, and with fewer computational resources. It also helps with the scalability of a machine learning system, as the system will be able to handle larger datasets, optimize the training process, and improve the performance of the resulting model.
#### 3.2.1 Ingestion
For our experiments, we selected a dataset consisting of sliced images from 10 specific days in 2021, which resulted in a total of over 9,000 images. The dataset is stored in Azure Blob Storage, a scalable cloud-based object storage service that is capable of storing and serving large amounts of data. This scalability, compatible with terabytes of data, makes it well-suited for use in data-intensive applications such as ours. To accommodate larger volumes of data, Blob Storage is engineered to scale horizontally by automatically distributing data across multiple storage nodes. This allows it to handle increases in data volume and access requests while eliminating additional manual provisioning or configuration.
In our pipeline, we integrated a batch trigger in addition to the stream layer trigger. This trigger is of the batch type, allowing us to specify a predetermined schedule for execution. The schedule can be fixed, such as running every day at a specific time, or dynamic through a CRON expression, a standard syntax for specifying recurring schedules. For the purposes of our experimentation, the trigger is calibrated to run every 3 minutes. However, the flexibility of the batch trigger schedule allows the frequency of execution to be customized to meet our specific data collection and processing needs. For example, the trigger can be executed on a weekly or hourly basis when collecting data on-site. The use of a batch trigger in Azure Data Factory allows us to scalably process large volumes of data. We can ingest data into the batch layer at a rate that meets our specific needs, and schedule the trigger to execute at appropriate intervals to ensure that the data is processed and analyzed in a timely manner. The ability to adjust the schedule of the batch trigger allows us to fine-tune the performance of our pipeline and ensure that it is able to handle the volume and velocity of our data effectively.
Figure 6: Screenshot of the parameterization process for the stream and batch triggers to automate the pipeline for continuity
#### 3.2.2 Azure Data Factory Connection
The batch layer follows the same process as the speed layer for the Azure Data Factory connection. If an image is ingested into the batch folder, the batch trigger passes the batch parameter, which is used throughout the remainder of the pipeline to organize the data.
### Pre-process/Analyze
In the analysis of high-volume data, pre-processing is a vital step. Raw data from devices may contain inconsistencies and noise that can degrade the quality of results and decision-making insights. These issues are addressed through the cleansing, normalization, and reduction of data. Furthermore, the pre-processing of images integrates various techniques such as noise reduction, image enhancement, and feature extraction, which streamline decision-making and interpretation. We incorporate image compression into our pipeline, as it can significantly reduce the size of images, lowering both storage requirements and the cost of processing large volumes of data. By applying image compression methods that eliminate redundancy in the image data, an image can be represented with fewer bits, resulting in a smaller file size. However, there are trade-offs between image quality and compression ratio, so it is imperative to select a compression algorithm that does not excessively degrade quality. These steps are crucial in the context of big data pipelines, where storage space is often a limiting factor.
#### 3.3.1 Databricks Connection to Data Lake
To facilitate pre-processing, we incorporate Databricks, a cloud-based platform that integrates Apache Spark, a powerful open-source data processing engine (Zaharia et al., 2012). Apache Spark is optimized to handle substantial amounts of data quickly and efficiently, making it ideal for real-time data processing applications. It boasts the capability for in-memory processing, rendering it significantly more efficient than disk-based systems, especially when working with vast amounts of data, resulting in reduced computation time. Moreover, Apache Spark supports parallel processing, permitting it to divide data into smaller chunks and process them simultaneously to enhance performance even further if needed.
In the realm of pre-processing tasks, a popular alternative to Databricks is the open-source big data processing framework, Hadoop. Hadoop utilizes the MapReduce programming model, which has been shown to be challenging to work with in comparison to the Spark engine utilized by Databricks (Gonzalez et al., 2014). Furthermore, Hadoop requires significant configuration and maintenance efforts to set up and run properly, whereas Databricks offers a user-friendly interface and requires less infrastructure (Zaharia et al., 2012). In addition, Databricks provides a range of additional tools and features, such as integration with data storage platforms like Amazon S3 and Azure Blob Storage, as well as the ability for data scientists and analysts to collaborate through notebooks and dashboards (Databricks, 2021), making it a more convenient platform for handling big data.
For our experiments, we configured our Databricks cluster to use Databricks Runtime version 11.3 LTS. The worker and driver type is Standard DS3 v2, which has 14 GB of memory and 4 cores. We set the range of workers to be between 2 and 8 and enabled autoscaling, whereby the cluster configures the appropriate number of workers based on the load. Once the data is ingested into the Data Lake, we compress each image by re-encoding it as a JPEG at 30% quality; a minimal sketch of this step is given below. This pre-processing stage is flexible and scalable, and other pre-processing and data transformation techniques, such as image slicing, can also be implemented here. Furthermore, we verified that the image dimensions were valid inputs for our model.
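The following is a minimal sketch of the compression and validation step, assuming OpenCV is available on the cluster; the expected dimensions and file paths are illustrative.

```python
import cv2

# Height, width, channels of our sliced input frames (530 x 144 RGB images)
EXPECTED_SHAPE = (144, 530, 3)

def compress_and_validate(src_path: str, dst_path: str, quality: int = 30) -> bool:
    """Re-encode an image as JPEG at the given quality and verify its dimensions."""
    img = cv2.imread(src_path)              # returns None if the file is unreadable
    if img is None or img.shape != EXPECTED_SHAPE:
        return False                        # skip corrupt or mis-sized frames
    # IMWRITE_JPEG_QUALITY takes 0-100; 30 trades visual quality for file size
    return cv2.imwrite(dst_path, img, [cv2.IMWRITE_JPEG_QUALITY, quality])
```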
The next step is to configure the Databricks linked service connection. The Databricks linked service connection in Azure is a way to connect to a Databricks workspace from Azure. It allows users to easily access and integrate data stored in their Databricks workspace with Azure Data Factory. When configuring the Databricks linked service, we enter the Databricks workspace URL and an authentication access token. We first selected the option of having a new job cluster created any time an ingestion trigger fired.
In order to improve the efficiency of the data processing pipeline, we decided to switch from creating a new job cluster for each ingestion trigger to using existing interactive clusters. This approach reduces the time required for the pipeline to start processing, as the interactive cluster is already active when new data is ingested; it saves, on average, the 3 minutes otherwise spent restarting a new job cluster for each single trigger.
Figure 7: Screenshot of Azure Data Factory when setting up the Databricks linked service connection to ADF. The required credentials are the Databricks workspace URL, the authentication type, and the access token. Initially, we created a new job cluster; however, based on the results, we shifted to an existing interactive cluster and therefore input the existing cluster ID.
However, when a file is first uploaded, there is a delay while the inactive interactive cluster starts up. To minimize this delay, we configured the interactive cluster to terminate after no activity has been detected for a period of 20 minutes; this setting can be easily adjusted to meet the needs of different use cases. As a result, the first image ingestion takes 3 minutes while the cluster starts, but subsequent image ingestions showed a significant reduction in connection time, with a duration of fewer than 10 seconds.
To enable Databricks to access the Azure Data Factory, we mounted the Data Lake Storage Gen2 (ADLS Gen2) file system to the Databricks workspace. This allows us to use standard file system operations to read and write files in the ADLS Gen2 file system as if it were a local file system. Mounting the ADLS Gen2 file system to Databricks enables us to access data stored in ADLS Gen2 from Databricks notebooks and jobs, and facilitates integration between Databricks and other tools and systems that use ADLS Gen2 as a storage backend; a minimal mounting sketch is shown below. Furthermore, the parameterization of the input folder (batch vs. stream folder) allows the Databricks notebook to use this dynamic input to read from the correct data lake folder.
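As a minimal sketch (to be run inside a Databricks notebook), the mount could look like the following; the storage account, container, tenant, and service-principal values are placeholders.

```python
# OAuth configuration for ADLS Gen2 access via a service principal;
# the secret is read from a Databricks secret scope rather than hard-coded.
configs = {
    "fs.azure.account.auth.type": "OAuth",
    "fs.azure.account.oauth.provider.type":
        "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider",
    "fs.azure.account.oauth2.client.id": "<application-id>",
    "fs.azure.account.oauth2.client.secret":
        dbutils.secrets.get(scope="pipeline", key="sp-secret"),
    "fs.azure.account.oauth2.client.endpoint":
        "https://login.microsoftonline.com/<tenant-id>/oauth2/token",
}

dbutils.fs.mount(
    source="abfss://stream@<storage-account>.dfs.core.windows.net/",
    mount_point="/mnt/stream",
    extra_configs=configs,
)
# Afterwards the container reads like a local path, e.g. /mnt/stream/frame001.jpg
```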
### AI Model/APIs
In this section, we describe the process of deploying the trained AI model.
#### 3.4.1 Deployment with Kubernetes
To deploy a trained Object Detection model in the pipeline, we utilized Azure Kubernetes Service (AKS) (Corporation, 2021). Microsoft Azure's AKS simplifies the process of deploying and scaling containerized applications on the cloud platform through its managed Kubernetes service. By leveraging the benefits of the open-source Kubernetes container orchestration platform, AKS creates a consistent and predictable environment for managing these applications. With features like automatic bin packing, load balancing, and secret and configuration management, AKS enhances the management of containerized applications. The service achieves this by creating and managing clusters of virtual machines that run these applications, making the deployment and scaling process easier and more efficient.
Kubernetes is highly scalable, and its platform allows for the management of applications across multiple nodes in a cluster, making it a versatile solution for managing containerized applications in the cloud (Corporation, 2021). Among its features, automatic bin packing schedules containers onto the most appropriate nodes in a cluster, maximizing cost efficiency; load balancing distributes incoming traffic across multiple replicas of an application to handle high traffic volumes; and secret and configuration management provides secure, encrypted storage for sensitive data such as passwords and API keys, improving application security.
This cluster uses a Standard D3 v2 virtual machine, which has 4 cores, 14 GB of RAM, and 200 GB of storage. We retrieve the scoring Python script from the best AutoML YOLOv5 run and use it to deploy the model as an AKS web service. To assess the performance of our machine learning model, we utilized a Python script containing code for loading the trained model, reading in data, making predictions using the model, and calculating performance metrics such as accuracy and precision. It also includes provisions for saving the predictions made by the model and the calculated performance metrics to a file or database for further analysis. By running this script on a separate dataset, known as the test dataset, we obtained an unbiased estimate of the model's performance and assessed its ability to generalize to new data. A simplified skeleton of such a scoring script is sketched below.
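The skeleton below follows the init/run contract used by Azure ML web services; the checkpoint name and the assumption that the checkpoint stores a full serialized model are illustrative, and the actual script generated by AutoML is more elaborate.

```python
import os, json
import torch

model = None

def init():
    """Called once when the AKS web service starts; loads the trained weights."""
    global model
    # AZUREML_MODEL_DIR points at the registered model's directory on the service
    model_path = os.path.join(os.getenv("AZUREML_MODEL_DIR", "."), "best.pt")
    model = torch.load(model_path, map_location="cpu")  # assumes a full model object
    model.eval()

def run(raw_data):
    """Called once per request; expects JSON with an image tensor, returns boxes."""
    data = torch.tensor(json.loads(raw_data)["image"])
    with torch.no_grad():
        boxes = model(data)             # bounding-box coordinates and scores
    return json.dumps({"boxes": boxes.tolist()})
```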
To enhance the capability of our object detection model in handling elevated workloads, it was deployed with autoscaling enabled. This allows for dynamic adjustment of computing resources, such as CPU cores and memory, in response to incoming requests. The initial configuration was set to 1 CPU core and 7 GB of memory. To secure the model, an authentication key system was implemented, requiring a unique key to be provided with each request; this ensures only authorized access to the model. Subsequent sections elaborate on the training process of the object detection model and its integration into the workflow.
#### 3.4.2 Azure Data Factory Connection
In order to optimize the efficiency of our pipeline, we made the decision to include the AKS connection credentials within the initial Databricks notebook where the data is pre-processed. This approach was chosen as an alternative to utilizing Azure's ADF web service option for the REST API connection in the Azure Data Factory, which would have required the creation of another Databricks notebook to draw the bounding boxes from the output of bounding box coordinates. By integrating the AKS connection credentials directly into the primary notebook, we were able to streamline the process while eliminating the need for an additional Databricks compute cluster and its cluster connection time. This avoided the added overhead of creating an additional notebook in ADF, which would have slowed down the pipeline. Overall, we have one Databricks notebook that conducts both the pre-processing and post-processing of data. Figure 8 illustrates the tasks of Databricks in the pipeline.
### Output
We developed an object detection model that is capable of identifying and counting cotton blooms in images. When the model is run on an input image, it returns the bounding box coordinates of any cotton blooms that it detects. To visualize the results of the model, we retrieve these bounding box coordinates and use them to create visual bounding boxes over the input image. This output image, which shows the detected cotton blooms overlaid on the original image, is
then stored in a blob storage account. By using the Blob storage REST API, we can easily send this output image to any other device for further processing or analysis, allowing the model's output to scale with downstream needs. A sketch of this post-processing and upload step is given below.
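As a minimal sketch of the drawing and upload step, assuming OpenCV and the azure-storage-blob package; the container name and connection-string handling are illustrative.

```python
import cv2
from azure.storage.blob import BlobServiceClient

def annotate_and_upload(img_path, boxes, conn_str, container="output"):
    """Draw the model's boxes on the image and push the result to Blob storage."""
    img = cv2.imread(img_path)
    for (x1, y1, x2, y2) in boxes:                  # corner coordinates from the model
        cv2.rectangle(img, (int(x1), int(y1)), (int(x2), int(y2)), (0, 255, 0), 2)
    ok, buf = cv2.imencode(".jpg", img)             # encode in memory, no temp file
    if not ok:
        raise RuntimeError(f"could not encode {img_path}")
    client = BlobServiceClient.from_connection_string(conn_str)
    blob = client.get_blob_client(container=container, blob=img_path.split("/")[-1])
    blob.upload_blob(buf.tobytes(), overwrite=True)
```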
## 4 Offline YOLOv5 Model Training
Object detection is a key task in computer vision, which involves identifying and locating objects of interest in images or video streams. One popular object detection model is YOLO (You Only Look Once), first introduced by Redmon in 2015 (Redmon et al., 2015). Since then, the YOLO model has undergone several revisions, and one key difference between YOLOv5 and its predecessors lies in the training framework: while previous versions of YOLO, including YOLOv4, were trained using the Darknet framework, YOLOv5 is implemented in PyTorch. This allows YOLOv5 to benefit from the advanced optimization and acceleration techniques of the PyTorch ecosystem, which can improve the model's performance and speed. YOLOv5 also introduces several other improvements and new features compared to YOLOv4, including a more efficient network architecture and support for a wider range of input sizes (Bochkovskiy et al., 2020).
### Data Labeling
We utilized AutoML and Azure Machine Learning Studio to train a YOLOv5 model for cotton bloom detection. AutoML automates the process of selecting and training the most suitable machine learning model for a given dataset. It allows users to easily train, evaluate, and deploy machine learning models without the need for extensive programming knowledge or machine learning expertise (Wachs and Kalyansundaram, 2021). To train the YOLOv5 model using AutoML, we first set up a connection between our data lake (which contained the images used for training) and Azure Machine Learning Studio. Azure Machine Learning Studio is a cloud-based platform that provides tools for developing, deploying, and managing machine learning models in Azure (Murphy, 2012). Once the connection was established, we were able to use AutoML and Azure Machine Learning Studio to train and evaluate the YOLOv5 model on our dataset. The platform provided a range of tools and resources for optimizing the model's performance, including the ability to tune hyperparameters, apply data augmentation techniques, and evaluate the model's performance using a variety of metrics which will be further discussed.
After creating the Machine Learning Studio workspace, we created a Datastore that connects to our Data Lake storage container, and from the Datastore we created a Data Asset. Datastores and data assets are resources in Azure Machine Learning Studio that allow us to store and access data for machine learning experiments. We created the Data Asset from 1,300 images saved in the Data Lake that had been compressed beforehand. We deliberately reduced the quality of the images via compression prior to training so that the model would be more accurate within the full pipeline, which likewise compresses images before they are sent to the AI model. Using the Azure ML Studio Labeler tool, which is part of the Azure Machine Learning platform, we annotated the 1,300 images with bounding boxes that identify the location and size of the cotton blooms in each image. After the annotations were complete, we exported them in AzureML Dataset format. Figure 9 shows an example of annotating one image through Azure's Image Labeler tool.
### Model Hyperparameters and Training
In this work, the model utilized 80 percent of the dataset for training and 20 percent for validation. The YOLOv5 model was trained using a learning rate of 0.01, the large model size, which contains 46.5 million trainable parameters, and a total of 70 epochs. However, the training process was terminated early, at 30 epochs in our experiment, when the mean average precision (mAP) metric stopped improving. The number of epochs used for training is important, as it determines the number of times the model sees the training data and can influence the model's performance. Figure 10 shows results from our hyperparameter tuning.
One key aspect of the training process was the use of the Intersection over Union (IOU) threshold, which is a measure of the overlap between the predicted bounding boxes and the ground truth bounding boxes (see Figure 11). The IOU threshold was set to 0.55 for both precision and recall, which means that a predicted bounding box was considered correct if the overlap with the ground truth bounding box was greater than or equal to 0.55. The use of the IOU threshold is important because it allows the model to be evaluated
Figure 8: The figure illustrates the tasks within the Databricks notebook: compression (pre-processing), connection to the AI model, and creation of the output with results (post-processing)
Figure 9: Example of cotton bloom bounding box annotations for one cotton field sliced image.
using a standard metric to compare the performance of different models.
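As a short illustration of the metric, the IOU of two axis-aligned boxes can be computed as follows; the corner-coordinate box format is an assumption.

```python
def iou(box_a, box_b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)       # overlap area (0 if disjoint)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# A prediction counts as correct at our threshold when iou(pred, truth) >= 0.55
```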
In addition to the IOU threshold, the training process also involved setting the batch size to 10, whereby the model parameters were updated after each batch of 10 images. Training was performed using a computing cluster with 6 cores, 1 GPU, 56 GB of RAM, and 360 GB of disk space, and the overall training process took 1 hour and 10 minutes to complete.
## 5 Finetuning and Results
### YOLOv5 Model
In this work, the trained YOLOv5 AutoML model achieved a mean average precision (mAP) score of 0.96. The mAP score is a metric commonly used to evaluate the performance of object detection models; it measures the average precision across all classes of objects in the dataset and takes into account the overall precision and recall of the model. Precision is a measure of the accuracy of the model's predictions and is defined as the number of correct predictions divided by the total number of predictions. In comparison, recall measures the model's ability to capture all relevant instances in its predictions; it is determined by dividing the number of correct predictions by the total number of instances in the actual data (LeCun et al., 2015).
In this case, the YOLOv5 model had a precision of 0.84 and a recall of 0.99 when using an IOU validation threshold of 0.55. The F1 score, the harmonic mean of precision and recall, was also calculated and found to be 0.904. The importance of precision, recall, and the F1 score lies in their ability to provide a comprehensive evaluation of the model's performance: high precision is essential for ensuring that the model does not produce false positives, high recall is essential for ensuring that the model does not produce false negatives, and the F1 score, which takes both into account, provides a balanced evaluation (LeCun et al., 2015). The formulas below use the counts of True Positives (\(TP\)), False Positives (\(FP\)), and False Negatives (\(FN\)):
\[\text{precision}=\frac{TP}{TP+FP} \tag{1}\]
\[\text{recall}=\frac{TP}{TP+FN} \tag{2}\]
\[\text{F1 score}=\frac{2\cdot\text{precision}\cdot\text{recall}}{\text{precision}+\text{recall}} \tag{3}\]
The model itself returns the bounding box coordinates. When integrating the model into the pipeline, we conduct post-processing to draw and visualize the bounding boxes on top of the input image. Figure 12 displays the output of the cotton bloom detection from the AI model.
### Azure Data Factory
In order to connect the trained AI model into the rest of the Azure Data Factory Pipeline, we first created a Standard Kubernetes cluster. We then deployed the model into Kubernetes which provides a REST API to interact with.
#### 5.2.1 REST API vs. Blob Storage Ingestion
Previously, we ingested data from Azure Blob Storage into Azure Data Lake to demonstrate the feasibility of ingesting data from external IoT devices. To assess the performance of the ingestion process, we conducted an experiment using image data and the REST API connection provided by the Data Lake. Initially, we utilized Postman, a popular API development and testing tool commonly used for testing and debugging API applications, to conduct a synchronous request and observed a substantial improvement in ingestion time. Postman can be used to make both synchronous and
Figure 11: Figure illustrates the definition of IOU which takes into account the area of overlap and the area of union. The higher the area of overlap between the detected bounding box and the ground truth box, the higher the IOU.
Figure 12: Example of pipeline output after post-processing and adding bounding boxes for cotton bloom detection visualization.
Figure 10: Table displaying results from tuning hyperparameters. The F1 score and mAP were highest when utilizing the large YOLOv5 model with a threshold of 0.55. We also tuned the number of epochs, but AutoML would terminate after 30 epochs due to no significant improvement.
asynchronous requests (Fielding, 2000). This implementation reduced the stream ingestion time from 12 seconds to 150 ms, which highlights not only the applicability of the REST API connection but also its efficiency in speeding up the ingestion process.
While Postman is a useful tool for testing and debugging APIs, it is not the only option for making HTTP requests to devices. To scale up for batch processing, we adopted asynchronous Python code for the HTTP connection. The original ingestion from blob storage to the data lake took around 2 minutes for 9,000 images (157.6 MB). With the optimized REST API and asynchronous Python code, the batch ingestion process completed in just 8.62 seconds, a marked improvement over the previous ingestion time.
For future purposes, one can use the REST API and HTTP connection with other devices and systems (mobile devices or IoT devices). The pipeline is compatible with the integration of machine learning models into a wide range of applications and systems.
#### 5.2.2 _Kubernetes_
We also optimized our Kubernetes configuration by increasing the number of nodes and node pools. When testing on a smaller batch of 100 images, using 5 nodes rather than 3 nodes in the Kubernetes cluster decreased runtime from 32 minutes to 28 minutes, and increasing the number of node pools from 1 to 2 decreased runtime from 28 minutes to 22 minutes.
#### 5.2.3 _Asynchronous vs. Synchronous Processing_
Asynchronous programming allows the execution of multiple tasks to run concurrently without waiting for the completion of prior tasks. The asyncio library, a built-in library in Python, provides the infrastructure for writing asynchronous code with the help of the async and await keywords (Foundation, 2023). Additionally, the aiohttp library enables asynchronous support for HTTP requests, allowing for concurrent processing of multiple requests without waiting for responses (aio libs, 2023b).
The aiofiles library, on the other hand, offers asynchronous support for file operations such as reading and writing files. This is useful in programs that need to perform numerous file operations simultaneously, such as ours, which handles a significant number of images (aio-libs, 2023a). Our pipeline runtime for processing 9,000 batch images was approximately 3 hours and 50 minutes with synchronous code. After optimizing the pipeline with asynchronous code, the execution time was reduced to 34 minutes, a substantial improvement that demonstrates the benefit of asynchronous processing in our pipeline; a combined sketch of the pattern is given below.
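A minimal sketch of the asynchronous ingestion pattern with asyncio, aiohttp, and aiofiles follows; the endpoint URL, concurrency cap, and payload format are illustrative.

```python
import asyncio
import aiofiles
import aiohttp

async def post_image(session, url, path):
    """Read one image off disk and POST it without blocking the event loop."""
    async with aiofiles.open(path, "rb") as f:
        payload = await f.read()
    async with session.post(url, data=payload) as resp:
        return path, resp.status

async def ingest_all(url, paths, concurrency=50):
    sem = asyncio.Semaphore(concurrency)        # cap simultaneous in-flight requests
    async with aiohttp.ClientSession() as session:
        async def bounded(path):
            async with sem:
                return await post_image(session, url, path)
        return await asyncio.gather(*(bounded(p) for p in paths))

# results = asyncio.run(ingest_all("https://<endpoint>/ingest", image_paths))
```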
### Cost
Although Azure is not an open-source environment, its pay-as-you-go service charges only for resources that are actually used. With Microsoft Azure, we can spin up a 100-node Apache Spark cluster in less than ten minutes and pay only for the time the job runs on that cluster (Rawar and Narain, 2018).
We used a compute cluster with GPU infrastructure for the YOLOv5 training, which costs $1.14 per hour; the total time spent training was 1 hour and 6 minutes. The total cost breaks down as follows: the virtual machines cost about $3.56, storage cost $2.18, containers cost $1.85, the virtual network cost $1.33, the Azure Databricks connection cost $0.30, and Azure Data Factory added $0.30. The Kubernetes cluster deployment was the most costly item, at roughly $70 per month.
## 6 Future Directions
The pipeline can be further optimized by updating the computing clusters with higher computing power and incorporating GPU processing to reduce the total processing time. Moreover, the pipeline currently takes approximately three minutes to reactivate the terminated Databricks interactive cluster, which could be improved through the use of instance pools.
A bottleneck encountered during the data processing was the Internet connection needed to send images to the Kubernetes cluster through the REST API. To address this issue, we can utilize Databricks MLflow and load the model within the Databricks environment itself rather than creating a separate Internet connection; referring back to Figure 8, the bottleneck sits at step 3, where the cluster must open an Internet connection to the REST API URL. Furthermore, if we wanted to scale up with more nodes, the price of Kubernetes could rise to as much as $1,000 per month, which further suggests the benefit of utilizing Databricks MLflow and loading the model locally rather than using Kubernetes' REST API for the AI model connection. Another bottleneck is a limitation of OpenCV when drawing bounding boxes. Despite our efforts to optimize with asynchronous Python code, the drawing step is CPU-bound rather than I/O-bound, so there is no waiting time for asynchronous code to hide; bounding boxes are effectively drawn sequentially even though the rest of the code executes concurrently. To overcome this issue, we can incorporate PySpark, a Python library for distributed data processing using Apache Spark. PySpark allows us to leverage the power of Spark, a distributed computing platform that enables fast and flexible data processing, and it is compatible with our pipeline because our Databricks Runtime version 11.3 LTS includes Apache Spark 3.3.0 and Scala 2.12. With PySpark, we can employ the parallel computing power of our Databricks cluster to enhance the speed and efficiency of our data processing operations; a minimal sketch is given below.
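As a minimal sketch of distributing the drawing step with PySpark on the existing Databricks cluster; the `records` list of (image path, predicted boxes) pairs, the path convention, and the partition count are illustrative.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

def draw_and_save(record):
    """Runs on a worker: draw one image's boxes with OpenCV and write it out."""
    path, boxes = record
    import cv2                                   # import on the worker process
    img = cv2.imread(path)
    for (x1, y1, x2, y2) in boxes:
        cv2.rectangle(img, (int(x1), int(y1)), (int(x2), int(y2)), (0, 255, 0), 2)
    cv2.imwrite(path.replace("/input/", "/output/"), img)

# records: list of (image_path, predicted_boxes) pairs gathered from the model
spark.sparkContext.parallelize(records, numSlices=64).foreach(draw_and_save)
```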
Overall, these optimization strategies can be used to scale up the pipeline and decrease the total processing time, making it more efficient and effective for handling much larger datasets.
## 7 Conclusion
This study has presented a new big data pipeline for cotton bloom detection using a Lambda architecture and Microsoft Azure's cloud computing resources. The pipeline performs data preprocessing, object detection using a YOLOv5 neural network trained through Azure AutoML, and visualization of object detection bounding boxes. The results demonstrate the high performance of the neural network, with a mean average precision (mAP) score of 0.96 and an optimized runtime of 34 minutes when evaluated on over 9,000 images. This work showcases the scalability of the presented pipeline as a solution for deep learning-based object detection and emphasizes the potential of employing cloud computing resources for big data processing in precision agriculture. The study advances the field by demonstrating the big data pipeline implementation of a new method for cotton bloom detection from images collected on a cotton farm, and the results suggest a scalable Lambda architecture that can be implemented for big data processing using Azure resources.
## Acknowledgement
The authors would like to thank Canicius Mwitta for his assistance in setting up the experiments and data collection.
|
2303.14878 | GPT-PINN: Generative Pre-Trained Physics-Informed Neural Networks toward
non-intrusive Meta-learning of parametric PDEs | Physics-Informed Neural Network (PINN) has proven itself a powerful tool to
obtain the numerical solutions of nonlinear partial differential equations
(PDEs) leveraging the expressivity of deep neural networks and the computing
power of modern heterogeneous hardware. However, its training is still
time-consuming, especially in the multi-query and real-time simulation
settings, and its parameterization often overly excessive. In this paper, we
propose the Generative Pre-Trained PINN (GPT-PINN) to mitigate both challenges
in the setting of parametric PDEs. GPT-PINN represents a brand-new
meta-learning paradigm for parametric systems. As a network of networks, its
outer-/meta-network is hyper-reduced with only one hidden layer having
significantly reduced number of neurons. Moreover, its activation function at
each hidden neuron is a (full) PINN pre-trained at a judiciously selected
system configuration. The meta-network adaptively ``learns'' the parametric
dependence of the system and ``grows'' this hidden layer one neuron at a time.
In the end, by encompassing a very small number of networks trained at this set
of adaptively-selected parameter values, the meta-network is capable of
generating surrogate solutions for the parametric system across the entire
parameter domain accurately and efficiently. | Yanlai Chen, Shawn Koohy | 2023-03-27T02:22:09Z | http://arxiv.org/abs/2303.14878v3 | GPT-PINN: Generative Pre-Trained Physics-Informed Neural Networks toward non-intrusive Meta-learning of parametric PDEs+
###### Abstract
Physics-Informed Neural Network (PINN) has proven itself a powerful tool to obtain the numerical solutions of nonlinear partial differential equations (PDEs) leveraging the expressivity of deep neural networks and the computing power of modern heterogeneous hardware. However, its training is still time-consuming, especially in the multi-query and real-time simulation settings, and its parameterization is often overly excessive. In this paper, we propose the Generative Pre-Trained PINN (GPT-PINN) to mitigate both challenges in the setting of parametric PDEs. GPT-PINN represents a brand-new meta-learning paradigm for parametric systems. As a network of networks, its outer-/meta-network is hyper-reduced with only one hidden layer having a significantly reduced number of neurons. Moreover, its activation function at each hidden neuron is a (full) PINN pre-trained at a judiciously selected system configuration. The meta-network adaptively "learns" the parametric dependence of the system and "grows" this hidden layer one neuron at a time. In the end, by encompassing a very small number of networks trained at this set of adaptively-selected parameter values, the meta-network is capable of generating surrogate solutions for the parametric system across the entire parameter domain accurately and efficiently.
## 1 Introduction
The need to efficiently and accurately understand the behavior of the system under variation of a large number of underlying parameters is ubiquitous in _many query_ type of applications e.g. uncertainty quantification, (Bayesian) inverse problems, data assimilation or optimal control/design. The parameters of interest may include material properties, wave frequencies, uncertainties, boundary conditions, the shape of the domain, etc. A rigorous study of the behavior of the system and its dependence on the parameters requires thousands, perhaps millions of simulations of the underlying partial differential equations (PDE). Each accurate and robust simulation of the underlying complex physical phenomena is often time consuming, and the massively repeated simulations needed become computationally challenging, if not entirely untenable, when using traditional numerical methods. Two techniques stand out in addressing this challenge, the more traditional and rigorous reduced order modeling and the more nascent deep neural networks.
The reduced basis method (RBM) [27, 35, 15, 41, 20, 13] belongs to the first category. It was developed to generate a computational emulator for parameterized problems whose error compared
to the full problem is certifiable; this rigorous accuracy guarantee is a relatively unique ability among reduced order model algorithms. Once generated, the RBM emulator, using the results of the original method at carefully preselected parameter values, can typically compute an accurate solution with orders-of-magnitude less computational cost than the original method. This is achieved through an offline-online decomposition, where the parameter values are selected and reduced solution space constructed in an offline preparation/training phase via a greedy algorithm, allowing the rapid online computation of the surrogate solution for any parameter values.
Deep learning algorithms are increasingly popular in this context as well. Using data generated by many queries of the underlying system, one can train a deep neural network (DNN, a highly nonlinear function composed of layers of parameterized affine linear functions and simple nonlinear operations) to provide a surrogate for the parameter to solution map. Physics-informed neural networks (PINNs), popularized by [38], adopt DNNs to represent the approximate solutions of PDEs. Unlike typical data-driven deep learning methods that do not build in physical understanding of the problem, PINNs incorporate a strong physics prior (i.e. PDEs) that constrains the output of the DNN. The key advantages of PINNs, over traditional numerical solvers, include that they are able to solve the PDE without discretizing the problem domain, that they define a function over the entire continuous spatial-temporal domain, and that they can rely on automatic differentiation [3, 26] toward residual minimization. Thanks also to the enormous advances in computational capabilities in recent years [1, 40], PINNs have emerged as an increasingly popular alternative to traditional numerical methods for PDEs.
However, issues of PINNs remain [43]. Among them, vanilla PINNs are usually significantly slower than classic numerical methods due to the training of the usually over-parameterized neural network. The main purpose of this paper is to use strategies inspired by the classical and mathematically rigorous RBM techniques to significantly shrink the size of PINNs and accelerate solving parametric PDEs with PINNs. Just like RBM, the proposed solvers have an initial investment cost. However, they are capable of providing significant computational savings in problems where a PDE must be solved repeatedly or in real time, thanks to the fact that their marginal cost is orders of magnitude lower than that of each PINN solve.
The jump from the vanilla PINN to the proposed Generative Pre-Trained PINN (GPT-PINN) parallels that from the traditional Finite Element Method (FEM) to RBM. To the best of our knowledge, it represents a first-of-its-kind meta-learning approach for parametric systems. Its infrastructure is a network of networks. The inner networks are the full PINNs. Its outer-/meta-network is hyper-reduced with only one hidden layer where the inner networks are pre-trained and serve as activation functions. The meta-network adaptively "learns" the parametric dependence of the system and "grows" this hidden layer one neuron/network at a time. In the end, by encompassing a very small number of networks trained at this set of adaptively-selected parameter values, the meta-network is capable of generating surrogate solutions for the parametric system across the entire parameter domain accurately and efficiently, with a cost independent of the size of the full PINN. The design of the network architecture represents the first main novelty of this paper. To the best of our knowledge, this is the first time whole (pre-trained) networks are used as the activation functions of another network. The adoption of the training loss of the meta-network as an error indicator, inspired by the residual-based error estimation for traditional numerical solvers such as FEM, represents the second main novelty.
The rest of the paper is organized as follows. In Section 2, we review the RBM and PINN and also remark on recent efforts about accelerating PINNs in the parametric PDE setting. In Section 3, we detail our design of GPT-PINN. We present numerical results on three parametric PDEs in Section 4 demonstrating the accuracy and efficiency of the proposed GPT-PINN. Finally, concluding remarks are given in Section 5.
## 2 Background
### 2.1 Reduced Basis Method
RBM is a linear reduction method that has been a popular option for rigorously and efficiently simulating parametric PDEs. Its hallmark feature is a greedy algorithm embedded in an offline-online decomposition procedure. The offline (i.e. training) stage is devoted to a judicious and error estimate-driven exploration of the parameter-induced solution manifold. It selects a number of representative parameter values via a mathematically rigorous greedy algorithm [4]. During the online stage, a _reduced_ solution is sought in the terminal surrogate space for each unseen parameter value. Moreover, unlike other reduction techniques (e.g. proper orthogonal decomposition (POD)-based approaches), the number of full order inquiries RBM takes offline is minimum, i.e. equal to the dimension of the surrogate space. To demonstrate the main ideas, we consider a generic parameterized PDE as follows,
\[\mathcal{F}(u;\mathbf{x},\boldsymbol{\mu})=f,\quad\mathbf{x}\in\Omega\subseteq\mathbb{R}^{d}. \tag{1}\]
Here \(\mathcal{F}\) encodes a differential operator parameterized via \(\boldsymbol{\mu}\in\mathcal{D}\subset\mathbb{R}^{d_{s}}\) together with the necessary boundary and initial conditions. The parameter can represent equation coefficients, initial values, source terms, or uncertainties in the PDE for the task of uncertainty quantification, etc. \(\mathcal{F}\) can depend on the solution and its (space- and time-) derivatives of various orders. We assume that we have available a numerical solution \(u(\mathbf{x};\boldsymbol{\mu})\in X_{h}\) obtained by a high fidelity solver, called Full Order Model (FOM) and denoted as \(\text{FOM}(\boldsymbol{\mu},X_{h})\), where \(X_{h}\) is the discrete approximation space to which the numerical solution \(u\) belongs.
A large number of queries of \(u(\cdot;\boldsymbol{\mu})\) can be prohibitively expensive because the \(\text{FOM}(\boldsymbol{\mu},X_{h})\) has to be called many times. Model order reduction (MOR) aims to mitigate this cost by building efficient surrogates. One idea is to study the map
\[\boldsymbol{\mu}\mapsto u(\cdot,\boldsymbol{\mu})\in X_{h}\]
and devise an algorithm to compute an approximation \(u_{N}(\cdot,\mu)\) from an \(N\)-dimensional subspace \(X_{N}\) of \(X_{h}\), such that
\[u_{N}(\cdot,\boldsymbol{\mu})\approx u(\cdot,\boldsymbol{\mu})\text{ for all }\boldsymbol{\mu}\in\mathcal{D}\]
This reduced order model (ROM) formulation at a given \(\boldsymbol{\mu}\) is denoted by \(\text{ROM}(\boldsymbol{\mu},X_{N})\), and is much cheaper to solve than \(\text{FOM}(\boldsymbol{\mu},X_{h})\) and can be conducted during the Online stage.
```
Input: A (random or given) \(\boldsymbol{\mu}^{1}\), training set \(\Xi\subset\mathcal{D}\).
Initialize: Solve \(\text{FOM}(\boldsymbol{\mu}^{1},X_{h})\) and set \(X_{1}=\text{span}\left\{u(\cdot;\boldsymbol{\mu}^{1})\right\}\), \(n=2\).
1: while stopping criteria not met, do
2:   Solve \(\text{ROM}(\boldsymbol{\mu},X_{n-1})\) for all \(\boldsymbol{\mu}\in\Xi\) and compute error indicators \(\Delta_{n-1}(\boldsymbol{\mu})\).
3:   Choose \(\boldsymbol{\mu}^{n}=\operatorname*{argmax}_{\boldsymbol{\mu}\in\Xi}\Delta_{n-1}(\boldsymbol{\mu})\).
4:   Solve \(\text{FOM}(\boldsymbol{\mu}^{n},X_{h})\) and update \(X_{n}=X_{n-1}\bigoplus\{u(\cdot;\boldsymbol{\mu}^{n})\}\).
5:   Set \(n\leftarrow n+1\).
6: end while
Output: Reduced basis set \(X_{N}\), with \(N\) being the terminal index.
```
**Algorithm 1** Classical RBM for parametric PDE (1): Offline stage
The success of RBM relies on the assumption that \(u(\cdot;\mathcal{D})\) has small _Kolmogorov \(N\)-width_[31], defined as
\[d_{N}\left[u\left(\cdot;\mathcal{D}\right)\right]\coloneqq\inf_{\begin{subarray}{ c}X_{N}\subset X_{h}\\ \dim X_{N}=N\end{subarray}}\ \sup_{\mathbf{\mu}\in\mathcal{D}}\ \inf_{v\in X_{N}}\left\|u(\cdot,\mathbf{\mu})-v \right\|_{X}.\]
A small \(d_{N}\) means that the solution to eq. (1) for any \(\mathbf{\mu}\) can be well-approximated from \(X_{N}\) that represents the outer infimum above. The identification of a near-infimizing subspace \(X_{N}\) is one of the central goals of RBM, and is obtained in the so-called Offline stage. RBM uses a greedy algorithm to find such \(X_{N}.\) The main ingredients are presented in Algorithm 1. The method explores the training parameter set \(\Xi\subset\mathcal{D}\) guided by an error estimate or an efficient and effective error indicator \(\Delta_{n}(\mathbf{\mu})\) and intelligently choosing the parameter ensemble \(\{\mathbf{\mu}^{n}\}_{n=1}^{N}\) so that
\[X_{N}\coloneqq\operatorname{span}\left\{u(\cdot;\mathbf{\mu}^{n})\right\}_{n=1}^{ N},\text{ and }u_{N}(\cdot,\mathbf{\mu})=\sum_{n=1}^{N}c_{n}(\mathbf{\mu})u(\cdot,\mathbf{\mu}^{n}). \tag{2}\]
An offline-online decomposed framework is key to realizing the speedup.
Equipped with this robust and tested greedy algorithm, physics-informed reduced solvers, rigorous error analysis, and certifiable convergence guarantees, RBM algorithms have become the go-to option for efficiently simulating parametric PDEs, are established in the modern scientific computing toolbox [32, 27, 35, 15], and have benefited from voluminous research with theoretical and algorithmic refinements [25, 30, 2, 24, 41, 20]. One particular such development was the empirical error indicator of the L1-based RBM by Chen and his collaborators [6], where \(\Delta_{n-1}(\mu)\) was taken to be \(\left\|\mathbf{c}(\mu)\right\|_{1}\). Here \(\mathbf{c}(\mu)\) is the coefficient vector of \(u_{N}(\cdot,\mu)\) under the basis \(\left\{u(\cdot;\mu_{n})\right\}_{n=1}^{N}\) and \(\left\|\cdot\right\|_{1}\) represents the \(\ell^{1}\)-norm. As shown in [6, 5], \(\mathbf{c}(\mu)\) represents a Lagrange interpolation basis in the parameter space, implying that the indicator \(\Delta_{n}\) represents the corresponding Lebesgue constant. The L1 strategy to select the parameter samples then controls the growth of the Lebesgue constants and hence is key toward accurate interpolation. This strategy, "free" to compute albeit not as traditionally rigorous, inspires the greedy algorithm of our GPT-PINN, to be detailed in Section 3.
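As a minimal sketch of this indicator (our own illustration, not code from [6]): given the reduced coefficient vectors at the training parameters, the greedy step simply picks the parameter with the largest \(\ell^{1}\)-norm.

```python
import numpy as np

def l1_indicator(c):
    """Delta(mu) = ||c(mu)||_1 for a reduced coefficient vector c(mu)."""
    return np.abs(np.asarray(c)).sum()

def next_parameter(coeffs_by_mu):
    """Greedy pick: the training parameter whose indicator is largest."""
    return max(coeffs_by_mu, key=lambda mu: l1_indicator(coeffs_by_mu[mu]))
```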
### 2.2 Deep neural networks
Deep neural networks (DNN) have seen tremendous success recently when serving as universal approximators to the solution function (or a certain quantity of interest (QoI) / observable) [17, 29, 9, 16, 36, 7, 46, 42]. First proposed in [17] based on an underlying collocation approach, this idea has been successfully used recently in different contexts; see [37, 39, 18, 23, 9, 8, 14] and references therein. For a nonparametrized version (e.g. eq. (1) with a fixed parameter value), we search for a neural network \(\Psi_{\mathsf{NN}}(\mathbf{x})\) which maps the coordinate \(\mathbf{x}\in\mathbb{R}^{d}\) to a surrogate of the solution, that is \(\Psi_{\mathsf{NN}}(\mathbf{x})\approx u(\mathbf{x}).\)
Specifically, for an input vector \(\mathbf{x}\), a feedforward neural network maps it to an output, via layers of "neurons" with layer \(k\) corresponding to an affine-linear map \(C_{k}\) composed with scalar non-linear activation functions \(\sigma\)[12]. That is,
\[\Psi_{\mathsf{NN}}^{\theta}(\mathbf{x})=C_{K}\circ\sigma\circ C_{K-1}\circ\cdots\circ\sigma\circ C_{1}(\mathbf{x}).\]
A justifiably popular choice is the _ReLU_ activation \(\sigma(z)=\max(z,0)\) that is understood as component-wise operation when \(z\) is a vector. For any \(1\leq k\leq K\), we define
\[C_{k}z_{k}=W_{k}z_{k}+b_{k},\quad\text{for }W_{k}\in\mathbb{R}^{d_{k+1}\times d _{k}},z_{k}\in\mathbb{R}^{d_{k}},b_{k}\in\mathbb{R}^{d_{k+1}}.\]
To be consistent with the input-output dimension, we set \(d_{1}=d\) and \(d_{K}=1\). We concatenate the tunable weights and biases for our network and denote them as
\[\theta=\{W_{k},b_{k}\},\quad\forall\ 1\leq k\leq K.\]
We have \(\theta\in\Theta\subset\mathbb{R}^{M}\) with \(M=\sum\limits_{k=1}^{K-1}(d_{k}+1)d_{k+1}\). We denote this network by
\[\mathsf{NN}(d_{1},d_{2},\cdots,d_{K}). \tag{3}\]
Learning \(\Psi^{\theta}_{\mathsf{NN}}(\mathbf{x})\) then amounts to generating training data and determining the weights and biases \(\theta\) by optimizing a loss function using this data.
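A minimal PyTorch sketch of the architecture \(\mathsf{NN}(d_{1},d_{2},\cdots,d_{K})\) with ReLU activations might look as follows; the layer widths in the usage line are illustrative.

```python
import torch.nn as nn

def make_nn(*dims):
    """Build NN(d_1, ..., d_K): affine maps C_k with sigma = ReLU in between."""
    layers = []
    for k in range(len(dims) - 1):
        layers.append(nn.Linear(dims[k], dims[k + 1]))   # C_k z = W_k z + b_k
        if k < len(dims) - 2:
            layers.append(nn.ReLU())                     # no activation after C_K
    return nn.Sequential(*layers)

net = make_nn(2, 40, 40, 1)   # e.g. input (x, t), scalar output u
```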
### 2.3 Physics-Informed Neural Network
We define our problem on the spatial domain \(\Omega\subset\mathbb{R}^{d}\) with boundary \(\partial\Omega\), and consider time-dependent PDEs with order of time-derivative \(k=1\) or \(2\).
\[\begin{split}\frac{\partial^{k}}{\partial t^{k}}u(\mathbf{x},t)+ \mathcal{F}\left[u(\mathbf{x},t)\right]&=0\qquad\qquad\mathbf{x} \in\Omega,\qquad\quad t\in[0,T],\\ \mathcal{G}(u)(\mathbf{x},t)&=0\qquad\qquad\mathbf{x} \in\partial\Omega,\qquad\quad t\in[0,T],\\ u(\mathbf{x},0)&=u_{0}(\mathbf{x})\qquad\quad \mathbf{x}\in\Omega.\end{split} \tag{4}\]
Here \(\mathcal{F}\) is a differential operator as defined in Section 2.1 and \(\mathcal{G}\) denotes a boundary operator. The goal of a PINN is to identify an approximate solution \(u(\mathbf{x},t)\) via a neural network \(\Psi^{\theta}_{\mathsf{NN}}(\mathbf{x},t)\). Learning \(\theta\in\mathbb{R}^{M}\) requires defining a loss function whose minimum \(\theta^{*}\) leads to \(\Psi^{\theta^{*}}_{\mathsf{NN}}\) approximating the solution to the PDE over the problem domain. PINN defines this loss as a sum of three parts, an integral of the local residual of the differential equation over the problem domain, that over the boundary, and the deviation from the given initial condition,
\[\mathcal{J}(u)=\int_{\Omega\times(0,T)}\left\|\frac{\partial^{k}}{\partial t^{k}}u(\mathbf{x},t)+\mathcal{F}(u)(\mathbf{x},t)\right\|_{2}^{2}\,d\mathbf{x}\,dt+\int_{\Omega}\left\|u(\mathbf{x},0)-u_{0}(\mathbf{x})\right\|_{2}^{2}\,d\mathbf{x}+\int_{\partial\Omega\times[0,T]}\left\|\mathcal{G}(u)(\mathbf{x},t)\right\|_{2}^{2}\,d\mathbf{x}\,dt.\]
During training, we sample collocation points \(\mathcal{C}_{o}\subset\Omega\times(0,T)\) from the space-time domain, \(\mathcal{C}_{\partial}\subset\partial\Omega\times[0,T]\) from the boundary, and \(\mathcal{C}_{i}\subset\Omega\) from the spatial domain, and use them to form an approximation of the true loss.
\[\begin{split}\mathcal{L}_{\text{PINN}}(\Psi^{\theta}_{\mathsf{NN}})=&\frac{1}{|\mathcal{C}_{o}|}\sum_{(\mathbf{x},t)\in\mathcal{C}_{o}}\left\|\frac{\partial^{k}}{\partial t^{k}}(\Psi^{\theta}_{\mathsf{NN}})(\mathbf{x},t)+\mathcal{F}(\Psi^{\theta}_{\mathsf{NN}})(\mathbf{x},t)\right\|_{2}^{2}+\\ &\frac{1}{|\mathcal{C}_{\partial}|}\sum_{(\mathbf{x},t)\in\mathcal{C}_{\partial}}\left\|\mathcal{G}(\Psi^{\theta}_{\mathsf{NN}})(\mathbf{x},t)\right\|_{2}^{2}+\frac{1}{|\mathcal{C}_{i}|}\sum_{\mathbf{x}\in\mathcal{C}_{i}}\left\|\Psi^{\theta}_{\mathsf{NN}}(\mathbf{x},0)-u_{0}(\mathbf{x})\right\|_{2}^{2}.\end{split} \tag{5}\]
When the training converges, we expect that \(\mathcal{L}_{\text{PINN}}(\Psi^{\theta}_{\mathsf{NN}})\) should be nearly zero.
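To make eq. (5) concrete, here is a minimal PyTorch sketch of the loss for a first-order-in-time PDE (\(k=1\)) with Dirichlet boundary data; the autograd-based time derivative and the callable `F` applying the spatial operator are standard PINN practice and our own illustrative assumptions, not the authors' code.

```
# A sketch of the collocation loss eq. (5) for k = 1 and Dirichlet data g_b.
# Collocation tensors have columns (x, t); F(u, xt) applies the spatial
# operator to u (via autograd) at the points xt.
import torch

def d(out, inp):  # derivative of a network output w.r.t. its input
    return torch.autograd.grad(out, inp, torch.ones_like(out), create_graph=True)[0]

def pinn_loss(psi, F, xt_o, xt_b, g_b, x_i, u0):
    xt_o = xt_o.clone().requires_grad_(True)
    u = psi(xt_o)
    u_t = d(u, xt_o)[:, 1:2]                              # time derivative
    interior = ((u_t + F(u, xt_o)) ** 2).mean()           # PDE residual over C_o
    boundary = ((psi(xt_b) - g_b) ** 2).mean()            # boundary term over C_b
    xt0 = torch.cat([x_i, torch.zeros_like(x_i)], dim=1)  # points (x, 0)
    initial = ((psi(xt0) - u0) ** 2).mean()               # initial term over C_i
    return interior + boundary + initial
```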
### Related work
The last two to three years have witnessed an increasing level of interest in metalearning of (parameterized or unparameterized) PDEs, due to the need for repeated simulations and the remarkable success of PINNs in their original or adaptive forms. Here we mention a few representative works and point out how our method differs from theirs.
**Metalearning via PINN parameters.** In [28], the authors adopt statistical (e.g. regression) and numerical (e.g. RBF/spline interpolation) methods to build a surrogate for the map from the PDE parameter \(\mathbf{\mu}\) to the PINN parameters (weights and biases, \(\theta\)). These are shown to be superior to MAML [10] for parameterized PDEs, which in turn was shown to outperform LEAP [11] in [34]; both are general-purpose meta-learning methods. However, the online solver (i.e. regression or interpolation) of [28] ignores the physics (i.e. the PDE). Moreover, the assumption is that the \(\mathbf{\mu}\)-variation of the PINN weights and biases is analogous to that of the PDE solution.
**DeepONet.** Aiming to learn nonlinear operators, a DeepONet [19] consists of two sub-networks: a branch net encoding the input function (e.g. a source/control term, as opposed to PDE coefficients) at a fixed number of sensors, and a trunk net encoding the locations of the output functions. It does not build in the physics represented by the dynamical system or PDE. Moreover, it is relatively data-intensive, having to scan the entire input function space, such as a Gaussian random field or an orthogonal polynomial space.
**Metalearning loss functions.** The authors of [33] focus on the definition of the PINN loss function. While their work is in the parameterized-PDE setting, the focus is a gradient-based approach to discover, during the offline stage, better PINN loss functions, parameterized by e.g. the weights of each term in the composite objective function. The end goal is therefore improved PINN performance, e.g. at unseen PDE parameters, due to the learned loss-function configuration.
**Metalearning initialization.** In [47], the authors study the use of a meta network, across the parameter domain of a 1-D arc model of plasma simulations, to better initialize the PINN at a new task (i.e. parameter value).
Our proposed GPT-PINN exploits the \(\mathbf{\mu}\)-variation of the PDE solution directly, which may feature a Kolmogorov N-width friendlier to MOR approaches (see Figure 1) than that of the weights and biases. This is, in part, because the weights and biases lie in a (much) higher dimensional space. Moreover, the meta-network of our approach, being a PINN itself, has the physics automatically built in, in the same fashion as the underlying PINNs. Lastly, our approach provides a surrogate solution at unseen parameter values in addition to a better initialization transferred from the sampled PINNs. Most importantly, our proposed GPT-PINN embodies mathematically rigorous and PDE-pertinent prior knowledge in its network architecture. This produces a strong inductive bias that usually leads to good generalization.
Figure 1: A motivating example showing that the solution matrix of a parametric PDE \(\{u(\cdot,\mathbf{\mu}^{n})\}_{n=1}^{200}\) exhibits fast decay in its singular values (indicating fast decay of the Kolmogorov N-width of the solution manifold) while the network weights and biases manifold \(\{\theta(\mathbf{\mu}^{n})\}_{n=1}^{200}\) does not.
## 3 The GPT-PINN framework
Inspired by the RBM formulation eq. (2), we design the GPT-PINN. Its two components and design philosophy are depicted in Figure 2. It is a hyper-reduced feedforward neural network \(\mathsf{NN}(2,n,1)\) with \(1\leq n\leq N\) (see eq. (3) for the notation), which we denote by \(\mathsf{NN}^{\mathsf{r}}(2,n,1)\). A key feature is that it has customized activation functions in the neurons of its sole hidden layer. These activation functions are nothing but the pre-trained PINNs for the PDEs instantiated at the parameter values \(\{\boldsymbol{\mu}^{1},\boldsymbol{\mu}^{2},\cdots,\boldsymbol{\mu}^{n}\}\) chosen by a greedy algorithm that is specifically tailored to PINNs but inspired by the classical one adopted by RBM in Algorithm 1. This design of the network architecture represents the first main novelty of the paper. To the best of our knowledge, this is the first time a whole (pre-trained) network is used as the activation function of a single neuron.
### The online solver of GPT-PINN
We first present the online solver, i.e. the training of the reduced network \(\mathsf{NN}^{\mathsf{r}}(2,n,1)\), for any given \(\boldsymbol{\mu}\). With the next subsection detailing how we "grow" the GPT-PINN offline from \(\mathsf{NN}^{\mathsf{r}}(2,n,1)\) to \(\mathsf{NN}^{\mathsf{r}}(2,n+1,1)\), we obtain a strategy for adaptively generating the terminal GPT-PINN, \(\mathsf{NN}^{\mathsf{r}}(2,N,1)\). Indeed, given the simplicity of the reduced network, no backpropagation is needed to train the weights \(\{c_{1}(\boldsymbol{\mu}),\cdots,c_{n}(\boldsymbol{\mu})\}\). The reason is that the loss function, similar to eq. (5), contains \(\{c_{1}(\boldsymbol{\mu}),\cdots,c_{n}(\boldsymbol{\mu})\}\) directly and explicitly, thanks to the reduced network structure of GPT-PINN. In fact, we denote by \(\Psi_{\mathsf{NN}}^{\theta^{i}}(x,t)\) the PINN approximation of the PDE solution when
Figure 2: The GPT-PINN architecture. A hyper-reduced network adaptively embedding pre-trained PINNs at the nodes of its sole hidden layer. It then allows a quick online generation of a surrogate solution at any given parameter value.
\(\mathbf{\mu}=\mathbf{\mu}^{i}\). Given that \(u_{n}(\mathbf{x},t;\mathbf{\mu})\approx\sum_{i=1}^{n}c_{i}(\mathbf{\mu})\Psi_{\mathsf{NN} }^{\theta^{i}}(x,t)\), we have that
\[\begin{split}\mathcal{L}_{\mathrm{PINN}}^{\mathrm{GPT}}(\mathbf{c}(\mathbf{\mu}))&=\frac{1}{|\mathcal{C}_{o}^{r}|}\sum_{(\mathbf{x},t)\in\mathcal{C}_{o}^{r}}\left\|\frac{\partial^{k}}{\partial t^{k}}\left(\sum_{i=1}^{n}c_{i}(\mathbf{\mu})\Psi_{\mathsf{NN}}^{\theta^{i}}\right)(\mathbf{x},t)+\mathcal{F}\left(\sum_{i=1}^{n}c_{i}(\mathbf{\mu})\Psi_{\mathsf{NN}}^{\theta^{i}}\right)(\mathbf{x},t)\right\|_{2}^{2}+\\ &\frac{1}{|\mathcal{C}_{\partial}^{r}|}\sum_{(\mathbf{x},t)\in\mathcal{C}_{\partial}^{r}}\left\|\mathcal{G}\left(\sum_{i=1}^{n}c_{i}(\mathbf{\mu})\Psi_{\mathsf{NN}}^{\theta^{i}}\right)(\mathbf{x},t)\right\|_{2}^{2}+\frac{1}{|\mathcal{C}_{i}^{r}|}\sum_{\mathbf{x}\in\mathcal{C}_{i}^{r}}\left\|\sum_{i=1}^{n}c_{i}(\mathbf{\mu})\Psi_{\mathsf{NN}}^{\theta^{i}}(\mathbf{x},0)-u_{0}(\mathbf{x})\right\|_{2}^{2}.\end{split} \tag{6}\]
The online collocation sets \(\mathcal{C}_{o}^{r}\subset\Omega\times(0,T)\), \(\mathcal{C}_{\partial}^{r}\subset\partial\Omega\times[0,T]\) and \(\mathcal{C}_{i}^{r}\subset\Omega\) are used, as in eq. (5), to generate an approximation of the true loss. They are taken to be the same as their full-PINN counterparts \(\mathcal{C}_{o},\mathcal{C}_{\partial},\mathcal{C}_{i}\) in this paper, but we note that they can be fully independent. The training of \(\mathsf{NN}^{r}(2,n,1)\) is then simply
\[\mathbf{c}\leftarrow\mathbf{c}-\delta_{r}\nabla_{\mathbf{c}}\mathcal{L}_{ \mathrm{PINN}}^{\mathrm{GPT}}(\mathbf{c}) \tag{7}\]
Here \(\mathbf{c}=(c_{1}(\mathbf{\mu}),\cdots,c_{n}(\mathbf{\mu}))^{T}\) and \(\delta_{r}\) is the online learning rate. The detailed calculations of eq. (6) and eq. (7) are given in Appendix A for the first numerical example. Those for the other examples are very similar and thus omitted. We conclude the online solver with the following three remarks.
**1. Precomputation for fast training of \(\mathsf{NN}^{r}(2,n,1)\):** Due to the linearity of the derivative operations and the collocation nature of the loss function, a significant amount of the calculations in eq. (6) can be precomputed and stored. These include the function values and all (spatial and time) derivatives involved in the operators \(\mathcal{F}\) and \(\mathcal{G}\) of the PDE eq. (4):
\[\Psi_{\mathsf{NN}}^{\theta^{i}}(\mathcal{C}),\,\frac{\partial^{k}}{\partial t ^{k}}\left(\Psi_{\mathsf{NN}}^{\theta^{i}}\right)(\mathcal{C})\left(k=1\text{ or }2\right),\,\nabla_{\mathbf{x}}^{\ell}\Psi_{\mathsf{NN}}^{\theta^{i}}( \mathcal{C})\left(\ell=1,2,\cdots\right)\text{ for }\mathcal{C}= \mathcal{C}_{o}^{r},\mathcal{C}_{\partial}^{r},\mathcal{C}_{i}^{r}. \tag{8}\]
Once these are precomputed, updating \(\mathbf{c}\) according to eq. (7) is very efficient; it can even be made independent of \(|\mathcal{C}|\). A concrete sketch is given after Remark 3 below.
**2. Non-intrusiveness of GPT-PINN:** It is clear that, once the quantities of eq. (8) are extracted from the full PINN, the online training of \(\mathsf{NN}^{r}(2,n,1)\) is independent of the full PINN. GPT-PINN is therefore non-intrusive with respect to the Full Order Model. One manifestation of this property is that, as shown in our third numerical example, the full PINN can be adaptive while the reduced PINN need not be.
**3. The error indication of \(\mathsf{NN}^{r}(2,n,1)\).** One prominent feature of RBM is its _a posteriori_ error estimators/indicators which guide the generation of the reduced solution space and certify the accuracy of the surrogate solution. Inspired by this classical design, we introduce the following quantity that measures how accurately \(\mathsf{NN}^{r}(2,n,1)\) generates a surrogate network at a new parameter \(\mathbf{\mu}\).
\[\Delta_{\mathsf{NN}}^{r}(\mathbf{c}(\mathbf{\mu}))\triangleq\mathcal{L}_{\mathrm{ PINN}}^{\mathrm{GPT}}(\mathbf{c}(\mathbf{\mu})). \tag{9}\]
We remark that this quantity is essentially free since it is readily available when we train \(\mathsf{NN}^{r}(2,n,1)\) according to eq. (7). The adoption of the training loss of the meta-network as an error indicator, inspired by the residual-based error estimation for traditional numerical solvers such as FEM, represents the second main novelty of this paper.
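To make Remarks 1 and 3 concrete, the following minimal PyTorch sketch trains the coefficients \(\mathbf{c}\) by the plain gradient descent of eq. (7) using only precomputed tensors as in eq. (8), for a model problem with a linear residual; boundary and initial terms and any nonlinearity are omitted for brevity, and all names and shapes are our own illustrative assumptions. The terminal loss doubles as the error indicator of eq. (9).

```
# Online GPT-PINN training: gradient descent on the n coefficients only.
# P_tt, P_xx: (n, m) tensors of stored derivatives of the n full PINNs at the
# m interior collocation points (eq. (8)); source: (m,) forcing at those points.
import torch

def train_reduced(P_tt, P_xx, alpha, source, lr=0.025, epochs=2000):
    n = P_tt.shape[0]
    c = torch.zeros(n, requires_grad=True)
    loss = torch.zeros(())
    for _ in range(epochs):
        residual = c @ P_tt + alpha * (c @ P_xx) + source   # linear terms only
        loss = (residual ** 2).mean()
        g, = torch.autograd.grad(loss, c)
        with torch.no_grad():
            c -= lr * g                  # the gradient step of eq. (7)
    return c.detach(), loss.item()       # loss.item() is the indicator eq. (9)
```

Note that no backpropagation through any full PINN occurs here; the full networks enter only through the stored tensors.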
### Training the reduced network GPT-PINN: the greedy algorithm
With the online solver described in Section 3.1, we are ready to present our greedy algorithm. Its main steps are outlined in Algorithm 2. The meta-network adaptively "learns" the parametric
dependence of the system and "grows" its sole hidden layer one neuron/network at a time in the following fashion. We first randomly select, in the discretized parameter domain, one parameter value \(\mathbf{\mu}^{1}\) and train the associated (highly accurate) PINN \(\Psi_{\mathsf{NN}}^{\theta^{1}}\). The algorithm then decides how to "grow" its meta-network by scanning the entire discrete parameter space and, for each parameter value, training this reduced network (of 1 hidden layer with the single neuron \(\Psi_{\mathsf{NN}}^{\theta^{1}}\)). As it scans, it records the error indicator \(\Delta_{\mathsf{NN}}^{r}(\mathbf{c}(\mathbf{\mu}))\). The next parameter value \(\mathbf{\mu}^{2}\) is the one generating the largest error indicator. The algorithm then proceeds by training a full PINN at \(\mathbf{\mu}^{2}\) and thereby grows its hidden layer to two neurons with customized (but pre-trained) activation functions \(\Psi_{\mathsf{NN}}^{\theta^{1}}\) and \(\Psi_{\mathsf{NN}}^{\theta^{2}}\). This process is repeated until the stopping criterion is met, which can be either that the error indicator is sufficiently small or that a pre-selected size of the reduced network is reached. At every step, we select the parameter value that is worst approximated by the current meta-network.
We end by presenting how we initialize the weights \(\mathbf{c}(\mathbf{\mu})\) when we train \(\mathsf{NN}^{r}(2,n-1,1)\) on Line 3 of Algorithm 2. They are initialized by a linear interpolation of up to \(2^{d_{s}}\) closest neighbors of \(\mathbf{\mu}\) within the chosen parameter values \(\{\mathbf{\mu}^{1},\dots,\mathbf{\mu}^{N}\}\). Recall that \(d_{s}\) is the dimension of the parameter domain.
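A condensed Python sketch of the outer greedy loop just described (Algorithm 2) follows. Here `train_full_pinn` and `train_reduced` are placeholders for the full PINN training and the online solver of Section 3.1 (the latter assumed to return the coefficients together with the terminal loss, i.e. the indicator of eq. (9)); the sketch is illustrative, not the authors' implementation.

```
# Greedy generation of the GPT-PINN hidden layer, one pre-trained PINN at a time.
def greedy_gpt_pinn(param_set, N, tol, train_full_pinn, train_reduced):
    neurons = [train_full_pinn(param_set[0])]        # initial (e.g. random) sample
    while len(neurons) < N:
        # scan the discrete parameter set with the current reduced network
        losses = {mu: train_reduced(neurons, mu)[1] for mu in param_set}
        mu_star = max(losses, key=losses.get)        # worst-approximated parameter
        if losses[mu_star] < tol:
            break
        neurons.append(train_full_pinn(mu_star))     # grow the hidden layer
    return neurons
```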
## 4 Numerical results
In this section, we present numerical results of the GPT-PINN applied to three families of equations: the Klein-Gordon equation, Burgers' equation, and the Allen-Cahn equation. All simulations are run on a desktop with an AMD Ryzen 7 2700X CPU clocked at 4.0GHz, an NVIDIA GeForce RTX 2060 SUPER GPU, and 32 GB of memory. Python version 3.9.12 was used along with common numerical packages and machine learning frameworks such as NumPy (v1.23.4), PyTorch (v1.11.0), and TensorFlow (v2.10.0); for GPU support, CUDA v11.6 was installed. Previous literature [44, 21, 22, 45] has shown common difficulties in the use of baseline (non-adaptive) PINNs for approximating the Allen-Cahn equation. We have therefore adopted the Self-Adaptive PINNs (SA-PINNs) formulated by [22] in Section 4.3 to acquire accurate approximations by the full PINN, later used by the GPT-PINN. Throughout the experiments of Sections 4.1 to 4.3, we calculate and report the point-wise absolute errors and the relative L2 errors, defined as follows.
\[\left|\mathsf{NN}^{r}(2,N,1)(x,t)-\Psi_{\mathsf{NN}}^{\theta^{i}}(x,t)\right|,\qquad\frac{\left\|\mathsf{NN}^{r}(2,n,1)(x,t)-\Psi_{\mathsf{NN}}^{\theta^{i }}(x,t)\right\|_{2}}{\left\|\Psi_{\mathsf{NN}}^{\theta^{i}}(x,t)\right\|_{2}}.\]
The code for all these examples is published on GitHub at [https://github.com/skoohy/GPT-PINN](https://github.com/skoohy/GPT-PINN).
### The parametric Klein-Gordon Equation
We first test the Klein-Gordon equation parameterized by \((\alpha,\beta,\gamma)\in[-2,-1]\times[0,1]\times[0,1]\),
\[u_{tt}+\alpha u_{xx}+\beta u+\gamma u^{2}+x\cos{(t)}-x^{2}\cos^{2 }{(t)} =0,\quad(x,t)\in[-1,1]\times[0,5], \tag{10}\] \[u(-1,t)=-\cos{(t)}, \quad u(1,t)=\cos{(t)},\] \[u(x,0) =x,\] \[u_{t}(x,0) =0.\]
The full PINN is a \([2,40,40,1]\)-fully connected network with activation function \(\cos{(z)}\) that is trained using uniformly distributed data with \(|\mathcal{C}_{o}|=10,000\), \(|\mathcal{C}_{\partial}|=512\), \(|\mathcal{C}_{i}|=512\). A learning rate of \(0.0005\) is used with the ADAM optimizer, with a maximum of \(75,000\) epochs. The parameter training set is a tensorial grid of size \(10\times 10\times 10\), for a total of \(1000\) parameter values. Up to \(15\) neurons are generated by the greedy algorithm, producing GPT-PINNs of sizes \([2,1,1]\) to \([2,15,1]\). The GPT-PINNs are trained at the same set of training points as the full PINN but with a learning rate of \(0.025\) and (much fewer) \(2000\) epochs.
Figure 4: Klein-Gordon Equation: First three full PINN solutions found by the GPT-PINN that are used as the activation functions.
Figure 3: Klein-Gordon Equation training: The adaptively chosen parameter values (Left), worst-case GPT-PINN training losses (Middle), and the Box and Whisker plot of all adaptive GPT-PINN training losses (Right) during the outer-layer greedy training.
The GPT-PINN generates 15 neurons, i.e. full PINNs at \(\{(\alpha_{i},\beta_{i},\gamma_{i})\}_{i=1}^{15}\). These parameter values and the worst-case offline training loss \(\mathcal{L}_{\text{PINN}}^{\text{GPT}}(\mathbf{c}(\boldsymbol{\mu}))\) after 2000 epochs, as we increase the number of neurons (i.e. the size of \(\mathbf{c}(\boldsymbol{\mu})\)) in the hidden layer of GPT-PINN, are shown in Figure 3. Figure 4 shows the first three PINN solutions adaptively selected by GPT-PINN. It is clear that the sampled parameter values lie toward the boundaries of the domain and that the decrease in training loss is exponential. Both features are consistent with typical RBM results. Moreover, we emphasize that to achieve 3 digits of accuracy across the parameter domain, we only need to train the full PINN 15 times. This contrast with purely data-driven approaches mirrors that of RBM with POD approaches, in that RBM requires far fewer full-order solves. When we instead sample the parameter domain uniformly (i.e. without the greedy approach of GPT-PINN), it is clear from Figure 3 (Middle) that the adaptive "learned neurons" perform 2 to 3 times better than the non-adaptive "uniform neurons". The fact that the latter performs reasonably well underscores the power of our novel idea of using pre-trained PINNs as activation functions.
Next, we test the GPT-PINN on 200 parameter values distinct from the adaptively chosen "learned neurons" and "uniform neurons". Figure 5 displays the largest error for each size of the GPT-PINN. The trend is again exponential. Finally, to show the efficiency of the method, we plot in Figure 5 (Right) the cumulative run-time when both the full PINN and the (reduced) GPT-PINN are repeatedly called. The starting point of the GPT-PINN line reflects all offline preparation time. It is clear that the GPT-PINN line increases very slowly, reflecting that its marginal cost is tiny; in fact, it is about 0.0022 of that of the full PINN. The intersection points reflect after how many simulations it becomes worthwhile to invest in GPT-PINN. We remark that future work includes driving the intersection point down to essentially the number of neurons in GPT-PINN, which is the absolute minimum it could be.
Last but not least, we show the training losses as functions of epochs in Figure 6 for both the full PINN and GPT-PINN. We note the interesting phenomenon that the GPT-PINN loss decreases more smoothly than the full PINN. To give a sense of the error distribution, we also plot the point-wise error of the GPT-PINN solution.
Figure 5: Klein-Gordon Equation testing: Worst-case test error of the GPT-PINN of various sizes (Left), Box and Whisker plot of all adaptive GPT-PINN testing errors (Middle), and cumulative run time of the full PINN versus the GPT-PINN (Right).
### The parametric viscous Burgers' Equation
Next, we test GPT-PINN on the Burgers' equation with one parameter, the viscosity \(\nu\in[0.005,1]\).
\[u_{t}+uu_{x}-\nu u_{xx} =0,\quad(x,t)\in[-1,1]\times[0,1],\] \[u(-1,t)=u(1,t) =0, \tag{11}\] \[u(x,0) =-\sin{(\pi x)}.\]
The full PINN is a \([2,20,20,20,20,1]\)-fully connected network with activation function \(\tanh(z)\) that is trained using uniformly distributed data with \(|\mathcal{C}_{o}|=10,000\), \(|\mathcal{C}_{\partial}|=100\), \(|\mathcal{C}_{i}|=100\). A learning rate of \(0.005\) is used with the ADAM optimizer. A maximum of \(60,000\) epochs is run with a stopping criterion of \(2\times 10^{-5}\) on the loss values. The parameter training set is a uniform grid of size \(129\) in the \(\nu\)-domain. Up to \(9\) neurons are generated by the greedy algorithm, producing reduced GPT-PINNs of sizes \([2,1,1]\) to \([2,9,1]\). The GPT-PINNs are trained at the same set of training points as the full PINN but with a learning rate of \(0.02\) and \(2000\) epochs. The solutions of eq. (11) develop near-discontinuities as time evolves when \(\nu\) is small. In this scenario, \(\left(\Psi_{\textsf{NN}}^{\theta^{i}}\right)_{xx}\) is of little value in the training of GPT-PINN when \(x\) is close to these large gradients. We therefore exclude the collocation points where \(\left|\left(\Psi_{\textsf{NN}}^{\theta^{i}}\right)_{xx}\right|\) is within the top \(20\%\) of all such values. That is
\[\mathcal{C}_{pos}^{r}=\mathcal{C}_{pos}\backslash\left\{x:\left|\left(\Psi_{ \textsf{NN}}^{\theta^{i}}\right)_{xx}(x)\right|>0.8\max_{x}\left|\left(\Psi_{ \textsf{NN}}^{\theta^{i}}\right)_{xx}(x)\right|\right\},\quad pos\in\{o, \partial,i\}.\]
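As an illustration, this exclusion rule amounts to the following small filter; `Psi_xx` is assumed to hold the stored second derivatives \(\left(\Psi_{\textsf{NN}}^{\theta^{i}}\right)_{xx}\) at the candidate collocation points, and the names are ours.

```
# Keep only collocation points where |Psi_xx| is outside the top 20% of values.
import torch

def filter_collocation(points, Psi_xx, frac=0.8):
    keep = Psi_xx.abs() <= frac * Psi_xx.abs().max()
    return points[keep]
```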
The GPT-PINN generates \(9\) neurons, i.e. full PINNs at \(\{\nu_{i}\}_{i=1}^{9}\). These parameter values and the worst-case offline training loss \(\mathcal{L}_{\text{PINN}}^{\text{GPT}}(\mathbf{c}(\boldsymbol{\mu}))\) after \(2000\) epochs, as we increase the number of neurons (i.e. the size of \(\mathbf{c}(\boldsymbol{\mu})\)) in the hidden layer of GPT-PINN, are shown in Figure 7. Figure 8
Figure 6: Klein-Gordon Equation: Full PINN training loss (Left) and GPT-PINN training loss (Right) as functions of epochs for various parameters. Plotted in the middle are the point-wise errors of the corresponding GPT-PINN solution.
Figure 8: Burgers’ Equation: First three full PINN solutions found by the GPT-PINN that are used as the activation functions.
Figure 7: Burgers’ Equation training: The adaptively chosen parameter values (Top), worst-case GPT-PINN training losses (Bottom Left), and the Box and Whisker plot of all GPT-PINN training losses (Bottom Right) during the outer-layer greedy training.
shows the first three PINN solutions adaptively selected by GPT-PINN. We observe behavior that is similar to the Klein-Gordon case and consistent with typical RBM results. The adaptive "learned neurons" again perform 3 to 4 times better than the non-adaptive "uniform neurons" which already perform reasonably well, underscoring the power of our novel idea of using pre-trained PINNs as activation functions.
Next, we test the GPT-PINN on 25 parameter values. Figure 9 displays the largest error for each size of the GPT-PINN. The trend is again exponential. Finally, to show the efficiency of the method, we plot in Figure 9 Right the cumulative run-time when both the full PINN and the (reduced) GPT-PINN are repeatedly called. It is clear that the GPT-PINN line increases very slowly (a relative speed of 0.009 in comparison to the full PINN) and that it is worthwhile to invest in GPT-PINN for a very modest number (14) of queries. We again show the training losses as
Figure 10: Burgers’ Equation: Full PINN training loss (Left) and GPT-PINN training loss (Right) as functions of epochs for various parameters. Plotted in the middle are the point-wise errors of the corresponding GPT-PINN solution.
Figure 9: Burgers’ Equation testing: Worst-case test error of the GPT-PINN of various sizes (Left), Box and Whisker plot of all adaptive GPT-PINN testing errors (Middle), and cumulative run time of the full PINN versus the GPT-PINN (Right).
functions of epochs in Figure 10 for both the full PINN and GPT-PINN. We note again that the GPT-PINN loss decreases more smoothly than the full PINN. This result also verifies the efficacy of our initialization strategy since the starting loss of the GPT-PINN is already very low.
### The parametric Allen-Cahn Equation
Finally, we test the Allen-Cahn equation parameterized by \((\lambda,\epsilon)\in[0.0001,0.001]\times[1,5]\):
\[\begin{split} u_{t}-\lambda u_{xx}+\epsilon(u^{3}-u)& =0,\quad(x,t)\in[-1,1]\times[0,1]\\ u(-1,t)=u(1,t)&=-1\\ u(x,0)&=x^{2}\cos{(\pi x)}.\end{split} \tag{12}\]
The SA-PINN [22] is a \([2,128,128,128,128,1]\)-fully connected network with activation function \(\tanh(z)\) that is trained on data distributed by Latin hypercube sampling with \(|\mathcal{C}_{o}|=20,000\), \(|\mathcal{C}_{\partial}|=100\), \(|\mathcal{C}_{i}|=512\). A learning rate of \(0.005\) with \(10,000\) epochs of ADAM optimization, followed by \(10,000\) epochs of L-BFGS optimization with a learning rate of \(0.8\), is used. The parameter training set is a uniform grid of \(121\) parameter values. Up to \(9\) neurons are generated by the greedy algorithm, producing reduced GPT-PINNs of sizes \([2,1,1]\) to \([2,9,1]\). The GPT-PINNs are trained at the same set of training points as the SA-PINN but with a learning rate of \(0.0025\) and \(2000\) epochs.
Figure 11: Allen-Cahn Equation training: The chosen parameter values (Left), worst-case GPT-PINN training losses (Middle), and the Box and Whisker plot of all GPT-PINN training losses (Right) during the outer-layer greedy training
Figure 12: Allen-Cahn Equation: First three SA-PINN solutions found by the GPT-PINN that are used as the activation functions.
The GPT-PINN generates 9 neurons, i.e. SA-PINNs at \(\{(\epsilon_{i},\lambda_{i})\}_{i=1}^{9}\). These parameter values and the worst-case offline training loss \(\mathcal{L}_{\text{PINN}}^{\text{GPT}}(\mathbf{c}(\boldsymbol{\mu}))\) after 2000 epochs, as we increase the number of neurons (i.e. the size of \(\mathbf{c}(\boldsymbol{\mu})\)) in the hidden layer of GPT-PINN, are shown in Figure 11. Figure 12 shows the first three PINN solutions adaptively selected by GPT-PINN.
Next, we test the GPT-PINN on 25 parameter values. Figure 13 displays the largest error for each size of the GPT-PINN and the cumulative run-time when both the SA-PINN and the GPT-PINN are repeatedly called. It is clear that the GPT-PINN line increases very slowly (at a relative speed of 0.0006) and that it is again worthwhile to invest in GPT-PINN for a very modest number (11-12) of queries. We show the training losses as functions of epochs in Figure 14 for both the SA-PINN and GPT-PINN, with the latter again decaying more smoothly.
Figure 14: Allen-Cahn Equation: SA-PINN training loss (Left) and GPT-PINN training loss (Right) as functions of epochs for various parameters. Plotted in the middle are the point-wise errors of the corresponding GPT-PINN solution.
Figure 13: Allen-Cahn Equation testing: Worst-case test error of the GPT-PINN of various sizes (Left), Box and Whisker plot of all adaptive GPT-PINN testing errors (Middle), and cumulative run time of the full PINN versus the GPT-PINN (Right).
## 5 Conclusion
The proposed Generative Pre-Trained PINN (GPT-PINN) is shown to mitigate two challenges faced by PINNs in the setting of parametric PDEs, namely the cost of training and over-parameterization. Being a hyper-reduced network with pre-trained full PINNs as activation functions, GPT-PINN represents a brand-new meta-learning paradigm for parametric systems. With two main novelties, namely the design of the network architecture including its special activation functions, and the adoption of the training loss of the meta-network as an error indicator, and via tests on three different families of parametric equations, we have shown that a very small number of well-chosen networks can generate surrogate PINNs across the entire parameter domain accurately and efficiently.
## Appendix A Detailed gradient of loss function for the Klein-Gordon case GPT-PINN
With the GPT-PINN formulation and considering the types of boundary and initial conditions for the equation given by eq. (10), the loss function eq. (6) becomes
\[\begin{split}\mathcal{L}_{\text{PINN}}^{\text{GPT}}(\mathbf{c}(\boldsymbol{\mu}))=&\frac{1}{|\mathcal{C}_{o}^{r}|}\sum_{(\mathbf{x},t)\in\mathcal{C}_{o}^{r}}\left\|\frac{\partial^{2}}{\partial t^{2}}\left(\sum_{i=1}^{n}c_{i}(\boldsymbol{\mu})\Psi_{\text{NN}}^{\theta^{i}}\right)(\mathbf{x},t)+\alpha\frac{\partial^{2}}{\partial x^{2}}\left(\sum_{i=1}^{n}c_{i}(\boldsymbol{\mu})\Psi_{\text{NN}}^{\theta^{i}}\right)(\mathbf{x},t)\right.\\ &\left.+\beta\left(\sum_{i=1}^{n}c_{i}(\boldsymbol{\mu})\Psi_{\text{NN}}^{\theta^{i}}\right)(\mathbf{x},t)+\gamma\left(\sum_{i=1}^{n}c_{i}(\boldsymbol{\mu})\Psi_{\text{NN}}^{\theta^{i}}\right)^{2}(\mathbf{x},t)+\mathbf{x}\cos{(t)}-\mathbf{x}^{2}\cos^{2}{(t)}\right\|_{2}^{2}\\ &+\frac{1}{|\mathcal{C}_{\partial}^{r}|}\sum_{(\mathbf{x},t)\in\mathcal{C}_{\partial}^{r}}\left\|\sum_{i=1}^{n}c_{i}(\boldsymbol{\mu})\Psi_{\text{NN}}^{\theta^{i}}(\mathbf{x},t)-u(\mathbf{x},t)\right\|_{2}^{2}\\ &+\frac{1}{|\mathcal{C}_{i}^{r}|}\sum_{\mathbf{x}\in\mathcal{C}_{i}^{r}}\left\|\sum_{i=1}^{n}c_{i}(\boldsymbol{\mu})\Psi_{\text{NN}}^{\theta^{i}}(\mathbf{x},0)-u(\mathbf{x},0)\right\|_{2}^{2}+\frac{1}{|\mathcal{C}_{i}^{r}|}\sum_{\mathbf{x}\in\mathcal{C}_{i}^{r}}\left\|\frac{\partial}{\partial t}\left(\sum_{i=1}^{n}c_{i}(\boldsymbol{\mu})\Psi_{\text{NN}}^{\theta^{i}}\right)(\mathbf{x},0)-u_{t}(\mathbf{x},0)\right\|_{2}^{2}\end{split}\]
with given \(u(\mathbf{x},t)\) for \((\mathbf{x},t)\in\mathcal{C}_{\partial}^{r}\), and \(u(\mathbf{x},0)\) and \(u_{t}(\mathbf{x},0)\) for \(\mathbf{x}\in\mathcal{C}_{i}^{r}\). The \(m^{\text{th}}\) component of \(\nabla_{\mathbf{c}}\mathcal{L}_{\text{PINN}}^{\text{GPT}}(\mathbf{c})\) needed for the GPT-PINN training step eq. (7) then reads:
\[\begin{split}\frac{\partial\mathcal{L}_{\text{PINN}}^{\text{GPT}}(\mathbf{c})}{\partial c_{m}}=&\frac{2}{|\mathcal{C}_{o}^{r}|}\sum_{(\mathbf{x},t)\in\mathcal{C}_{o}^{r}}\Bigg(\bigg(\sum_{i=1}^{n}\big(c_{i}P_{tt}^{i}+\alpha c_{i}P_{xx}^{i}+\beta c_{i}P^{i}\big)+\gamma\Big(\sum_{i=1}^{n}c_{i}P^{i}\Big)^{2}+\mathbf{x}\cos{(t)}-\mathbf{x}^{2}\cos^{2}{(t)}\bigg)\\ &\cdot\bigg(P_{tt}^{m}+\alpha P_{xx}^{m}+\beta P^{m}+2\gamma\Big(\sum_{i=1}^{n}c_{i}P^{i}\Big)P^{m}\bigg)\Bigg)+\frac{2}{|\mathcal{C}_{\partial}^{r}|}\sum_{(\mathbf{x},t)\in\mathcal{C}_{\partial}^{r}}\bigg(\Big(\sum_{i=1}^{n}c_{i}P^{i}-u(\mathbf{x},t)\Big)P^{m}\bigg)\\ &+\frac{2}{|\mathcal{C}_{i}^{r}|}\sum_{\mathbf{x}\in\mathcal{C}_{i}^{r}}\bigg(\Big(\sum_{i=1}^{n}c_{i}P^{i}-u(\mathbf{x},0)\Big)P^{m}\bigg)+\frac{2}{|\mathcal{C}_{i}^{r}|}\sum_{\mathbf{x}\in\mathcal{C}_{i}^{r}}\bigg(\Big(\sum_{i=1}^{n}c_{i}P_{t}^{i}-u_{t}(\mathbf{x},0)\Big)P_{t}^{m}\bigg)\end{split}\]
for \(m=1,\ldots,n\). Here, for brevity of notation, we denote \(\Psi_{\text{NN}}^{\theta^{i}}(\mathbf{x},t)\) by \(P^{i}(\mathbf{x},t)\) and omit \((\mathbf{x},t)\). For every full PINN \(P^{i}\) identified by GPT-PINN, we would then just need to store the values of
\[P^{i}(\mathcal{C}_{o}^{r}\cup\mathcal{C}_{\partial}^{r}\cup(\mathcal{C}_{i}^{r}\times\{0\})),\quad P_{xx}^{i}(\mathcal{C}_{o}^{r}),\quad P_{tt}^{i}(\mathcal{C}_{o}^{r}),\quad P_{t}^{i}(\mathcal{C}_{i}^{r}\times\{0\})\]
for the efficient online GPT-PINN training step of eq. (7).
2303.11733 | DIPPM: a Deep Learning Inference Performance Predictive Model using
Graph Neural Networks | Deep Learning (DL) has developed to become a corner-stone in many everyday
applications that we are now relying on. However, making sure that the DL model
uses the underlying hardware efficiently takes a lot of effort. Knowledge about
inference characteristics can help to find the right match so that enough
resources are given to the model, but not too much. We have developed a DL
Inference Performance Predictive Model (DIPPM) that predicts the inference
latency, energy, and memory usage of a given input DL model on the NVIDIA A100
GPU. We also devised an algorithm to suggest the appropriate A100
Multi-Instance GPU profile from the output of DIPPM. We developed a methodology
to convert DL models expressed in multiple frameworks to a generalized graph
structure that is used in DIPPM. It means DIPPM can parse input DL models from
various frameworks. Our DIPPM not only helps to find suitable
hardware configurations but also helps to perform rapid design-space
exploration for the inference performance of a model. We constructed a graph
multi-regression dataset consisting of 10,508 different DL models to train and
evaluate the performance of DIPPM, and reached a resulting Mean Absolute
Percentage Error (MAPE) as low as 1.9%. | Karthick Panner Selvam, Mats Brorsson | 2023-03-21T10:43:41Z | http://arxiv.org/abs/2303.11733v1 | # DIPPM: a Deep Learning Inference Performance Predictive Model using Graph Neural Networks
###### Abstract
Deep Learning (DL) has developed to become a corner-stone in many everyday applications that we are now relying on. However, making sure that the DL model uses the underlying hardware efficiently takes a lot of effort. Knowledge about inference characteristics can help to find the right match so that enough resources are given to the model, but not too much. We have developed a DL Inference Performance Predictive Model (DIPPM) that predicts the inference _latency_, _energy_, and _memory usage_ of a given input DL model on the NVIDIA A100 GPU. We also devised an algorithm to suggest the appropriate A100 Multi-Instance GPU profile from the output of DIPPM. We developed a methodology to convert DL models expressed in multiple frameworks to a generalized graph structure that is used in DIPPM. It means DIPPM can parse input DL models from various frameworks. Our DIPPM not only helps to find suitable hardware configurations but also helps to perform rapid design-space exploration for the inference performance of a model. We constructed a graph multi-regression dataset consisting of 10,508 different DL models to train and evaluate the performance of DIPPM, and reached a resulting Mean Absolute Percentage Error (MAPE) as low as 1.9%.
Keywords:Performance Prediction Multi Instance GPU Deep Learning Inference
## 1 Introduction
Many important tasks now rely on deep learning models, for instance in the computer vision and natural language processing domains [3, 13]. In recent years, researchers have focused on improving the efficiency of deep learning models to reduce their computational cost and energy consumption and to increase their throughput without losing accuracy. At the same time, hardware manufacturers like NVIDIA keep increasing their computing power. For example, the NVIDIA A100\({}^{1}\) GPU half-precision Tensor Core can perform matrix operations at 312 TFLOPS. However, not every deep learning model will fully utilize the GPU, because the workload and the number of matrix operations vary with the problem domain.
For this reason, NVIDIA created the Multi-Instance GPU (MIG\({}^{2}\)) technology starting from the Ampere architecture: the single physical GPU is split into multiple isolated GPU instances, so that multiple applications can run simultaneously on different partitions of the same GPU, which can then be used more efficiently.
However, determining a DL model's efficiency on a GPU is not straightforward. If we could predict parameters such as inference latency, energy consumption, and memory usage, we would not need to measure them on deployed models, which is a tedious and costly process. The predicted parameters could then also support efficient Neural Architecture Search (NAS) [5] and efficient DL model design during development, and help avoid job-scheduling failures in data centers. According to Gao et al. [6], most failed deep learning jobs in data centers are due to out-of-memory errors.
In order to meet this need, we have developed a novel _Deep Learning Inference Performance Predictive Model_ (DIPPM) to support DL model developers in matching their models to the underlying hardware for inference. As shown in figure 1, DIPPM takes a deep learning model expressed in any of the frameworks PyTorch, PaddlePaddle, TensorFlow, or ONNX, and predicts the latency (ms), energy (J), memory requirement (MB), and MIG profile for inference on an NVIDIA A100 GPU without running on it. At the moment, the model is restricted to inference and the NVIDIA A100 architecture, but we aim to relax these restrictions in future work. As far as we are aware, this is the first predictive model that can take input from any of the mentioned frameworks and predict all of the metrics above.
Our contributions include the following:
* We have developed, trained and evaluated a performance predictive model which predicts inference latency, energy, memory, and MIG profile for A100 GPU with high accuracy.
* We have developed a methodology to convert deep learning models from various deep learning frameworks into generalized graph structures for graph learning tasks in our performance predictive model.
Figure 1: DIPPM can predict the Latency, Energy, Memory requirement, and MIG Profile for inference on an NVIDIA A100 GPU without actually running on it.
* We have devised an algorithm to suggest the MIG profile from the predicted memory for a given input DL model.
* We have created an open-source performance-prediction dataset containing 10,508 graphs for graph-level multi-regression problems.
Next, we discuss our work in relation to previous work in this area before presenting our methodology, experiments, and results.
## 2 Related Work
Performance prediction of deep learning models on modern architectures is a rather new research field that has received attention only in the past few years. Bouhali et al. [2] and Lu et al. [14] have carried out similar studies where a classical Multi-Layer Perceptron (MLP) is used to predict the inference latency of a given input DL model. Their approach was to collect high-level DL model features such as the batch size, the number of layers, and the total number of floating point operations (FLOPs) needed. They then fed these features into an MLP regressor to predict the latency of the given model. Bai et al. [1] used the same MLP method but predicted both latency and memory. However, the classical MLP approach did not work very well due to its inability to capture a detailed view of the given input DL model.
To solve the above problems, some researchers proposed a kernel-additive method: they predict each kernel operation, such as convolution, dense, and LSTM, individually and sum all kernel values to predict the overall performance of the DL model [8, 15, 18, 20, 22, 24]. Yu et al. [23] used the wave-scaling technique to predict the inference latency of the DL model on a GPU, but this technique requires access to a GPU in order to make the prediction.
Kaufman et al. and Dudziak et al. [9, 4] used graph learning instead of an MLP to predict each kernel value, but still used the kernel-additive method for inference latency prediction. This kernel-additive method does not capture the overall network topology of the model, which affects the accuracy of the prediction. To solve this problem, Liu et al. [12] used a graph-level task to generalize the entire DL model into node embeddings and predicted the inference latency of the given DL model. However, they did not predict other parameters, such as memory usage and energy consumption.
Li et al. [11] tried to predict the MIG profiles on the A100 GPU for DL models. However, their methodology is not straightforward: they used CUDA Multi-Process Service (MPS) values to predict the MIG profile, so the model must run at least once on the target hardware.
Most previous research concentrated on parsing the input DL model from only one of the following frameworks (PyTorch, TensorFlow, PaddlePaddle, ONNX). As far as we are aware, none of the previous performance-prediction models predict memory usage, latency, energy, and MIG profile simultaneously.
Our novel Deep Learning Inference Performance Predictive Model (DIPPM) fills a gap in previous work; a detailed comparison is shown in Table 1. DIPPM
takes a deep learning model as input from various deep learning frameworks such as PyTorch, PaddlePaddle, TensorFlow, or ONNX and converts it to generalize graph with node features. We used a graph neural network and MIG predictor to predict the inference latency (ms), energy (J), memory (MB), and MIG profile for A100 GPU without actually running on it.
## 3 Methodology
The architecture of DIPPM consists of five main components: Relay Parser, Node Feature Generator, Static Feature Generator, Performance Model Graph Network Structure (PMGNS), and MIG Predictor, as shown in Fig. 2. We will explain each component individually in this section.
### Relay Parser
The Relay Parser takes as input a DL model expressed in one of several supported DL frameworks, converts it to an Intermediate Representation (IR), and passes this IR into the Node Feature Generator and the Static Feature Generator components.
Most of the previously proposed performance models are able to parse the given input DL model from a single DL framework only, as we discussed in Section 2. To enable the use of multiple frameworks, we used Relay, a high-level IR for DL models [16]. It has been used to
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|c|} \hline
**Related Works** & **A100** & **MIG** & **GNN\({}^{a}\)** & **Multi-SF\({}^{b}\)** & **Latency** & **Power** & **Memory** \\ \hline Ours (**DIPPM**) & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline Bai et al. [1] & - & - & - & - & ✓ & - & ✓ \\ \hline Bouhali et al. [2] & - & - & - & - & ✓ & - & - \\ \hline Dudziak et al. [4] & - & - & ✓ & - & ✓ & - & - \\ \hline Justus et al. [8] & - & - & - & - & ✓ & - & - \\ \hline Kaufman et al. [9] & - & - & ✓ & - & ✓ & - & - \\ \hline Li et al. [11] & ✓ & ✓ & - & - & - & - & - \\ \hline Liu et al. [12] & - & - & ✓ & - & ✓ & - & - \\ \hline Lu et al. [14] & - & - & - & - & ✓ & ✓ & ✓ \\ \hline Qi et al. [15] & - & - & - & - & ✓ & - & - \\ \hline Sponner et al. [18] & ✓ & - & - & - & ✓ & ✓ & ✓ \\ \hline Wang et al. [20] & - & - & - & - & ✓ & - & - \\ \hline Yang et al. [22] & - & - & - & - & ✓ & - & - \\ \hline Yu et. al. [23] & ✓ & - & - & - & ✓ & - & - \\ \hline Zhang et al. [24] & - & - & - & - & ✓ & - & - \\ \hline \end{tabular} \({}^{a}\) Using Graph Neural Network for performance prediction
\({}^{b}\) Able to parse DL model expressed in Multiple DL Software Framework
\end{table}
Table 1: Related Work comparison
compile DL models for inference in the TVM3 framework. We are inspired by the way they convert the DL model from various DL frameworks into a high-level IR format and therefore used their technique in our DIPPM architecture. It allows parsing given input DL models from various frameworks, but we have chosen to limit ourselves to PyTorch, TensorFlow, ONNX, and PaddlePaddle. We pass this DL IR to the subsequent components in our DIPPM architecture.
Footnote 3: [https://tvm.apache.org/](https://tvm.apache.org/)
### Node Feature Generator
The Node Feature Generator (NFG) converts the DL IR into an Adjacency Matrix (\(\mathcal{A}\)) and a Node feature matrix (\(\mathcal{X}\)) and passes this data to the PMGNS component.
The NFG takes the IR from the relay parser component. The IR is itself a computational data flow graph containing more information than needed for our performance prediction. Therefore we filter and pre-process the graph by post-order graph traversal to collect necessary node information. The nodes in the IR contain useful features such as operator name, attributes, and output shape of the operator, which after this first filtering step are converted into a suitable data format for our performance prediction. In the subsequent step, we loop through the nodes and, for each operator node, generate node features with a fixed length of 32.
Figure 2: Overview of DIPPM Architecture
The central part of the NFG is to generate an **Adjacency Matrix**\((\mathcal{A})\) and a **Node feature matrix**\((\mathcal{X})\), as expressed in algorithm 1. \(\mathcal{X}\) has the shape \([N_{op},N_{features}]\), where \(N_{op}\) is the number of operator nodes in the IR and \(N_{features}\) is the number of features. In order to create the node features \(\mathcal{F}_{n}\) for each \(node\), we first encode the node operator name into a one-hot encoding, as can be seen on line 6 of algorithm 1. We then extract the node attributes \(\mathcal{F}_{attr}\) and the output shape \(\mathcal{F}_{shape}\) into vectors. Finally, we perform vector concatenation to generate \(\mathcal{F}_{n}\) for the node. We repeat this operation for each node to create the graph \(\mathcal{G}\). From \(\mathcal{G}\), we extract \(\mathcal{A}\) and \(\mathcal{X}\), which are passed to the main part of our model, the Performance Model Graph Network Structure.
```
1:  function CreateGraph(IR)                      ▷ IR from Relay Parser Component
2:      N ← filter_and_preprocess(IR)
3:      G ← ∅                                     ▷ Create empty directed graph
4:      for each node ∈ N do                      ▷ node is an element of the node list N
5:          if node.op ∈ operators then           ▷ Check that node is an operator
6:              F_oh ← one_hot_encoder(node.op)
7:              F_attr ← ExtractAttributes(node)
8:              F_shape ← ExtractOutshape(node)
9:              F_node ← F_oh ⊕ F_attr ⊕ F_shape
10:             G.add_node(node.parent, node.id, F_node)   ▷ Nodes are added in sequence
11:         end if
12:     end for
13:     A ← GetAdjacencyMatrix(G)
14:     X ← GetNodeFeatureMatrix(G)
15:     return A, X
16: end function
```
**Algorithm 1** Algorithm to convert DL model IR into a graph with node features
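For illustration, Algorithm 1 can be rendered with NetworkX roughly as follows; the node interface (`.op`, `.id`, `.parents`) and the `node_features` callable producing the length-32 vector \(\mathcal{F}_{node}\) are assumptions standing in for the relay-specific details, not the authors' implementation.

```
# An illustrative NetworkX rendering of Algorithm 1.
import networkx as nx
import numpy as np

def create_graph(nodes, operators, node_features):
    # nodes: post-order list of filtered IR nodes; node_features(node) returns
    # the length-32 vector F_node = F_oh ⊕ F_attr ⊕ F_shape.
    G = nx.DiGraph()
    for node in nodes:
        if node.op in operators:
            G.add_node(node.id, x=node_features(node))
            for parent in node.parents:   # assumes parents are operator nodes
                G.add_edge(parent, node.id)   # processed earlier (post-order)
    A = nx.adjacency_matrix(G)                 # scipy sparse adjacency matrix
    X = np.stack([G.nodes[v]["x"] for v in G.nodes])   # (N_op, 32) feature matrix
    return A, X
```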
### Static Feature Generator
The Static Feature Generator (SFG) takes the IR from the relay parser component and generates static features \(\mathcal{F}_{s}\) for a given DL model and passes them into the graph network structure.
For this experiment, we limited ourselves to five static features. First, we calculate \(\mathcal{F}_{mac}\), the total number of multiply-accumulate operations (MACs) of the given DL model. We used the TVM relay analysis API to calculate the total MACs, but it is limited to the following operators (in TVM notation): Conv2D, Conv2D transpose, dense, and batch matmul. Then we count the total numbers of convolution \(F_{Tconv}\), dense \(F_{Tdense}\), and ReLU \(F_{Trelu}\) operators in the IR. We include the batch size \(F_{batch}\) as one of the static features because it enables prediction for various batch sizes of a given model. Finally,
we concatenate all the features into a vector \(\mathcal{F}_{s}\) as expressed in equation 1. The feature set \(\mathcal{F}_{s}\) is subsequently passed to the following graph network structure.
\[\mathcal{F}_{s}\leftarrow\mathcal{F}_{mac}\oplus\mathcal{F}_{batch}\oplus \mathcal{F}_{Tconv}\oplus\mathcal{F}_{Tdense}\oplus\mathcal{F}_{Trelu} \tag{1}\]
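A sketch of this extraction using TVM's relay analysis API is shown below; the paper states only that this API is used for the MAC count, so the operator-counting visitor and the exact call pattern are our own assumptions.

```
# Assembling the static feature vector F_s of eq. (1) from a relay module.
import numpy as np
from tvm import relay

def static_features(mod, batch_size):
    macs = relay.analysis.get_total_mac_number(mod["main"])   # F_mac
    counts = {"nn.conv2d": 0, "nn.dense": 0, "nn.relu": 0}
    def visit(expr):                       # count operator calls in the IR
        if isinstance(expr, relay.Call):
            name = getattr(expr.op, "name", None)
            if name in counts:
                counts[name] += 1
    relay.analysis.post_order_visit(mod["main"], visit)
    return np.array([macs, batch_size, counts["nn.conv2d"],
                     counts["nn.dense"], counts["nn.relu"]], dtype=np.float32)
```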
### Performance Model Graph Network Structure (PMGNS)
The PMGNS takes the node feature matrix (\(\mathcal{X}\)), the adjacency matrix (\(\mathcal{A}\)) from the Node Feature Generator component, and the feature set (\(\mathcal{F}_{s}\)) from the Static feature generator and predicts the given input DL model's memory, latency, and energy, as shown in Fig. 2.
The PMGNS must be trained before prediction, as explained in section 4. The core idea of the PMGNS is to generate the node embedding \(z\) from \(\mathcal{X}\) and \(\mathcal{A}\), then perform vector concatenation of \(z\) with \(\mathcal{F}_{s}\), and finally pass the concatenated vector into a fully connected layer for prediction, as shown in Fig. 2. In order to generate \(z\), we used the graphSAGE algorithm suggested by Hamilton et al. [7] because of its inductive node embedding, which means it can generate embeddings for unseen nodes without retraining.
We discussed the generation of features for each node in section 3.2. The graphSAGE algorithm converts these node features into a node embedding \(z\) which is more amenable to model training. The PMGNS contains three sequential graphSAGE blocks and three sequential Fully Connected (FC) blocks, as shown in Fig. 2. At the end of the final graphSAGE block, we obtain the generalized node embedding of the given \(\mathcal{X}\) and \(\mathcal{A}\), which we concatenate with \(\mathcal{F}_{s}\). We then pass the concatenated vector into the FC blocks to predict the memory (MB), latency (ms), and energy (J).
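A minimal PyTorch Geometric sketch of this structure follows. The hidden width (512) and dropout (0.05) are taken from Table 3; the mean-pooling readout and the exact layer arrangement are illustrative assumptions rather than the authors' implementation.

```
# A sketch of the PMGNS: 3 graphSAGE blocks -> graph embedding z -> concat F_s
# -> 3 fully connected layers -> (memory, latency, energy).
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch_geometric.nn import SAGEConv, global_mean_pool

class PMGNS(nn.Module):
    def __init__(self, in_dim=32, hidden=512, static_dim=5):
        super().__init__()
        self.conv1 = SAGEConv(in_dim, hidden)
        self.conv2 = SAGEConv(hidden, hidden)
        self.conv3 = SAGEConv(hidden, hidden)
        self.head = nn.Sequential(
            nn.Linear(hidden + static_dim, hidden), nn.ReLU(), nn.Dropout(0.05),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3))                       # memory, latency, energy

    def forward(self, x, edge_index, batch, f_s):
        h = F.relu(self.conv1(x, edge_index))
        h = F.relu(self.conv2(h, edge_index))
        h = F.relu(self.conv3(h, edge_index))
        z = global_mean_pool(h, batch)                  # node embeddings -> z
        return self.head(torch.cat([z, f_s], dim=1))
```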
### MIG Predictor
The MIG predictor takes the memory prediction from PMGNS and predicts the appropriate MIG profile for a given DL model, as shown in Fig. 2.
As mentioned in the introduction, the Multi-Instance GPU (MIG) technology allows splitting an A100 GPU into multiple instances so that multiple applications can use the GPU simultaneously. The different instances differ in their compute capability and, most importantly, in the maximum amount of memory they are allowed to use. The four MIG profiles of the A100 GPU that we consider here are 1g.5gb, 2g.10gb, 3g.20gb, and 7g.40gb, where the number in front of "gb" denotes the maximum amount of memory in GB that an application can use on that instance. For example, the maximum memory limit of 1g.5gb is 5GB, and that of 7g.40gb is 40GB.
For a given input DL model, the PMGNS predicts memory for the 7g.40gb MIG profile, i.e. the full GPU. We found that this prediction can be used as a pessimistic value to guide the choice of MIG profile. Fig. 3 shows manual memory-consumption measurements of the same DL model inference on different profiles. The results show no significant difference in the memory allocation of the DL models across the
different MIG profiles even though the consumption slightly increases with the capacity of the MIG profile. The memory consumption is always the highest when running on the 7g.40gb MIG profile.
As mentioned, the PMGNS predicts memory for 7g.40gb, so we claim that the predicted memory is an upper bound. We then perform a rule-based prediction of the MIG profile for the given input DL model, as shown in equation (2), where \(\alpha\) is the memory predicted by the PMGNS.
\[\text{MIG}(\alpha)=\begin{cases}\text{1g.5gb,}\;\;\text{if}\;\;0gb<\alpha< \text{5gb}\\ \text{2g.10gb,}\;\text{if}\;\;\text{5gb}<\alpha<\text{10gb}\\ \text{3g.20gb,}\;\text{if}\;\;\text{10gb}<\alpha<\text{20gb}\\ \text{7g.40gb,}\;\;\text{if}\;\;\text{20gb}<\alpha<\text{40gb}\\ \text{None,}\;\;\;\;\text{otherwise}\end{cases} \tag{2}\]
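Equation (2) translates directly into a small rule-based function; in this sketch, the PMGNS output \(\alpha\) is assumed to be in MB, matching the dataset units.

```
# Rule-based MIG profile selection from the predicted memory (eq. (2)).
def predict_mig(alpha_mb):
    gb = alpha_mb / 1024
    if gb <= 0 or gb >= 40:
        return None
    if gb < 5:  return "1g.5gb"
    if gb < 10: return "2g.10gb"
    if gb < 20: return "3g.20gb"
    return "7g.40gb"
```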
## 4 Experiments & Results
### The DIPPM Dataset
We constructed a graph-level multi-regression dataset containing 10,508 DL models from different model families to train and evaluate our DIPPM. The dataset distribution is shown in Table 2. Why did we need to create our own dataset? To the best of our knowledge, no previous performance-prediction dataset captures memory consumption, inference latency, and energy consumption for a wide range of DL models on the A100 GPU.
Our dataset consists of DL models represented in graph structure, as generated by the Relay parser described in section 3.1. Each data point consists of
Figure 3: MIG Profile comparison of three different DL models memory consumption on A100 GPU. We used batch size 16 for VGG16 and Densenet121 model and batch size 8 for Swin base model.
four variables: \(\mathcal{X}\), \(\mathcal{A}\), \(\mathcal{Y}\), and \(\mathcal{F}_{s}\), where \(\mathcal{X}\) and \(\mathcal{A}\) are the node feature matrix and adjacency matrix, respectively, as discussed in section 3.2, and \(\mathcal{F}_{s}\) are the static features of the DL model as discussed in section 3.3. We used the Nvidia Management Library4 and the CUDA toolkit5 to measure the energy, memory, and inference latency of each model in the dataset. For each model, we ran inference five times to warm up the architecture, then ran inference 30 times and took the arithmetic mean of those 30 values to derive \(\mathcal{Y}\), where \(\mathcal{Y}\) consists of the inference latency (ms), memory usage (MB), and energy (J) of the given DL model on the A100 GPU.
Footnote 4: [https://developer.nvidia.com/nvidia-management-library-nvml](https://developer.nvidia.com/nvidia-management-library-nvml)
Footnote 5: [https://developer.nvidia.com/cuda-toolkit](https://developer.nvidia.com/cuda-toolkit)
We used a full A100 40GB GPU (equivalent to the 7g.40gb MIG profile) to collect all the metrics.
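A sketch of such a measurement loop with `pynvml` is given below. The paper does not specify the instrumentation at this level of detail, so the NVML calls shown and the energy estimate from instantaneous power draw are our assumptions.

```
# Measure mean inference latency (ms), energy (J), and memory (MB) on GPU 0.
import time
import torch
import pynvml

def measure(model, x, warmup=5, runs=30):
    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)
    model, x = model.eval().cuda(), x.cuda()
    lat_ms, energy_j = [], []
    with torch.no_grad():
        for _ in range(warmup):                          # 5 warm-up runs
            model(x)
        torch.cuda.synchronize()
        for _ in range(runs):                            # 30 measured runs
            t0 = time.perf_counter()
            model(x)
            torch.cuda.synchronize()
            dt = time.perf_counter() - t0
            lat_ms.append(dt * 1e3)
            watts = pynvml.nvmlDeviceGetPowerUsage(handle) / 1e3  # mW -> W
            energy_j.append(watts * dt)      # instantaneous-power estimate
    mem_mb = pynvml.nvmlDeviceGetMemoryInfo(handle).used / 2**20
    return sum(lat_ms) / runs, sum(energy_j) / runs, mem_mb
```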
### Environment setup
We used an HPC cluster at the Jülich research centre in Germany, called JUWELS Booster, for our experiments\({}^{6}\). It is equipped with 936 nodes, each with AMD EPYC 7402 processors, 2 sockets per node, 24 cores per socket, 512 GB DDR4-3200 RAM, and 4 NVIDIA A100 Tensor Core GPUs with 40 GB HBM.
Footnote 6: [https://apps.fz-juelich.de/jsc/hps/juwels/booster-overview.html](https://apps.fz-juelich.de/jsc/hps/juwels/booster-overview.html)
The main software packages used in the experiments are: Python 3.10, CUDA 11.7 torch 1.13.1, torch-geometric 2.2.0, torch-scatter 2.1.0, and torch-sparse 0.6.16.
### Evaluation
The Performance Model Graph Network Structure is the main component of DIPPM, and we used the PyTorch Geometric library to create our model, as
\begin{table}
\begin{tabular}{l|c|c} \hline
**Model Family** & **\# of Graphs** & **Percentage (\%)** \\ \hline \hline Efficientnet & 1729 & 16.45 \\ \hline Mnasnet & 1001 & 9.53 \\ \hline Mobilenet & 1591 & 15.14 \\ \hline Resnet & 1152 & 10.96 \\ \hline Vgg & 1536 & 14.62 \\ \hline Swin & 547 & 5.21 \\ \hline Vit & 520 & 4.95 \\ \hline Densenet & 768 & 7.31 \\ \hline Visformer & 768 & 7.31 \\ \hline Poolformer & 896 & 8.53 \\ \hline
**Total** & 10 508 & 100\% \\ \hline \end{tabular}
\end{table}
Table 2: DIPPM Graph dataset distribution
shown in Fig. 2. We split our constructed dataset randomly into three parts: a training set (70%), a validation set (15%), and a test set (15%).
In order to validate that graphSAGE performs better than other GNN algorithms and plain MLP, we compared graphSAGE with the following algorithms: GAT [19], GCN [10], GIN [21], and finally plain MLP without a GNN. Table 3 summarizes the settings used. The learning rate was determined using a learning-rate finder as suggested by Smith [17]. The Huber loss function achieved higher accuracy than mean squared error, which is why we chose it.
For the initial experiment, we trained for 10 epochs and used the Mean Absolute Percentage Error (MAPE) as the accuracy metric to validate DIPPM. A MAPE value close to zero indicates good regression performance. Table 4 shows that graphSAGE gives a lower MAPE value on all of the training, validation, and test datasets. Without a GNN, the MLP gives a MAPE of 0.366. With graphSAGE, the MAPE is 0.160 on the test dataset, which is a significant improvement for a multi-regression problem. We conclude that graphSAGE outperforms the other GNN algorithms and the MLP because of its inductive learning, as discussed in section 3.4.
After this encouraging result, we increased the number of epochs for training our DIPPM with graphSAGE to increase the prediction accuracy. After 500 epochs, we attained a MAPE of 0.041 on the training and 0.023 on the validation dataset. In the end, we attained 1.9% MAPE on the test dataset. Some of the DIPPM predictions on the test dataset are shown in Fig. 4.
\begin{table}
\begin{tabular}{l|c c c} \hline
**Model** & **Training** & **Validation** & **Test** \\ \hline \hline GAT & 0.497 & 0.379 & 0.367 \\ \hline GCN & 0.212 & 0.178 & 0.175 \\ \hline GIN & 0.488 & 0.394 & 0.382 \\ \hline MLP & 0.371 & 0.387 & 0.366 \\ \hline
**(Ours) GraphSAGE** & **0.182** & **0.159** & **0.160** \\ \hline \end{tabular}
\end{table}
Table 4: Comparison of different GNN algorithms and MLP with graphSAGE. We trained all the models for 10 epochs and used the Mean Absolute Percentage Error for validation. The results indicate that DIPPM with graphSAGE performs significantly better than the other variants.
\begin{table}
\begin{tabular}{l|l} \hline
**Setting** & **Value** \\ \hline \hline Dataset partition & Train (70\%) / Validation (15\%) / Test (15\%) \\ \hline Hidden layer size & 512 \\ \hline Dropout probability & 0.05 \\ \hline Optimizer & Adam \\ \hline Learning rate & \(2.754\cdot 10^{-5}\) \\ \hline Loss function & Huber \\ \hline \end{tabular}
\end{table}
Table 3: Settings in GNN comparison.
### Prediction of MIG Profiles
In order to verify the MIG profile prediction for a given DL model, we compared the actual MIG profile with the MIG profile predicted by DIPPM, as shown in Table 5. To determine the actual suitable MIG profile, we divide the actual memory consumption by the maximum memory limit of each MIG profile; the higher the value, the more appropriate the profile for the given DL model.
For example, the predicted memory consumption for densenet121 at batch size 8 is 2865 MB. The actual memory consumption for the 7g.40gb MIG profile is 3272 MB. You can see that our DIPPM correctly predicted the MIG profile 1g.5gb for densenet121.
It is interesting to note that the densenet121 models are from our test dataset and the swin_base_patch4 model is not in our DIPPM dataset, although a similar swin base model family was used to train DIPPM. The convnext models are completely unseen by our DIPPM, but it still predicts the MIG profile correctly.
### DIPPM Usability aspects
DIPPM takes basic parameters such as the framework, model path, batch size, input size, and finally the device type. As of now, we only consider the A100 GPU; we are working to extend DIPPM to other hardware platforms. With a simple Python API call, DIPPM predicts memory, latency, energy, and the MIG profile for the given model, as can be seen in Fig. 5.
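A hypothetical sketch of such a call is shown below; the `dippm` module name and the argument names are assumptions based on the parameters listed above and the description of Fig. 5, not the exact published interface.

```python
# Hypothetical DIPPM API call (names are assumed, not the exact interface).
from dippm import predict  # assumed package entry point

metrics = predict(
    framework="pytorch",          # source framework of the model
    model="models/vgg16.pt",      # path to the serialized DL model
    batch_size=8,
    input_size=(3, 224, 224),
    device="A100",                # currently the only supported device
)
# e.g. {'memory_mb': ..., 'latency_ms': ..., 'energy_j': ..., 'mig': '1g.5gb'}
print(metrics)
```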
Figure 4: Comparison of actual values with DIPPM-predicted values on the test dataset. Results show that DIPPM predictions are close to the actual values.
## 5 Conclusion
We have developed a novel Deep Learning (DL) Inference Performance Predictive Model (DIPPM) to predict the inference latency, energy, and memory consumption of a given input DL model on an A100 GPU without running it. Furthermore, we devised an algorithm to select the appropriate MIG profile from the memory consumption predicted by DIPPM.
The model includes a methodology to convert DL models represented in various frameworks into a generalized graph structure for performance prediction. DIPPM can thus help developers build efficient DL models that utilize the underlying GPU effectively. Furthermore, we constructed and open-sourced a multi-regression graph dataset containing 10,508 DL models for performance prediction; it can also be used to evaluate other graph-based multi-regression GNN algorithms. Finally, we achieved a MAPE of 1.89% on our dataset.
Figure 5: Sample code using DIPPM for performance prediction of a VGG16 DL model developed in the PyTorch framework.
\begin{table}
\begin{tabular}{l|c|c c|c c c c c} \hline **Model** & **Batch size** & **Predicted MIG** & **Predicted Mem (MB)** & **Actual Mem (MB)** & **1g.5gb** & **2g.10gb** & **3g.20gb** & **7g.40gb** \\ \hline \hline densenet121 & 8 & 1g.5gb & 2865 & 3272 & **58\%** & 30\% & 15\% & 8\% \\ \hline densenet121 & 32 & 2g.10gb & 5952 & 6294 & & **60\%** & 30\% & 16\% \\ \hline swin\_base\_patch4 & 2 & 1g.5gb & 2873 & 2944 & **52\%** & 27\% & 14\% & 7\% \\ \hline swin\_base\_patch4 & 16 & 2g.10gb & 6736 & 6156 & & **59\%** & 30\% & 15\% \\ \hline convnext\_base & 4 & 1g.5gb & 4771 & 1652 & **61\%** & 31\% & 16\% & 8\% \\ \hline convnext\_base & 128 & 7g.40gb & 26439 & 30996 & & & & **77\%** \\ \hline \end{tabular}
\end{table}
Table 5: DIPPM MIG profile prediction for seen and unseen DL model architectures. (densenet*: seen, swin*: partially seen, convnext*: unseen).
## Acknowledgment
This work has been funded by EuroHPC JU under contract number 955513 and by the Luxembourg National Research Fund (FNR) under contract number 15092355.
|
2306.04286 | A Mask Free Neural Network for Monaural Speech Enhancement | In speech enhancement, the lack of clear structural characteristics in the
target speech phase requires the use of conservative and cumbersome network
frameworks. It seems difficult to achieve competitive performance using direct
methods and simple network architectures. However, we propose the MFNet, a
direct and simple network that can not only map speech but also map reverse
noise. This network is constructed by stacking global local former blocks
(GLFBs), which combine the advantages of Mobileblock for global processing and
Metaformer architecture for local interaction. Our experimental results
demonstrate that our network using mapping method outperforms masking methods,
and direct mapping of reverse noise is the optimal solution in strong noise
environments. In a horizontal comparison on the 2020 Deep Noise Suppression
(DNS) challenge test set without reverberation, to the best of our knowledge,
MFNet is the current state-of-the-art (SOTA) mapping model. | Liang Liu, Haixin Guan, Jinlong Ma, Wei Dai, Guangyong Wang, Shaowei Ding | 2023-06-07T09:39:07Z | http://arxiv.org/abs/2306.04286v1 | # A Mask Free Neural Network for Monaural Speech Enhancement
###### Abstract
In speech enhancement, the lack of clear structural characteristics in the target speech phase requires the use of conservative and cumbersome network frameworks. It seems difficult to achieve competitive performance using direct methods and simple network architectures. However, we propose the MFNet, a direct and simple network that can not only map speech but also map reverse noise. This network is constructed by stacking global local former blocks (GLFBs), which combine the advantages of Mobileblock for global processing and Metaformer architecture for local interaction. Our experimental results demonstrate that our network using mapping method outperforms masking methods, and direct mapping of reverse noise is the optimal solution in strong noise environments. In a horizontal comparison on the 2020 Deep Noise Suppression (DNS) challenge test set without reverberation, to the best of our knowledge, MFNet is the current state-of-the-art (SOTA) mapping model.
Liang LIU\({}^{1}\), Haixin GUAN\({}^{1,2}\), Jinlong MA\({}^{1}\), Wei DAI\({}^{1}\), Guangyong WANG\({}^{1}\), Shaowei DING\({}^{1}\)
\({}^{1}\)Unisound AI Technology Co. Ltd, China
\({}^{2}\)University of Science and Technology of China, China
{liuliiang, guanhaixin, majinlong, daiwei, wangguangyong, dingshaowei}@unisound.com
**Index Terms**: monaural speech enhancement, deep learning, mask-free
## 1 Introduction
With the development of deep learning, speech enhancement (SE) techniques have achieved significant progress. Typically, they can be divided into two categories: time-domain methods [1, 2] and T-F domain methods [3, 4]. In particular, the latter have obtained better performance in the DNS Challenge [5, 6, 7, 8], one of the most influential competitions in the field of SE. Therefore, the goal of this study is to design an effective T-F domain system for single-channel speech enhancement.
In T-F domain speech enhancement, directly learning the T-F spectrum values (the mapping method [9, 10]) and learning a T-F mask (the masking method [4, 11]) are two classic approaches. Mapping the magnitude [9] or mapping the real and imaginary parts is a direct and radical approach, but it appears to be a difficult problem; GCRN [10], for instance, requires two decoders to map the real and imaginary parts separately. The masking method simplifies the problem by starting from a prior on the noisy speech components. The mask is estimated either in the rectangular coordinate system (DPCRN [12], FullSubNet [13]) or in the polar coordinate system (DCCRN [4], DCUNet [14]). On this basis, DeepFilterNet [15, 16], which applies nearby filtering and summation, can slightly compensate for the theoretical defects of the masking method.
As the field develops, works combining the two (referred to as decoupling methods [17, 18, 19, 20]) have become increasingly popular. For example, PHASEN [17] decouples the task into magnitude masking and phase mapping, while TaylorSENet [18] further generalizes the decoupling method into two parts: magnitude estimation and complex estimation. CTSNet [19] attempts to decouple the mapping method, first mapping the magnitude spectrum and then mapping the complex spectrum. Moreover, researchers have pushed the limits of complexity by cascading multiple stages of the decoupling approach, each leveraging a large network (referred to as cascading networks [21, 22, 23, 24]). As a result, the total computation and number of parameters in the network grow substantially with each additional stage. Although this approach can lead to improved performance and enable the network to learn more intricate features, it is crucial to consider the trade-off between performance and computational cost.
Through the above observations, we have identified some uncertainties and put forward hypotheses:
* Given the current trend in single-channel SE, it appears challenging to attain competitive performance using straightforward techniques and basic network architectures.
* Past studies [14, 25] disagree on whether masking or mapping performs better. It seems that, with reasonable optimization of the network, the more direct mapping method becomes the better choice.
* The decoupling method adopts a multi-step estimation strategy to address the phase estimation problem, which makes the overall pipeline more complex. If phase estimation can be handled implicitly, the network structure can be greatly simplified.
According to the review [26], all current training objectives can be collectively referred to as SA methods. For example, the masking method can be expressed as \(L_{SA-masking}=||S-\hat{M}\cdot Y||\), the mapping method can be expressed as \(L_{SA-mapping}=||S_{r}-\hat{S}_{r}||+||S_{i}-\hat{S}_{i}||\), and the decoupling method can be expressed as \(L_{SA-decoupling}=||\,|S|-\hat{M}\cdot|Y|\,||+||S_{r,i}-\hat{S}_{r,i}||\). The cascading method can be expressed as \(L_{SA-cascading}=L_{stage1}+L_{stage2}\). In the above equations, \(L\) represents the loss function, \(S\) the target speech signal, \(\hat{S}\) the predicted speech signal, \(\hat{M}\) the predicted mask, and \(Y\) the noisy speech signal; the subscripts \(r\) and \(i\) denote the real and imaginary parts, respectively. Both \(stage1\) and \(stage2\) can be realized using masking, mapping, or decoupling. By observation, we believe that the above expressions can be unified in an intuitive way: \(L_{SA-intuitive}=||S-\hat{S}||\). Based on this premise, we propose a simple single-stage neural network for speech enhancement that utilizes short-time discrete cosine transform (STDCT) [27] features and does not require a mask. This network has the following characteristics:
* We have designed an efficient and lightweight module called the GLFB, based on the structural features of the MetaFormer architecture [28], the MobileNet block [29], and design experience from NAFNet [30]. The module prototype follows MetaFormer, with global modeling accomplished using depth-wise separable convolution, a gating mechanism, and a channel attention mechanism, while local modeling is done by point convolution.
* Our network structure is simple, consisting of three parts (encoder, decoder, and bottleneck), each composed of GLFBs. The encoder utilizes small convolution kernels for down-sampling, while the decoder employs the pixel-shuffle method for up-sampling. Skip connections are established by direct summation.
* Our proposed network uses the real-valued STDCT spectrum as its input feature. Unlike STFT features, which are complex-valued, STDCT features are represented solely with real values, resulting in a more uniform representation. The network is designed to perform speech enhancement without learning a mask, making it capable of mapping both speech and reverse noise. We name the network MFNet.
Our experimental results demonstrate that our proposed network performs better with the mapping approach than with the masking approach. We also discovered an interesting result: our network achieves better performance in strong noise environments when directly learning the reverse noise than when mapping the speech. On the DNS 2020 test set without reverberation, our proposed model achieves highly competitive performance; to the best of our knowledge, it is the best-performing mapping model on this test set.
The rest of this paper is organized as follows. Section 2 introduces the proposed method, Section 3 describes the experiments and results, and Section 4 concludes the paper.
## 2 Proposed method
### STDCT input feature
We utilize the STDCT spectrum as our input feature. It is a real-valued transformation that preserves all the information in the signal and contains implicit phase information [31, 32]. This eliminates the need to design a complex neural network to estimate the explicit phase of the signal, which can be challenging and computationally expensive. Additionally, using the STDCT spectrum removes the need to estimate the complex-valued mask required by some other audio processing techniques.
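A minimal sketch of an STDCT front-end is given below, assuming the framing used later in Section 3.2 (square-root Hann window of 320 samples, 10 ms hop at 16 kHz); the implementation details are our own, not the authors' exact code.

```python
# Real-valued STDCT features from a waveform.
import numpy as np
from scipy.fft import dct

def stdct(signal: np.ndarray, win_len: int = 320, hop: int = 160) -> np.ndarray:
    window = np.sqrt(np.hanning(win_len))     # square-root Hann window
    n_frames = 1 + (len(signal) - win_len) // hop
    frames = np.stack([signal[i * hop:i * hop + win_len] * window
                       for i in range(n_frames)])
    # Type-II DCT per frame yields a real-valued T-F representation.
    return dct(frames, type=2, norm="ortho", axis=-1)

spec = stdct(np.random.randn(16000))  # 1 s of 16 kHz audio -> (n_frames, 320)
```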
### Model architecture
We adopt a UNet-shaped network architecture because it is suitable for dense prediction tasks at the T-F bin level. The network structure is designed to be as parsimonious as possible and its modules reusable, taking inspiration from the design concept of NAFNet. The model has three parts: encoder, bottleneck layer, and decoder. In terms of base modules, the network contains only three: the projection layer, the GLFB, and the sampling (down or up) module. The encoder contains projection, GLFB, and down-sampling modules; the bottleneck layer contains only GLFBs; the decoder contains projection, GLFB, and up-sampling modules. The projection layer on the input side projects the STDCT features into a high-dimensional space, keeping the size of the feature map unchanged while increasing the number of channels from one to the model's channel setting \(n\). The number of channels in the feature map is doubled at each down-sampling layer and halved at each up-sampling layer, giving an overall channel progression of \([n,2n,4n,8n,16n,8n,4n,2n,n]\). Notably, no activation function is used anywhere in the network; the performance of the network is mainly determined by the stacking of GLFBs. The features extracted in the encoder stage are added directly to the decoder stage instead of the usual concatenation, which reduces the number of parameters in the decoder stage by reducing the number of convolution kernel groups. The numbers of blocks in the encoder, bottleneck, and decoder are denoted \([d1,d2,d3,d4]\), \([m]\), and \([u1,u2,u3,u4]\), respectively. The overall network structure is shown in Figure 1.
### Down-sampling, up-sampling and projection layer
In UNet-shaped networks, down-sampling and up-sampling are typically accomplished through convolution and transposed convolution, respectively. Some researchers have utilized larger convolution kernels to improve model performance, resulting in larger models and increased computational effort. In contrast, MFNet employs a convolution kernel of size 2 with a stride of 2 for down-sampling, while up-sampling is achieved using the pixel-shuffle operation to avoid the checkerboard artefacts that can occur with transposed convolution. The projection layers use \(3\times 3\) convolutions: the input projection extracts features from the single-channel input and projects them into high-dimensional features, and the output projection projects the features back into a single-channel output.

Figure 1: Architecture of the proposed MFNet
### Global local former block
The GLFB is the crucial module in MFNet. It draws design inspiration from the Transformer architecture [33], which comprises a multi-head attention module and a feed-forward network module. However, vanilla self-attention has computational complexity quadratic in the size of the feature map, making it unsuitable for mobile or resource-constrained devices. To tackle this problem, we adopt a modified MobileNet block as a replacement for the multi-head attention module, inspired by research on MetaFormer blocks. This avoids the \(O(n^{2})\) dependence on token length while retaining global and local modeling capabilities similar to a Transformer. The global modeling part is done by depth-wise separable convolution, a simple gating mechanism, and the channel attention mechanism, and the local modeling part is done by point convolution. The feed-forward network module is slightly modified by replacing the activation layer with a gate layer. The details are shown in Figure 2.
The simple channel attention module used in our model is the same as the one used in the MobileNet block. DWConv denotes depth-wise separable convolution, and Point Conv denotes point-wise convolution. In Fig. 2(c), the module includes four Point Convs: the first and third double the number of input channels, while the second and fourth keep the number of channels unchanged. The gate mechanism halves the number of channels.
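To make the block concrete, the following condensed PyTorch sketch assembles the pieces described above; channel counts follow the text, while any unstated details (e.g. the depth-wise kernel size) are our assumptions rather than the authors' exact implementation.

```python
# A condensed sketch of the GLFB (Section 2.4); no activation functions,
# only gates, consistent with the text.
import torch
import torch.nn as nn

class SimpleGate(nn.Module):
    def forward(self, x):
        a, b = x.chunk(2, dim=1)  # halve the channel count
        return a * b

class GLFB(nn.Module):
    def __init__(self, c: int):
        super().__init__()
        # Attention-replacement branch: point conv (local), depth-wise conv,
        # gate, and simple channel attention (global).
        self.pw1 = nn.Conv2d(c, 2 * c, 1)                       # 1st: doubles
        self.dw = nn.Conv2d(2 * c, 2 * c, 3, padding=1, groups=2 * c)
        self.gate = SimpleGate()
        self.ca = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Conv2d(c, c, 1))
        self.pw2 = nn.Conv2d(c, c, 1)                           # 2nd: maintains
        # Feed-forward branch with the activation replaced by a gate.
        self.ffn = nn.Sequential(nn.Conv2d(c, 2 * c, 1),        # 3rd: doubles
                                 SimpleGate(),
                                 nn.Conv2d(c, c, 1))            # 4th: maintains

    def forward(self, x):
        y = self.gate(self.dw(self.pw1(x)))
        y = self.pw2(y * self.ca(y))   # channel attention re-weighting
        x = x + y                      # residual around the attention branch
        return x + self.ffn(x)         # residual around the feed-forward branch
```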
### Loss function
We propose a loss function for MFNet with two components. The first is the mean squared error (MSE) loss on the absolute values of the STDCT spectrum, written as
\[Loss_{abs}=||\;|S_{STDCT}|-|\hat{S}_{STDCT}|\;||_{2}^{2}, \tag{1}\]
and the second one is the MSE loss for polar values. This part is written as
\[Loss_{polar}=||S_{STDCT}-\hat{S}_{STDCT}||_{2}^{2}. \tag{2}\]
A hyper-parameter \(\gamma\) adjusts the relative contributions of the absolute MSE and the polar MSE. The loss function of MFNet is written as
\[Loss_{MFNet}=\gamma\cdot Loss_{abs}+(1-\gamma)\cdot Loss_{polar}. \tag{3}\]
In these formulas, \(S\) represents the target speech signal and \(\hat{S}\) the speech signal predicted by the network.
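The loss can be transcribed directly into PyTorch as follows, with \(\gamma=0.5\) as used in Section 3.2; the tensors hold (batched) STDCT spectra of the target and predicted speech.

```python
# A direct transcription of Eqs. (1)-(3).
import torch

def mfnet_loss(s_hat: torch.Tensor, s: torch.Tensor, gamma: float = 0.5):
    loss_abs = torch.mean((s.abs() - s_hat.abs()) ** 2)    # Eq. (1)
    loss_polar = torch.mean((s - s_hat) ** 2)              # Eq. (2)
    return gamma * loss_abs + (1.0 - gamma) * loss_polar   # Eq. (3)
```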
## 3 Experiments and results
### Datasets
In the experiment, we used data from two datasets.
**DNS-Challenge**. The Interspeech 2020 DNS-Challenge corpus [5] covers over 500 hours of clean clips from 2150 speakers and over 180 hours of noise clips. For model evaluation, it provides a non-blind validation set with two categories, with and without reverberation, each containing 150 noisy-clean pairs. Following the scripts provided by the organizer, we generate 3000 hours of data for training, with SNRs randomly ranging from -3 dB to 15 dB. To ensure fairness in the experiments, we used the official scripts to generate the data and did not use any data augmentation techniques.
**TIMIT and NOISEX-92**. The TIMIT [34] corpus is selected as the clean speech for a second test set, with NOISEX-92 [35], a real-life recorded noise dataset, as the test noise. We use the image-source method to generate simulated RIRs as the test RIR set. The room size is set to 5 m \(\times\) 4 m \(\times\) 3.5 m, with T60 ranging from 0.1 s to 0.5 s in steps of 0.1 s. The locations of the microphone and speaker are random within the room, with heights ranging from 1 m to 1.5 m, and the microphone-speaker distance is limited to between 0.2 m and 3 m. The SNRs are -9 dB, -6 dB, -3 dB, 0 dB, 3 dB, 6 dB, 9 dB, and 15 dB. This test set is much larger and covers a wider SNR range than the DNS 2020 test set. Its purpose is to test the generalization performance of our model and its behavior at low SNRs, and furthermore to determine whether mapping speech or mapping reverse noise is preferable in the mask-free approach.
\begin{table}
\begin{tabular}{l c c c} \hline \hline
**Model** & **PESQ** & **STOI** & **SNR** \\ \hline DCTCRN [32] & 2.80 & 0.863 & 11.55 \\ Cascade DCTCRN & 2.83 & 0.867 & 11.59 \\ TaylorSENet [18] & 2.92 & 0.877 & 11.79 \\ Ours(Masking) & 3.02 & 0.902 & 13.62 \\ Ours(Mapping Speech) & 3.02 & 0.902 & 13.72 \\ Ours(Mapping Reverse Noise) & **3.05** & **0.904** & **13.93** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Ablation study of mask-free method
Figure 2: Architecture of the GLFB
### Model setup
All waveforms are sampled at 16 kHz. We use the square root of a Hanning window of size 320 with a hop of 10 ms. The optimizer is AdamW with an initial learning rate of 0.0034. We use cosine annealing combined with warmup, reaching the maximum learning rate within the first 5 epochs. The number of channels of the network is 16 and the hyper-parameter \(\gamma\) is set to 0.5. The numbers of blocks in the encoder, bottleneck, and decoder are \([d1=1,d2=1,d3=8,d4=4],[m=6],[u1=1,u2=1,u3=1,u4=1]\); through this stacking of GLFBs, our network is asymmetric. Due to space limitations we omit the corresponding ablation, but our experiments indicate that the encoder is more important than the decoder, which is why we stack more GLFBs in the encoder stage.
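One way to realize this warmup-plus-cosine schedule is sketched below; the exact curve used by the authors is not specified, so the linear-warmup shape here is an assumption.

```python
# Linear warmup to the peak LR over 5 epochs, then cosine annealing.
import math
import torch

def make_scheduler(optimizer, warmup_epochs: int = 5, total_epochs: int = 100):
    def lr_lambda(epoch):
        if epoch < warmup_epochs:
            return (epoch + 1) / warmup_epochs        # linear warmup
        t = (epoch - warmup_epochs) / max(1, total_epochs - warmup_epochs)
        return 0.5 * (1.0 + math.cos(math.pi * t))    # cosine annealing
    return torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)

params = [torch.nn.Parameter(torch.zeros(1))]         # stand-in for model params
optimizer = torch.optim.AdamW(params, lr=0.0034)
scheduler = make_scheduler(optimizer)
```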
### Evaluation metrics
Multiple objective metrics are adopted, including narrow-band (NB) and wide-band (WB) perceptual evaluation of speech quality (PESQ) for speech quality, short-time objective intelligibility (STOI) for intelligibility, and SI-SDR for speech distortion.
### Ablation study between mask and mask-free
In this study, we investigated the performance of mask-based versus mask-free methods. The DNS 2020 training set was used, and the synthesized TIMIT test set was used to evaluate the generalization performance of our model under low-SNR conditions and with unseen speakers. We compared our approach with DCTCRN [32], a cascaded DCTCRN, and TaylorSENet. DCTCRN is a masking speech enhancement network that uses STDCT features and won second place in a DNS challenge; TaylorSENet is a powerful decoupled-masking model. Additionally, we cascaded the DCTCRN model to enable comparison with multi-stage models. To ensure fairness, all models were trained under the same training configuration. The results are presented in Table 1.
To clarify, the masking method connects a sigmoid function to the network output and takes the Hadamard product of the noisy STDCT feature with the sigmoid-activated output. Mapping speech treats the target speech directly as the learning target of the network output. In contrast, the mapping-reverse-noise method adds the network output to the noisy STDCT feature and then treats the target speech as the learning target. The experimental results indicate that our network achieves better results with the mapping methods than with the masking method, especially when mapping reverse noise. Furthermore, our network outperforms DCTCRN, the cascaded DCTCRN, and TaylorSENet in terms of the PESQ, STOI, and SNR metrics.
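Written out explicitly, the three output formulations compared in Table 1 differ only in how the raw network output is combined with the noisy input:

```python
# `net_out` is the raw network output; `noisy` is the noisy STDCT input.
import torch

def masking(net_out, noisy):
    return torch.sigmoid(net_out) * noisy   # Hadamard product with a mask

def mapping_speech(net_out, noisy):
    return net_out                          # target speech predicted directly

def mapping_reverse_noise(net_out, noisy):
    return noisy + net_out                  # network predicts the reverse noise
```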
Once a model has been reasonably trained and given sufficient computation, the masking approach becomes too cautious and fails to fully exploit the model's capabilities. In contrast, the more aggressive mapping method appears to be a better fit for this model. Interestingly, we observed that in highly noisy environments the model performs better when it directly learns the reverse noise.
### Comparison with the state-of-the-art methods
We evaluated the proposed SE system on the Interspeech 2020 DNS-Challenge dataset to compare it with other models; the results are presented in Table 2. Our MFNet achieves outstanding performance with a computational cost of only 6.09 GMACs/s. We also measured the real-time factor (RTF) on an Intel Xeon E5-2680 CPU, obtaining 0.236. Few mapping models have been tested on this dataset; the best is CTSNet, a decoupled-based mapping model and an improved version of TSCN [21], the winner of the 2021 ICASSP DNS Challenge, which makes CTSNet a strong competitor for comparison. To demonstrate the performance of our model, we conducted a horizontal comparison with models from all other method families within a reasonable range of computational complexity, such as DCCRN, FullSubNet, TaylorSENet, and FRCRN. The computational complexity of the FRCRN model is calculated based on our analysis of the model released at [https://modelscope.cn/models/damo/speech_frcrn_ans_cirm_16k/summary](https://modelscope.cn/models/damo/speech_frcrn_ans_cirm_16k/summary). Our proposed model is highly competitive among these recently proposed models, and MFNet outperforms the current state-of-the-art mapping network CTSNet by a significant margin. We provide processed samples at [https://github.com/ioyy900205/MFNet](https://github.com/ioyy900205/MFNet).
## 4 Conclusion
We present a novel neural network for speech enhancement, called MFNet, which directly learns a real-valued STDCT spectral mapping inspired by the intuitive definition of SA. Our network architecture consists of newly designed lightweight GLFB modules stacked into a simple yet effective single-stage structure capable of modeling both global and local information. Using the mapping method, our proposed framework outperforms the current SOTA mapping model on the DNS 2020 test set without reverberation. Overall, our experimental results show that MFNet compares favorably with SOTA models built on a variety of alternative approaches, making it a promising candidate for practical applications in speech enhancement. In the future, we plan to transform the system into a causal model to facilitate real-world deployment.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline
**Model** & **Method** & **MACs(G/s)** & **WB-PESQ** & **NB-PESQ** & **STOI** & **SI-SDR(dB)** \\ \hline
**Noisy** & & & 1.58 & 2.45 & 91.52 & 9.07 \\
**DCCRN(2020) [4]** & masking & 11.13 & - & 3.27 & - & - \\
**FullSubNet(2021) [13]** & masking & 31.35 & 2.78 & 3.31 & 96.11 & 17.29 \\
**CTSNet(2021) [19]** & decoupled-based mapping & 5.57 & 2.94 & 3.42 & 96.21 & 16.69 \\
**TaylorSENet(2022) [18]** & decoupled-based masking & 6.14 & 3.22 & 3.59 & 97.36 & 19.15 \\
**FRCRN(2022) [23]** & cascading & 241.98 & 3.23 & 3.60 & 97.69 & 19.78 \\ \hline
**MFNet** & mapping & 6.09 & **3.43** & **3.74** & **97.98** & **20.31** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Experimental results on the DNS 2020 test set w/o reverberation |
2301.03228 | Graph Neural Networks for Aerodynamic Flow Reconstruction from Sparse
Sensing | Sensing the fluid flow around an arbitrary geometry entails extrapolating
from the physical quantities perceived at its surface in order to reconstruct
the features of the surrounding fluid. This is a challenging inverse problem,
yet one that if solved could have a significant impact on many engineering
applications. The exploitation of such an inverse logic has gained interest in
recent years with the advent of widely available cheap but capable MEMS-based
sensors. When combined with novel data-driven methods, these sensors may allow
for flow reconstruction around immersed structures, benefiting applications
such as unmanned airborne/underwater vehicle path planning or control and
structural health monitoring of wind turbine blades. In this work, we train
deep reversible Graph Neural Networks (GNNs) to perform flow sensing (flow
reconstruction) around two-dimensional aerodynamic shapes: airfoils. Motivated
by recent work, which has shown that GNNs can be powerful alternatives to
mesh-based forward physics simulators, we implement a Message-Passing Neural
Network to simultaneously reconstruct both the pressure and velocity fields
surrounding simulated airfoils based on their surface pressure distributions,
whilst additionally gathering useful farfield properties in the form of context
vectors. We generate a unique dataset of Computational Fluid Dynamics
simulations by simulating random, yet meaningful combinations of input boundary
conditions and airfoil shapes. We show that despite the challenges associated
with reconstructing the flow around arbitrary airfoil geometries in high
Reynolds turbulent inflow conditions, our framework is able to generalize well
to unseen cases. | Gregory Duthé, Imad Abdallah, Sarah Barber, Eleni Chatzi | 2023-01-09T09:50:14Z | http://arxiv.org/abs/2301.03228v1 | # Graph Neural Networks for Aerodynamic Flow Reconstruction from Sparse Sensing
###### Abstract
Sensing the fluid flow around an arbitrary geometry entails extrapolating from the physical quantities perceived at its surface in order to reconstruct the features of the surrounding fluid. This is a challenging inverse problem, yet one that if solved could have a significant impact on many engineering applications. The exploitation of such an inverse logic has gained interest in recent years with the advent of widely available cheap but capable MEMS-based sensors. When combined with novel data-driven methods, these sensors may allow for flow reconstruction around immersed structures, benefiting applications such as unmanned airborne/underwater vehicle path planning or control and structural health monitoring of wind turbine blades. In this work, we train deep reversible Graph Neural Networks (GNNs) to perform flow sensing (flow reconstruction) around two-dimensional aerodynamic shapes: airfoils. Motivated by recent work, which has shown that GNNs can be powerful alternatives to mesh-based forward physics simulators, we implement a Message-Passing Neural Network to simultaneously reconstruct both the pressure and velocity fields surrounding simulated airfoils based on their surface pressure distributions, whilst additionally gathering useful farfield properties in the form of context vectors. We generate a unique dataset of Computational Fluid Dynamics simulations by simulating random, yet meaningful combinations of input boundary conditions and airfoil shapes. We show that despite the challenges associated with reconstructing the flow around arbitrary airfoil geometries in high Reynolds turbulent inflow conditions, our framework is able to generalize well to unseen cases.
## 1 Introduction
Many engineering applications stand to benefit from the ability to sense and reconstruct fluid flow features from sparse measurements originating at a structure's surface. Flow sensing could be crucial for improvements in the accuracy and resilience of wind turbine and unmanned aircraft controllers. Another possible application is monitoring of wind loaded structures (Barber et al., 2022), where the use of cheap micro-electromechanical systems (MEMS) in combination with novel methods for flow sensing could lead to robust structural health monitoring solutions. In this work, we focus on common aerodynamic structures: we aim to reconstruct the flow around 2-D airfoils. Traditionally, computing the flow around an airfoil requires approaches from Computational Fluid Dynamics (CFD), which are forward-physics simulators. In CFD, the inflow, outflow and wall boundary conditions are set, and over many iterations a solution for the discretized Navier-Stokes PDEs is reached, which then yields a pressure distribution at the airfoil surface. We aim to solve the inverse problem: given only the pressure distribution at the airfoil surface, a solution for the flow field and farfield boundary conditions is to be found. Moreover, our aim is to do so for any airfoil geometry subject to a wide variety of turbulent inflows.
Adopting the notation of Erichson et al. (2020), the problem can be described in the following manner. An airfoil equipped with \(p\) distributed barometric sensors is placed in a steady flow of air, providing surface pressure measurements \(s\in\mathbb{R}^{p}\) at multiple locations around its perimeter. The sensors sample from the surrounding flow field \(x\in\mathbb{R}^{m}\) through a measurement operator \(H\):
\[s=H(x) \tag{1}\]
The goal is to construct an estimate of the flow field \(\hat{x}\) surrounding the airfoil, by learning from training data a function \(\mathcal{F}\) that approximates the highly nonlinear inverse measurement operator \(G\) such that:
\[\mathcal{F}(s)=\hat{x}\approx x=G(s) \tag{2}\]
Meshes are an extremely useful tool, indispensable in many engineering domains and especially in CFD. Contrary to Cartesian grid representations, mesh representations offer high flexibility for irregular geometries and allow for variable spatial density. This makes them ideal for discretizing complex physical problems, where one can balance the trade-off between numerical accuracy and computational efficiency in certain regions of interest. Furthermore, meshes can also be described in terms of nodes and edges, i.e. as a graph. In this context, the flow reconstruction problem can be described as follows. A graph \(\mathcal{G}=(\mathcal{V},\mathcal{E},\mathcal{U})\) is constructed from an airfoil CFD mesh, with \(m\) fluid nodes \(\mathcal{V}_{f}\) and \(p\) airfoil boundary nodes \(\mathcal{V}_{a}\). The flow features of all \(\mathcal{V}_{f}\) are unknown, whilst the features of \(\mathcal{V}_{a}\) are known. Our aim is then to learn a graph operator \(\mathscr{F}\) that estimates the information at the fluid nodes using the information contained at the airfoil boundary nodes, the input graph-level attributes \(\mathcal{U}_{in}\) and the edges \(\mathcal{E}\):
\[\hat{\mathcal{V}}_{f}=\mathscr{F}(\mathcal{V}_{a},\mathcal{E},\mathcal{U}_{in}) \tag{3}\]
An ancillary goal is to estimate the global context \(\mathcal{U}\) of the graph, as it contains information relevant for applications. Figure 1 provides a description of the flow reconstruction problem in terms of graph learning.
From a geometric learning perspective, flow reconstruction is a challenging problem for several reasons. The first significant hurdle to overcome is the size of our graphs. We use meshes with high densities close to the airfoil in order to achieve good spatial resolution in these critical regions. Thus, our dataset contains graphs with a mean of around 55'000 nodes, which is an order of magnitude higher than previous mesh-graph learned simulation methods (Pfaff et al., 2020). Moreover, the input information is concentrated in a very localized domain of the graph: the airfoil nodes. It is difficult to propagate the necessary information to reconstruct nodes far from the airfoil with a shallow Graph Neural Network (GNN), meaning that deep GNN architectures with a large number of message-passing steps are required to push this 'information barrier' away from the airfoil nodes. However, deep GNNs go hand in hand with other issues such as large memory requirements, over-smoothing, and over-squashing.
In this work, we combine a number of existing graph-learning methods to tackle the aforementioned challenges. Our contributions may be summarized as follows:
* We combine Feature Propagation (Rossi et al., 2021) for the unknown node features with very deep Grouped Reversible GNNs (Li et al., 2021) to reconstruct flow features at the fluid nodes, whilst additionally gathering contextual farfield information.
Figure 1: Problem setup. We aim to learn the reconstruction operator \(\mathscr{F}\) which estimates properties of the fluid nodes as well as the graph context. This amounts to reconstructing a solution to the Navier-Stokes equations which satisfies the boundary conditions perceived at the airfoil surface.
* Generalization of 2D aerodynamic flow field learning with GNNs, including (1) arbitrary airfoil geometries, (2) arbitrary turbulent inflow conditions (flow velocity, turbulence intensity, and angle of attack), and (3) simultaneous flow field reconstruction and inference of contextual farfield flow information based only on sparse pressure data on the surface of the airfoil.
* Generation of a unique training dataset of OpenFOAM airfoil CFD simulations with many different geometries and inflow conditions which are parsed to graph structure and made publicly available.
* We gather qualitative and quantitative results on unseen airfoil and flow configurations, and perform a number of experiments to understand the limitations of this framework. In particular, we test and compare three different GNN layer architectures.
## 2 Related work
Machine learning methods have recently garnered interest in the fluid mechanics community (Brunton et al., 2020). Fluid-related problems are typically nonlinear and complex and generate large amounts of data, all of which are conditions under which deep learning approaches thrive. Specifically in the context of flow reconstruction from sparse measurements, several neural network approaches can be found in the literature. In an article by Erichson et al. (2020), a "Shallow Neural Network", i.e. a fully-connected network with only two hidden layers, was applied to estimate transient flows from sparse measurements. The authors trained the networks on a single specific geometrical flow configuration, for example the flow behind a cylinder, and then tested on the same configuration at different time steps. We aim to avoid this limitation, as our goal is to estimate the flow around any airfoil geometry. The authors compare their findings against commonly used proper orthogonal decomposition (POD) methods and note significant improvements in terms of the reconstruction error. This work was then extended to turbulent flow reconstruction around airfoils based on experimental data in Carter et al. (2021), where the results were compared to Particle Image Velocimetry (PIV) measurements. The viability of neural networks over other approaches was further confirmed in the work of Fukami et al. (2020), where multiple methods were pitted against each other to estimate the flow behind a cylinder and an airfoil. Again, the models were trained and tested on a single flow configuration, while also being dependent on Cartesian geometrical inputs. In Ozbay and Laizet (2022), researchers attempt to avoid this limitation by utilizing Schwarz-Christoffel mappings to sample the points at which the flow is reconstructed, thus rendering the method geometry invariant. The authors train multiple neural network architectures on a collection of transient flow simulations around randomly generated 2-D Bezier shapes at a predefined inflow Reynolds number. As inputs for the flow reconstruction, they use multiple pressure sensors on the shapes' surface as well as velocity probes in the wake. Their results indicate that, when compared to a Cartesian sampling strategy, a significant performance boost is achieved for all neural network types, especially in the vicinity of the immersed shape. While this work demonstrates robustness to various geometric configurations, it requires additional velocity sensors and is trained on a single farfield boundary condition, both of which we aim to avoid and improve upon. Another method which avoids geometrical dependency is reported in Chen et al. (2021). In this work, to which our approach most closely relates, the authors utilize a Graph Convolutional Network (GCN) (Kipf and Welling, 2016) on graphs constructed from CFD meshes of randomly generated Bezier shapes. The GCN is used to predict the flow around the shapes at a fixed laminar (Reynolds number of 10) inflow condition without using surface measurements. In our approach, we aim to reconstruct a wide variety of turbulent flows given only surface readings, a significantly less constrained problem. We also aim to characterize the global properties of the flow, similarly to Zhou et al. (2021). To our best knowledge, we are the first to attempt to simultaneously reconstruct the flow while estimating turbulent inflow parameters at large Reynolds numbers for arbitrary airfoil geometries.
The dataset that we generate to train our GNN model is similar in terms of the geometries, meshing and CFD pipeline to the work of Thuerey et al. (2020), the main differences being the chosen RANS model and the post-processing (graph parsing). Other datasets found in the literature focus only on the NACA family of airfoils (Schillaci et al., 2021).
Graph networks are based on the message-passing framework (Gilmer et al., 2017), where a node's features are updated by aggregating messages emanating from its neighbors. Many different types of message-passing schemes can be constructed, with some using attention mechanisms (Velickovic et al., 2017) and others relying on strong theoretical backgrounds (Xu et al., 2018). Graph learning
methods are increasingly being applied to a wide variety of physics problems (Sanchez-Gonzalez et al., 2018, 2020). In Pfaff et al. (2020), the authors successfully demonstrate how GNNs can learn to replicate forward mesh-based physics simulators and are able to predict the evolution of a transient solution. Motivated by these results, our approach is constructed upon the the same basic Encoding-Process-Decode network structure. However, a key difference to note is that, contrary to the next-step prediction problem, flow reconstruction has to overcome high amounts of missing information, with known features being extremely localized. To address this hurdle, we turn to graph-based feature propagation methods (Rossi et al., 2021), which is closely related to matrix completion approaches (Monti et al., 2017). Feature propagation is an effective yet computationally inexpensive method for initializing graphs with missing features. We use this method as pre-processing step, through which graphs are passed before being fed into the rest of the GNN model.
Training GNNs for very large graphs is challenging, with typical approaches tending toward minimizing the number of learnable parameters so that the problem becomes tractable (Chen et al., 2020). This often results in relatively shallow GNNs, which could adversely influence the propagation of the information contained at the airfoil nodes through sufficient extents of the graph. Making use of subgraph sampling strategies (Hamilton et al., 2017) is another possible approach, one which also allows for larger/deeper GNNs. However, these methods are not applicable in our case, as we need to feed entire graphs in one pass due to the heterogeneity in information localization. Moreover, subgraph sampling would yield additional difficulties for our ancillary goal of predicting global graph properties for farfield estimation. Recent work by Li et al. (2021) has shown that it is possible to train very deep GNNs on large graphs by making use of Grouped Reversible layers, which reduces memory requirements at the cost of extra computation. This method forms the core of the processing block of the proposed flow reconstruction GNN. Another issue which traditionally characterizes training deep GNNs on large graphs is over-squashing (Alon and Yahav, 2020). Over-squashing is a by-product of a graph's structure, where bottlenecks and tree-like structures (Topping et al., 2021) can cause the latent representation of certain nodes to be overwhelmed by the amount of information that needs to be stored. We take this into account when parsing the simulation meshes into graphs.
## 3 Dataset generation
In this section, we introduce the different elements of our data generation pipeline. In total, we generate 1120 converged simulations, which are separated into train, validation, and test datasets in an 80/10/10 split. Figure 2 depicts an illustration of this pipeline.
**Geometry selection and meshing.** In our dataset generation pipeline, airfoil shapes are drawn at random from the UIUC database of airfoils (Selig, 1996). Before passing a shape to the meshing algorithm, we carry out some additional interpolation and processing to make sure that the selected airfoil has a sufficient number of points at the leading edge as well as a properly defined trailing edge. Then, we use Gmsh (Geuzaine and Remacle, 2020) to construct an unstructured O-grid type mesh around the selected airfoil. A sizing field is set close to the airfoil in order to make sure that meshes with appropriate y+ values for the CFD wall-functions are generated. An overall sizing parameter is also set for sufficient farfield density, but is adjusted to ensure that an acceptable number of cells is created (\(<150^{\prime}000\)).

Figure 2: Illustration of the dataset generation pipeline. Airfoil shapes are selected at random from a database, then meshed and simulated in OpenFOAM with random feasible boundary conditions. In the last step, the finite-volume scheme is used to parse the simulation mesh into a graph.
**CFD simulations.** Each mesh is associated with a different inflow configuration. Three parameters control the farfield conditions: angle of attack, inflow velocity, and turbulence intensity. These parameters are drawn from probability distributions reflecting realistic atmospheric flows at Reynolds numbers ranging from \(2\cdot 10^{5}\) to \(6.5\cdot 10^{6}\), with a mean Reynolds number of around \(3\cdot 10^{6}\). This mean value is well beyond the typical laminar-to-turbulent transition threshold of around \(5\cdot 10^{5}\) (Incropera et al., 1996), which greatly increases the difficulty of the flow reconstruction problem. The farfield conditions form the global context vectors of our graphs and are estimated at inference time. We simulate the flow around the airfoils using a steady 2-D Reynolds-Averaged Navier-Stokes (RANS) CFD solver with the OpenFOAM software package (Jasak et al., 2007). For turbulence modelling, we select the K-Omega SST model (Menter and Esch, 2001) along with the standard OpenFOAM wall-functions for boundary layer treatment. Only sufficiently converged CFD simulations with pressure, velocity, and turbulence residuals below \(5\cdot 10^{-5}\) are kept.
**Graph parsing.** Contrary to many other mesh-based physics simulators, CFD solvers such as OpenFOAM are based on finite volume methods. This is a significant difference that should be reflected in the manner in which a mesh is converted into a graph. To do so, we use the cells themselves as the nodes, with bidirectional edges formed between adjacent cells. This allows us to gather an additional edge feature that is relevant to the underlying physics: specifically, the length (or surface, for 3-D meshes) of the boundary between two cells is used as an edge feature. The benefit is twofold: a form of sizing is fed to the network, and a quantity relevant to flux computation is set on the edges. For the nodes, we gather 4 types of features: pressure, x-velocity component, y-velocity component, and node category (fluid, farfield, wall), the latter of which is inputted as a one-hot vector. The global context features are the farfield conditions (turbulence intensity, inflow velocity, and angle of attack). To avoid unnecessary computational overhead, we do not parse the entire CFD domain, which has a radius of 100 airfoil chord lengths, into a graph. Instead, we opt to keep only cells within a 1-chord circle centered on the airfoil. Furthermore, we set the airfoil nodes to be located at the meshed airfoil boundary and add bidirectional edges between adjacent airfoil nodes. These additional edges are created with the aim of avoiding tree-like structures in our graphs, as these could potentially cause bottlenecks for the learning process (Topping et al., 2021). The graphs of our dataset have on average around 55k nodes and around 85k individual edges.
## 4 Graph neural network framework
### Architecture
For our GNN architecture, we adopt the Encode-Process-Decode logic that is now popular for learning on graph-based physics problems (Sanchez-Gonzalez et al., 2020; Pfaff et al., 2020; Godwin et al., 2022), albeit with some notable modifications. Figure 3 shows an overview of the Flow Reconstruction GNN.
**Input features.** In the flow reconstruction problem, we assume that the global context of the graph is unknown; however, some useful physical parameters describing the flow can be estimated. Using Bernoulli's principle, and given that all airfoils are simulated with a zero farfield static pressure, the farfield inflow velocity magnitude can be initially approximated as:
\[\hat{U}_{\infty}=\sqrt{\frac{2\cdot p_{0}}{\rho}} \tag{4}\]
where \(\rho\) is the density of air (constant throughout simulations) and \(p_{0}\) is the total pressure measured at the stagnation point, which can be estimated by taking the maximum pressure over the airfoil nodes, \(p_{0}=\max_{v_{i}\in\mathcal{V}_{a}}(p_{v_{i}})\). While Bernoulli's principle is not valid for turbulent flows such as the ones we try to reconstruct, it serves as a good starting point for farfield velocity estimation. Another useful graph property that can be extracted is the normal force coefficient acting on the airfoil. While the lift coefficient is usually used to characterize airfoils, it cannot be calculated here as the angle of attack is unknown (a quantity to be inferred from the learned model). Nevertheless, the normal coefficient is
directly related to the lift coefficient and brings additional physical information which may aid the network to reconstruct the flow. It can be estimated via the following equation:
\[\hat{C}_{n}=\frac{\sum\limits_{v_{i}\in\mathcal{V}_{a}}p_{v_{i}}\cdot l_{v_{i}}\cdot n_{y,v_{i}}}{p_{0}} \tag{5}\]
where \(l\) is the boundary length and \(n_{y}\) is the y component of the normal boundary vector, both of which are known properties for each mesh cell boundary. We therefore use \(\mathcal{U}_{in}=(\hat{U}_{\infty},\hat{C}_{n})\) as the two-dimensional input context vector.
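A NumPy transcription of Eqs. (4) and (5) is given below; the arrays hold per-airfoil-node surface pressure, cell-boundary length, and boundary-normal y component, and the air density value is our assumption.

```python
# Estimating the input context vector from the airfoil surface pressures.
import numpy as np

def input_context(p: np.ndarray, l: np.ndarray, ny: np.ndarray, rho: float = 1.225):
    p0 = p.max()                      # total pressure at the stagnation point
    u_inf = np.sqrt(2.0 * p0 / rho)   # Eq. (4): Bernoulli-based estimate
    c_n = np.sum(p * l * ny) / p0     # Eq. (5): normal force coefficient
    return u_inf, c_n
```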
For the nodes, we only have access to the pressure distribution at the surface of the airfoil, while it is set to 'NaN' values at the fluid nodes. The type of each node is known and is encoded as a one-hot vector, bringing the total number of input node features to four. To account for mesh geometry, the following four edge features are used as inputs: the x and y components of the relative edge direction vector, the edge length, and the cell boundary length value \(l\) (see Section 3).
**Pre-processing.** Both the input and target node features are normalized. To avoid biasing the normalization, all pressure features are normalized by the mean and standard deviation of the known airfoil surface pressure distribution, while both components of the velocity target features are normalized by the initial estimated farfield velocity \(\hat{U}_{\infty}\). We use Feature Propagation (Rossi et al., 2021) as a preliminary step before feeding a graph to our GNN. This step is an important part of our framework, as it conditions the input graph into a plausible initial state: essentially, the feature propagator radiates surface pressure information outwards. In most cases, we found 20 feature propagation iterations to be sufficient.
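A simplified mean-aggregation sketch of this step is shown below; the published method uses a normalized diffusion operator, so this variant only illustrates the propagate-and-clamp logic.

```python
# Iterative propagation of known (airfoil) features into unknown (fluid) nodes.
import torch

def feature_propagation(x, edge_index, known_mask, n_iters=20):
    known = x[known_mask].clone()
    out = torch.zeros_like(x)
    out[known_mask] = known
    row, col = edge_index  # messages flow from col to row
    deg = torch.zeros(x.size(0), device=x.device)
    deg.index_add_(0, row, torch.ones(row.size(0), device=x.device))
    for _ in range(n_iters):
        agg = torch.zeros_like(out).index_add_(0, row, out[col])
        out = agg / deg.clamp(min=1.0).unsqueeze(-1)   # neighborhood mean
        out[known_mask] = known  # re-impose the known surface values
    return out
```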
**Encoding.** In the encoding layer, ReLU-activated MLPs with two hidden layers and LayerNorm are used to project the input features of the graph into latent vectors of size \(N\). This encoding layer differs from the standard GraphNet encoder (Sanchez-Gonzalez et al., 2020) in that the node encoder MLP takes as input the input node features as well as the latent global vector. We make this modification so that graph-level attributes are taken into account in the construction of the node latent variables, as this is not the case in the processing steps.
Figure 3: Overview of the Flow Reconstruction GNN architecture. Feature Propagation is used to initialize the unknown fluid nodes. The graph is then passed through the Encode-Process-Decode pipeline to obtain the reconstructed flow. During the Process step, node features are updated via message-passing within a deep Grouped Reversible GNN.

**Processing.** For the processing step, we opt to use a deep Grouped Reversible GNN (Li et al., 2021) with \(L\) message-passing layers. This architecture modifies the typical GNN architecture by first splitting the input node feature matrix \(V\) across the feature dimension into \(C\) groups \(\langle V_{1},V_{2},...,V_{C}\rangle\), which are then processed into grouped outputs \(\langle V^{\prime}_{1},V^{\prime}_{2},...,V^{\prime}_{C}\rangle\) with a Grouped Reversible GNN layer. These outputs are computed as follows:
\[\begin{split} V_{0}^{\prime}&=\sum_{k=2}^{C}V_{k}\\ V_{k}^{\prime}&=f_{wk}(V_{k-1}^{\prime},A,E)+V_{k}, \quad k\in\{1,\dots,C\}\end{split} \tag{6}\]
with \(A\) the adjacency matrix and \(E\) the edge feature matrix.
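A pure-PyTorch transcription of the forward pass of Eq. (6) is given below; the memory savings of the reversible formulation come from recomputing these groups during the backward pass, which is omitted here for brevity. `convs` stands for the list of \(C\) message-passing functions \(f_{wk}\).

```python
# Forward pass of one Grouped Reversible GNN layer, Eq. (6).
import torch

def grouped_reversible_forward(V, convs, edge_index, edge_attr):
    C = len(convs)
    groups = list(torch.chunk(V, C, dim=-1))   # <V_1, ..., V_C>
    v_prev = sum(groups[1:])                   # V'_0 = sum over V_2..V_C
    outputs = []
    for k in range(C):
        # V'_k = f_wk(V'_{k-1}, A, E) + V_k
        v_prev = convs[k](v_prev, edge_index, edge_attr) + groups[k]
        outputs.append(v_prev)
    return torch.cat(outputs, dim=-1)
```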
The Grouped Reversible framework allows for any type of message-passing architecture to be chosen for the GNN layer \(f_{wk}\). We choose to test three popular types of GNN layers: the Graph Attention Network (GAT) (Velickovic et al., 2017), the modified Graph Isomorphic Network (GIN) (Xu et al., 2018) which accounts for edge features (Hu et al., 2019), and the Generalized Aggregation Networks (GEN) Li et al. (2020) which modifies the standard GCN with different aggregation schemes while also utilizing edge features.
**Decoding.** Only the nodes and the global context are decoded back into feature space, as the edges are not updated. Both decoding networks are MLPs with two hidden layers and ReLU activations, without any output normalization. At the output of the decoder, we gather for each node the estimated pressure and velocity fields. The output context vector is composed of an updated version of the farfield velocity, as well as estimates of the inflow angle (angle of attack) and of the turbulence intensity.
### Training
**General aspects.** Our models are trained on a dataset composed of 896 graphs. Models are trained with the Adam optimizer on a single Nvidia GPU with 10 GB of VRAM. Due to the size and nature of the graphs, we can only use a batch size of one, albeit with random order shuffling occurring at each epoch. The learning rate is initially set at \(5\cdot 10^{-4}\) and is exponentially decayed by a factor of \(0.97\).
**Loss function.** We use a multi-component loss function in order to minimize both the node feature reconstruction error and the context vector prediction error, with \(L_{2}\) losses for both components. An additional loss component based on the velocity divergence was also tested, but it yielded too many artefacts and was therefore discarded. The overall loss is:
\[\mathcal{L}=L_{2}(\mathcal{V},\hat{\mathcal{V}})+\lambda\cdot L_{2}(\mathcal{U },\hat{\mathcal{U}}) \tag{7}\]
where \(\lambda\) is a hyperparameter used to balance the different components. In practice, we usually set \(\lambda\) to 1.
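Assuming mean-squared-error forms for both \(L_{2}\) terms, Eq. (7) can be written as:

```python
# Eq. (7): node reconstruction loss plus weighted context prediction loss.
import torch

def reconstruction_loss(v_hat, v, u_hat, u, lam=1.0):
    return torch.mean((v_hat - v) ** 2) + lam * torch.mean((u_hat - u) ** 2)
```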
## 5 Results
Our trained models are tested on a dataset comprising 112 unseen airfoil simulations, each with a different combination of turbulent inflow parameters. We present qualitative and quantitative results for our models and perform a number of experiments aiming to investigate the limitations and possible improvements of the proposed framework.
**Comparison of GNN layers.** In Table 1, we gather and compare results for the three different types of GNN layers used in the Processor: GAT, GIN, and GEN. The architecture of the Encoder and Decoder networks was kept constant, with a latent size for the node, edge, and global features of \(N=128\). In the Grouped Reversible Processor, the number of layers was set to \(L=30\), while the number of groups was chosen as \(C=4\). To obtain a consistent number of learnable parameters, the hyperparameters for each GNN layer type were carefully selected; more information about the different configurations can be found in the appendix. We also study the impact of depth and width on the performance of each model in Appendix D.
**Velocity reconstruction.** One of the more challenging prediction tasks is the estimation of the velocity field away from the airfoil. Accurately capturing velocity shear and recirculation regions is non-trivial even for CFD simulators and is highly dependent on the airfoil shape and the inflow angle.
To complete this task successfully, the GNN needs to be expressive enough to propagate relevant information throughout the graph. Table 2 summarizes the prediction errors of the velocity field in multiple concentric regions around the airfoil for the three models. We observe that for all three models the error on the x-velocity decreases further away from the airfoil, but this is not the case for the y-velocity.
**Qualitative results.** Figure 4 displays some qualitative results for our two best-performing models (revGAT and revGIN), compared to the CFD simulation ground truth. These results highlight that the learned models are able to reconstruct flow features well, albeit with some artefacts. As the distance to the airfoil increases, these defects become more apparent. Moreover, some parts of the flow are not well captured; this is the case for flows exhibiting long wakes. On the other hand, we notice that flow features near the leading edge of the airfoil are in general well captured. We provide additional examples of reconstructed flows in Appendix E.
**Farfield estimation.** Figure 5 displays the graph-level context prediction results evaluated on the test set for the revGIN model. The GNN is able to accurately predict the farfield inflow velocity, owing to the good initial farfield estimate provided as an input. For the angle-of-attack estimation, we observe good results at small angles but less so at larger positive and negative angles. Prediction of the turbulence intensity is, however, relatively poor, which can be attributed to this variable having a lesser impact on the airfoil pressure distribution. Moreover, this variable is not directly set in the CFD simulations, as it is used to calculate the turbulent boundary conditions (kinetic energy \(k\) and specific rate of dissipation \(\omega\) of the \(k-\omega\) turbulence model), which makes it more difficult to retrieve in this inverse context.
## 6 Discussion
Our results indicate that the type of GNN architecture chosen in the Grouped Reversible Processor has a clear impact on the flow reconstruction quality. From our comparison, we find that, overall, using Graph Attention Network layers usually yields the best reconstructed solutions. However, we also observe that the Graph Isomorphic Network layer is better able to capture detached flows (see Appendix E). Perhaps a combination of the two could lead to better reconstructed flows, which could for instance be implemented by alternating GIN and GAT layers. Lastly, we find the performance of the GEN layer to be somewhat underwhelming.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{3}{c}{Node reconstruction RMSE} & \multicolumn{3}{c}{Global parameter prediction RMSE} \\ & pressure & x-velocity & y-velocity & farfield velocity & angle of attack & turbulence intensity \\ & [Pa] & [m/s] & [m/s] & [m/s] & [\({}^{\circ}\)] & [-] \\ \hline revGAT & \(\mathbf{77.98\pm 17.99}\) & \(7.96\pm 1.19\) & \(\mathbf{2.69\pm 0.53}\) & \(0.92\pm 0.17\) & \(4.16\pm 0.10\) & \(0.04\pm 0.003\) \\ revGIN & \(158.37\pm 5.16\) & \(\mathbf{6.23\pm 0.32}\) & \(4.43\pm 0.22\) & \(\mathbf{0.45\pm 0.06}\) & \(4.12\pm 0.09\) & \(\mathbf{0.03\pm 0.003}\) \\ revGEN & \(137.51\pm 17.31\) & \(10.81\pm 0.25\) & \(5.47\pm 0.05\) & \(0.46\pm 0.05\) & \(\mathbf{4.12\pm 0.01}\) & \(0.04\pm 0.002\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Root mean squared prediction errors averaged over the test dataset for both the flow reconstruction task and the global parameter estimation task. Results are averaged over 3 runs with different initializations. All models have a latent size of \(N=128\) and \(L=30\) layers. While the revGAT model performs the best in terms of pressure and y-velocity reconstruction, it is outperformed by the revGIN model when it comes to x-velocity reconstruction and graph-level attribute prediction.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{3}{c}{x-velocity} & \multicolumn{3}{c}{y-velocity} \\ & \multicolumn{3}{c}{[m/s]} & \multicolumn{3}{c}{[m/s]} \\ & revGAT & revGIN & revGEN & revGAT & revGIN & revGEN \\ \hline region 1 (a=0.6, b=0.1) & \(8.26\pm 1.34\) & \(\mathbf{6.46\pm 0.39}\) & \(11.4\pm 0.25\) & \(\mathbf{2.48\pm 0.60}\) & \(4.15\pm 0.18\) & \(5.40\pm 0.03\) \\ region 2 (a=0.7, b=0.15) & \(7.98\pm 1.25\) & \(\mathbf{6.30\pm 0.35}\) & \(11.00\pm 0.26\) & \(\mathbf{2.49\pm 0.43}\) & \(4.40\pm 0.23\) & \(5.52\pm 0.05\) \\ region 3 (a=0.8, b=0.2) & \(7.95\pm 1.24\) & \(\mathbf{6.27\pm 0.33}\) & \(10.92\pm 0.26\) & \(\mathbf{2.61\pm 0.55}\) & \(4.44\pm 0.23\) & \(5.53\pm 0.05\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Root mean squared prediction errors of the velocity components for different concentric regions of the flow, averaged over the test dataset. Each region is defined as the interior of an ellipse with \(a\) the length of the semi-major axis and \(b\) the length of the semi-minor axis (in chord lengths). Results are averaged over 3 runs with different initializations.
be, for instance, implemented by alternating GIN and GAT layers. Lastly, we find the performance of the GEN layer to be somewhat underwhelming.
Something to note is that the first few layers of nodes surrounding the airfoil carry a disproportionate amount of relevant information, which needs to be propagated outwards to an increasing number of nodes, thus creating artificial tree-like paths within the graph. While the Grouped Reversible framework provides a good way to train deep networks in order to circumvent this, other methods may also be feasible. One possible solution might be to use hierarchies such as those implemented in Martinkus et al. (2021). Another option could be to apply a message-passing GNN in an iterative manner, with each step reconstructing increasingly large concentric bands around the airfoil. Physics-driven learning methods could also lead to potential improvements. For instance, the message-passing framework may well be suited to incorporate elements from the Lattice-Boltzmann method (Chen & Doolen, 1998) as it also functions in a similar two-step algorithm (collision and streaming). Another possible option would be to minimize the gradient of the pressure, as described in Taha & Gonzalez (2021).
Figure 4: Reconstructed flow around two unseen arbitrary airfoil geometries at different inflow configurations for the revGAT and revGIN models. Figure (a) plots the reconstructed pressure field while the reconstructed velocity field is shown in Figure (b). For each case the ground truth is shown for comparison. Qualitatively, the revGAT model displays fewer artefacts in the solution.
Figure 5: Comparison of the global properties predicted by our revGIN model against the ground truth values, evaluated on the test dataset. The model is able to accurately predict farfield flow velocity thanks to a decent initial estimation (a) and to a lesser extent the angle of attack (b), but it falls short for farfield turbulence intensity prediction (c).
## 7 Conclusion
In this work we applied deep graph-based learning techniques to reconstruct pressure and velocity fields around arbitrary airfoil geometries subject to high-Reynolds turbulent flows. We show that, despite the challenges posed by this problem, such as the large graphs and the very localized input information, our Flow Reconstruction GNN framework is able to provide good reconstructed solutions, and infer contextual farfield flow information. We compared several message-passing architectures within the Grouped Reversible Processor GNN, and found that Graph Attention Network layers yielded the best reconstructed solutions. This work provides a flexible framework which may easily be applied to other mesh-based inverse physics problems, and which may be of significant interest to a number of engineering applications. |
2308.00558 | Gradient Scaling on Deep Spiking Neural Networks with Spike-Dependent
Local Information | Deep spiking neural networks (SNNs) are promising neural networks for their
model capacity from deep neural network architecture and energy efficiency from
SNNs' operations. To train deep SNNs, recently, spatio-temporal backpropagation
(STBP) with surrogate gradient was proposed. Although deep SNNs have been
successfully trained with STBP, they cannot fully utilize spike information. In
this work, we proposed gradient scaling with local spike information, which is
the relation between pre- and post-synaptic spikes. Considering the causality
between spikes, we could enhance the training performance of deep SNNs.
According to our experiments, we could achieve higher accuracy with lower
spikes by adopting the gradient scaling on image classification tasks, such as
CIFAR10 and CIFAR100. | Seongsik Park, Jeonghee Jo, Jongkil Park, Yeonjoo Jeong, Jaewook Kim, Suyoun Lee, Joon Young Kwak, Inho Kim, Jong-Keuk Park, Kyeong Seok Lee, Gye Weon Hwang, Hyun Jae Jang | 2023-08-01T13:58:21Z | http://arxiv.org/abs/2308.00558v1 | # Gradient Scaling on Deep Spiking Neural Networks with Spike-Dependent Local Information
###### Abstract
Deep spiking neural networks (SNNs) are promising neural networks for their model capacity from deep neural network architecture and energy efficiency from SNNs' operations. To train deep SNNs, recently, spatio-temporal backpropagation (STBP) with surrogate gradient was proposed. Although deep SNNs have been successfully trained with STBP, they cannot fully utilize spike information. In this work, we proposed gradient scaling with local spike information, which is the relation between pre- and post-synaptic spikes. Considering the causality between spikes, we could enhance the training performance of deep SNNs. According to our experiments, we could achieve higher accuracy with lower spikes by adopting the gradient scaling on image classification tasks, such as CIFAR10 and CIFAR100.
Machine Learning, Deep SNNs, Deep Neural Networks
## 1 Introduction
Deep learning with deep neural networks (DNNs) have been rapidly advancing artificial intelligence (AI) technology in various fields (LeCun et al., 2015; Tan and Le, 2019). However, as AI technology continues to progress, it demands more energy and computing resources, raising concerns about sustainable development and application. Spiking neural networks (SNNs) have received considerable attention as a solution to this problem. SNNs, which have been considered third-generation artificial neural networks, enable event-based computing, resulting in sparse operations compared to DNNs (Maass, 1997). Furthermore, SNNs hold great importance as the basis for neuromorphic computing, which imitates the operations of the human brain for its exceptional energy efficiency (Davies et al., 2018; Roy et al., 2019).
Deep SNNs have been actively studied to combine the features of both DNNs and SNNs: the model capacity of the former and the energy efficiency of the latter. Deep SNNs have a similar synaptic topology to DNNs, with interconnected spiking neurons. While deep SNNs can leverage the advantages of both DNNs and SNNs, they face challenges in training. As an indirect training method for deep SNNs, DNN-to-SNN conversion has been proposed. Although this approach has enabled the implementation of various deep SNN models (Park et al., 2019; Kim et al., 2020; Park et al., 2020; Li et al., 2021; Bu et al., 2022), it has introduced issues, such as long inference latency (Han et al., 2020).
Recently, to improve the training performance, a gradient-based training algorithm, which is a successful training approach for DNNs, has been applied to training deep SNNs, such as spatio-temporal backpropagation (STBP) (Wu et al., 2018). This method with surrogate gradient, which can handle the non-differentiability of spiking neurons, has proven to be effective in training deep SNNs. Based on the successful training, gradient-based training approaches have inspired further research about improving the training performance of deep SNNs (Zheng et al., 2021; Deng et al., 2022; Yang et al., 2022). Furthermore, it has enabled the expansion of deep SNNs in various applications and algorithms, including Transformer models (Zhou et al., 2023) and neural architecture search algorithms (Na et al., 2022).
Gradient-based training algorithms have allowed deep SNNs to utilize their model capacity sufficiently. However, these algorithms cannot exploit the dynamic characteristics of SNNs as they are derived from DNNs. Unlike DNNs, SNNs have spatio-temporal features, and spiking neurons transmit information in the form of spikes. Thus, to maximize the training performance of deep SNNs, we proposed a training algorithm, called gradient scale, that can consider the spike dynamics in SNNs. We were inspired by spike-timing-dependent plasticity (STDP), which is a biologically plausible training algorithm of SNNs with local spike causality (Diehl and Cook, 2015). While utilizing the training performance of gradient-based algorithms, we adjusted gradients depending on the local spike relationships, which can be defined by the causality between spikes of pre- and post
synaptic neurons. The proposed algorithm was evaluated on ResNet architectures (He et al., 2016) with image classification tasks, such as CIFAR10 and CIFAR100 (Krizhevsky, 2009).
## 2 Related Works
### Spiking Neural Networks
SNNs consist of spiking neurons and synapses that connect them. Mimicking the behavior of the brain, spiking neurons exchange information with binary spikes through synapses. Because of the spike-based operation, SNNs have been expected to enable event-driven computing, which is a next-generation and energy-efficient computing paradigm. Thus, SNNs are promising neural networks for energy-efficient artificial intelligence as a fundamental component of neuromorphic computing that mimics the operations of the human brain.
Although there are various types of spiking neurons, such as Izhikevich, leaky integrate-and-fire (LIF), and integrate-and-fire (IF) neuron models (Izhikevich, 2004), neurons commonly operate in an integrate-and-fire manner. Spiking neurons integrate incoming information into the internal state, called membrane potential, and fire spikes whenever the potential exceeds a threshold voltage. Due to the low complexity of computation, most deep SNNs adopt relatively simple spiking neuron models, such as IF and LIF. Thus, in this work, we used an LIF neuron model, which is described as
\[u_{j}^{l}(t)=\tau u_{j}^{l}(t\text{-}1)+z_{j}^{l}(t), \tag{1}\]
where \(\tau\) is a leak constant, \(u_{j}^{l}(t)\) and \(z_{j}^{l}(t)\) are the membrane potential and incoming information of the \(j\)th spiking neuron in \(l\)th layer at time step \(t\), respectively. The incoming information, called post-synaptic potential (PSP), is caused by pre-synaptic spikes (input spikes) as
\[z_{j}^{l}(t)=\sum_{i}w_{ij}^{l}s_{i}^{l\text{-}1}(t)+b_{j}^{l}, \tag{2}\]
where \(w\) and \(b\) are the synaptic weight and bias, respectively. When the accumulated information on the membrane potential exceeds a certain threshold, spikes are generated, and the information is transmitted to adjacent neurons through synapses. Spike generation can be expressed as
\[s_{j}^{l}(t)=H(u_{j}^{l}(t)-v_{\text{th},j}^{l}(t)), \tag{3}\]
where \(H\) is the Heaviside step function, and \(v_{\text{th}}\) is a threshold voltage. When a spike is generated, the membrane potential is reset. There are mainly two reset methods: soft and hard reset can be stated as
\[u_{j}^{l}(t)=\begin{cases}u_{j}^{l}(t)-s_{j}^{l}(t)v_{\text{th},j}^{l}(t)&\text{(soft)}\\ (1-s_{j}^{l}(t))u_{j}^{l}(t)+s_{j}^{l}(t)v_{\text{r},j}^{l}(t)&\text{(hard)},\end{cases} \tag{4}\]
where \(v_{\text{r}}\) is a rest potential.
### Training Methods of deep SNNs
Training algorithms of deep SNNs can be categorized into two approaches: indirect and direct training. Indirect training, which is represented by DNN-to-SNN conversion, transforms a pre-trained DNN model into deep SNN with the same topology, and the converted SNN only performs inference. This approach has been successfully applied to various neural network architectures (Sengupta et al., 2019; Han et al., 2020), applications (Kim et al., 2020), and neural codings (Park et al., 2019; Zhang et al., 2019; Park et al., 2020). However, it had drawbacks, such as long inference latency, due to disregarding features of SNNs during the training of DNNs. Certain studies have attempted to address these limitations with calibration (Li et al., 2021) and SNN-aware DNN training (Bu et al., 2022), but there still remain limitations that it is challenging to directly consider the dynamics of deep SNNs.
Direct training is a promising approach for high-performance and efficient deep SNNs. It can be mainly divided into unsupervised and supervised learning, represented by STDP and stochastic gradient descent (SGD), respectively. STDP is a biologically plausible training algorithm that considers the causal relationship between the spikes of pre- and post-synaptic neurons (Diehl and Cook, 2015). While it takes into account the characteristics of SNNs, its low training performance compared to other algorithms has limited its application in deep SNN training.

Figure 1: Spike trace and proposed gradient scaling (GS).
The gradient-based training algorithm of deep SNNs leverages successful training algorithms from DNNs, such as SGD and error backpropagation. One of the significant obstacles to training deep SNNs with a gradient-based algorithm was the non-differentiability of spiking neurons as depicted in Eq. 3. To overcome this, STBP with a surrogate gradient, which approximates the gradient, was proposed and could train deep SNNs successfully (Wu et al., 2018). Since then, subsequent studies on improving the training performance of deep SNNs have been published, such as threshold-dependent batch normalization (tdBN) (Zheng et al., 2021), temporal effective batch normalization (Duan et al., 2022), and temporal efficient training with time-variant target distribution (Deng et al., 2022). However, these training algorithms did not utilize local information that can improve the training performance. Recently, a study using local information for training was published, but it did not utilize relationships between spikes (Yang et al., 2022).
## 3 Methods
With the introduction of scalable training algorithms, such as STBP, deep SNNs have become trainable with gradients. However, these existing gradient-based algorithms for deep SNNs have a limitation in that they do not effectively consider the causal relationship between spikes of pre- and post-synaptic neurons. Thus, in this work, we propose a method to exploit the local spike information in training deep SNNs with the gradient-based algorithm.
Before explaining the proposed method, we should define the relational expression of spikes. There are various representations for spike relation, but, in this work, we adopt trace-based representation for its low computational complexity, which is suitable for deep SNNs (Morrison et al., 2008). An example of the representation is shown in Fig. 1. \(S_{\text{pre}}\) and \(S_{\text{post}}\) indicate pre- and post-synaptic spike trains, respectively. The history of spike generation in each neuron is recorded in the spike trace \(x\) as follows:
\[x_{i}^{l}(t)=e^{\text{-1}}x_{i}^{l}(t\text{-1})+s_{i}^{l}(t). \tag{5}\]
The spike trace increases by one when the neuron fires and decreases exponentially at each time step of the forward pass (blue dotted line in Fig. 1). Each layer has pre- and post-synaptic traces (\(X_{\text{pre}}^{l}\), \(X_{\text{post}}^{l}\)) according to its connection. With these two spike traces, we define the relationship of spikes \(R\) as
\[R^{l}(t)=f^{l}(X_{\text{pre}}^{l}(t),X_{\text{post}}^{l}(t)), \tag{6}\]
where \(f^{l}\) is a relationship function of the \(l\)th layer. During training, it is calculated in the backward path (orange dotted line in Fig. 1). We used convolution and outer product operations for the relationship function \(f\) of convolution and fully connected layers, respectively.
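A minimal sketch of the trace update in Eq. (5) and of a fully connected relationship function in Eq. (6) follows; the orientation of the outer product (matching a weight matrix of shape \(n_{\text{post}}\times n_{\text{pre}}\)) is our assumption.

```python
import numpy as np

DECAY = np.exp(-1.0)   # per-step trace decay, Eq. (5)

def update_trace(x, s):
    """Exponentially decaying spike trace, Eq. (5)."""
    return DECAY * x + s

def relation_fc(x_pre, x_post):
    """Spike relationship R for a fully connected layer, Eq. (6):
    outer product of post- and pre-synaptic traces, shaped like the
    synaptic weight matrix."""
    return np.outer(x_post, x_pre)
```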
We propose gradient scaling, which adjusts the gradient of each synaptic weight according to the local spike relationship. Inspired by STDP, the proposed algorithm encourages training with the gradient when there is a causal relationship between pre- and post-synaptic spikes. Otherwise, if the relationship is weak, the algorithm hinders the training. We implemented this encouragement and hindrance by scaling the gradients of synaptic weights obtained from STBP as follows:
\[\Delta W^{l}=-\eta g(\frac{\delta L}{\delta W^{l}},R^{l})=-\eta(\alpha\frac{ \delta L}{\delta W^{l}}\circ R^{l}+(1-\alpha)\frac{\delta L}{\delta W^{l}}), \tag{7}\]
\begin{table}
\begin{tabular}{l|c|c c c c} \hline \multirow{2}{*}{Methods} & \multirow{2}{*}{Reset} & \multicolumn{2}{c|}{Accuracy (\%)} & \multicolumn{2}{c}{\# of Spikes (K)} \\ & & Mean & Max & Mean & Max \\ \hline \hline \multicolumn{6}{l}{ResNet20} \\ \hline Baseline & soft & 94.91 & 94.97 & 497 & 520 \\ Gradient scale & soft & **95.05** & **95.11** & **492** & **506** \\ \hline Baseline & hard & **93.68** & **93.86** & 463 & **470** \\ Gradient scale & hard & 93.58 & 93.69 & **451** & 475 \\ \hline \hline \multicolumn{6}{l}{ResNet32} \\ \hline Baseline & soft & 95.00 & 95.16 & 814 & 879 \\ Gradient scale & soft & **95.12** & **95.22** & **783** & **822** \\ \hline Baseline & hard & 90.52 & 90.71 & **555** & 572 \\ Gradient scale & hard & **90.57** & **90.91** & 563 & **571** \\ \hline \end{tabular}
\end{table}
Table 1: Accuracy and spikes on various configurations with CIFAR10 (training results of four times repetitions)
\begin{table}
\begin{tabular}{l|c|c c c c} \hline \multirow{2}{*}{Methods} & \multirow{2}{*}{Reset} & \multicolumn{2}{c|}{Accuracy (\%)} & \multicolumn{2}{c}{\# of Spikes (K)} \\ & & Mean & Max & Mean & Max \\ \hline \hline \multicolumn{6}{l}{ResNet20} \\ \hline Baseline & soft & 74.83 & 75.18 & 641 & 649 \\ Gradient scale & soft & **75.20** & **75.88** & **634** & **642** \\ \hline Baseline & hard & 72.26 & 72.37 & 562 & 573 \\ Gradient scale & hard & **72.29** & **72.42** & **542** & **555** \\ \hline \hline \multicolumn{6}{l}{ResNet32} \\ \hline Baseline & soft & **75.57** & **75.79** & 959 & 984 \\ Gradient scale & soft & 75.45 & 75.73 & **954** & **979** \\ \hline Baseline & hard & 61.96 & 62.12 & 769 & **782** \\ Gradient scale & hard & **62.03** & **62.19** & **759** & 788 \\ \hline \end{tabular}
\end{table}
Table 2: Accuracy and spikes on various configurations with CIFAR100 (training results of four times repetitions)
where \(L\) is a loss, \(\eta\) is a learning rate, \(g\) is a gradient scaling function, \(\alpha\) is an interpolation coefficient, and \(\circ\) is element-wise multiplication (Hadamard product). The gradient scaling function \(g\) receives the gradients and spike relationship as inputs. As described in Eq. 7, we adopted a simple linear interpolation function for the scaling. In this work, we set the coefficient \(\alpha\) to 0.1 empirically.
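Once the relationship \(R\) is available, the scaling step of Eq. (7) is a single element-wise operation. A minimal sketch (any normalisation of \(R\) is left out here and would be an implementation choice):

```python
def scale_gradient(grad_w, relation, alpha=0.1):
    """Gradient scaling of Eq. (7): linear interpolation between the raw
    STBP gradient and the gradient modulated element-wise by R."""
    return alpha * grad_w * relation + (1.0 - alpha) * grad_w
```

The weight update is then \(\Delta W^{l}=-\eta\,\)`scale_gradient(grad_w, relation)`.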
## 4 Experiments
### Experimental Setup
To evaluate the effectiveness of the proposed gradient scaling, we set STBP (Wu et al., 2018) and tdBN (Zheng et al., 2021) as the baseline training algorithms. For each configuration, we trained deep SNN models for 300 epochs using SGD. We adopted a learning rate schedule in which the learning rate was scaled by a factor of 0.1 every 100 epochs. We used LIF neurons with a leak constant \(\tau\) of 0.9, and the time step was fixed to four. We constructed deep SNN models based on ResNet20 and ResNet32 architectures and trained them on image classification datasets, such as CIFAR10 and CIFAR100. For data augmentation, Cutmix (Yun et al., 2019) was used, and for input encoding, real-value encoding was applied as in other studies (Wu et al., 2018; Zheng et al., 2021).
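The learning rate schedule described above is a standard step decay. A minimal PyTorch sketch, in which the model, base learning rate, and momentum are illustrative placeholders not reported in the text:

```python
import torch

model = torch.nn.Linear(10, 10)   # placeholder for the ResNet-based SNN
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=100, gamma=0.1)

for epoch in range(300):
    # ... forward pass, loss computation, backward pass, optimizer.step() ...
    scheduler.step()   # multiplies the learning rate by 0.1 every 100 epochs
```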
### Experimental Results
The experimental results on CIFAR10 and CIFAR100 are presented in Tables 1 and 2, respectively. We compared the training results of the baseline and proposed methods in various configurations of the model architecture and reset method of spiking neurons. For fair and precise evaluations, we recorded the mean and maximum results for test accuracy and spike count after training four times on each configuration. For the accuracy on the CIFAR10 dataset, the mean and maximum accuracy were improved when the proposed gradient scale was applied in all cases except for the case of ResNet20 with the hard reset, as shown in Table 1. Furthermore, the proposed approach can reduce the number of spikes in most cases. Similar trends appear in the training results on CIFAR100, as depicted in Table 2. With the proposed method, accuracy and spike counts improve in most cases; the exceptions are accuracy for ResNet32 with the soft reset and spike counts for ResNet32 with the hard reset.
Table 3 presents the comparisons of the proposed method with other deep SNN training methods. For a fair comparison, we compared the results of a model structure similar to ResNet20 on the CIFAR10 dataset. Overall, the proposed approach shows higher training performance than the recent previous methods. In the case of soft reset, when the proposed method is applied, we achieve higher accuracy with shorter time steps than the conversion methods (Li et al., 2021; Bu et al., 2022) and local tandem learning (Yang et al., 2022). In the case of hard reset, it shows higher accuracy than tdBN (Zheng et al., 2021), but lower training performance than TET (Deng et al., 2022). It was difficult to compare other metrics, such as spike counts, in this study as they were not commonly reported in previous works.
## 5 Discussion
The proposed training algorithm can be further improved with optimization of the relation function \(f\), the scaling function \(g\), and hyperparameters, such as \(\alpha\) in Eq. 7. In this paper, a simple spike relation function, which only considers the positive relation between the spike traces, and a simple scaling function were used to show the feasibility of enhancing training performance with local spike information. To achieve this improvement, we can consider more complex relation functions of pre- and post-synaptic spike traces that also capture a negative relation between the traces, as in the STDP learning rule. Furthermore, we can use other scaling functions based on theoretical analysis of deep SNNs, instead of linear interpolation as in this work.
## 6 Conclusion
In this paper, we proposed a training method for deep SNNs with spike-dependent local information.
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline \multirow{2}{*}{Methods} & \multirow{2}{*}{Training} & \multirow{2}{*}{Architecture} & \multirow{2}{*}{Neuron} & \multirow{2}{*}{Reset} & \multirow{2}{*}{T} & ANN & SNN \\ & & & & & Acc. (\%) & Acc. (\%) \\ \hline Calibration (Li et al., 2021) & conversion & ResNet20 & IF & soft & 32 & 95.46 & 94.78 \\ SNN-aware training (Bu et al., 2022) & conversion & ResNet18 & IF & soft & 4 & 96.04 & 90.43 \\ tdBN (Zheng et al., 2021) & direct & ResNet19 & LIF & hard & 4 & - & 92.92 \\ TET (Deng et al., 2022) & direct & ResNet19 & LIF & hard & 4 & - & 94.44 \\ Local tandem (Yang et al., 2022) & KD & ResNet20 & LIF & soft & 16 & 95.36 & 94.76 \\ Gradient scale (Ours) & direct & ResNet20 & LIF & soft & 4 & - & 95.11 \\ Gradient scale (Ours) & direct & ResNet20 & LIF & hard & 4 & - & 93.69 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Comparisons with previous methods on CIFAR10 (KD: Knowledge Distillation)
The proposed method, which is compatible with gradient-based training algorithms, such as STBP, scales the gradient of synaptic weight according to the relationship between spike traces of adjacent neurons. We verified the effectiveness of the proposed approach with ResNet architecture on CIFAR datasets. In the future, we will improve the proposed algorithm through exploration and optimization of the spike relation function and gradient scaling function. In addition, we will evaluate the algorithm with other model architectures and datasets. We believe that by taking into account the characteristics of SNNs and utilizing local information, the training performance of deep SNNs can be improved.
## Acknowledgements
This work was supported in part by the Korea Institute of Science and Technology (KIST) through 2E32260 and the National Research Foundation of Korea (NRF) grant funded by the Korea government (Ministry of Science and ICT) [NRF-2021R1C1C2010454].
|
2303.02015 | The Grossberg Code: Universal Neural Network Signatures of Perceptual
Experience | Two universal functional principles of Adaptive Resonance Theory simulate the
brain code of all biological learning and adaptive intelligence. Low level
representations of multisensory stimuli in their immediate environmental
context are formed on the basis of bottom up activation and under the control
of top down matching rules that integrate high level long term traces of
contextual configuration. These universal coding principles lead to the
establishment of lasting brain signatures of perceptual experience in all
living species, from aplysiae to primates. They are revisited in this paper
here on the basis of examples drawn from the original code and from some of the
most recent related empirical findings on contextual modulation in the brain,
highlighting the potential of Grossberg's pioneering insights and
groundbreaking theoretical work for intelligent solutions in the domain of
developmental and cognitive robotics. | Birgitta Dresp-Langley | 2023-03-03T15:31:14Z | http://arxiv.org/abs/2303.02015v1 | **Cite as:** Dresp-Langley B. The Grossberg Code: Universal Neural Network Signatures of Perceptual Experience. _Information_. 2023; 14(2):82. [https://doi.org/10.3390/info14020082](https://doi.org/10.3390/info14020082)
###### Abstract
Two universal functional principles of Grossberg's Adaptive Resonance Theory [19] decipher the brain code of all biological learning and adaptive intelligence. Low-level representations of multisensory stimuli in their immediate environmental context are formed on the basis of _bottom-up activation_ and under the control of _top-down matching_ rules that integrate high-level long-term traces of contextual configuration. These universal coding principles lead to the establishment of lasting brain signatures of perceptual experience in all living species: from _aplysiae_ to primates. They are re-visited in this concept paper here on the basis of examples drawn from the original code and from some of the most recent related empirical findings on contextual modulation in the brain, highlighting the potential of Grossberg's pioneering insights and groundbreaking theoretical work for intelligent solutions in the domain of developmental and cognitive robotics.
multisensory perception; brain representation; contextual modulation; adaptive resonance; biological learning; self-organization; matching rules; winner-take-all principle
## 1 Introduction
In his latest book [1], Grossberg discusses empirical findings and his own neural network models to illustrate, and forecast, how autonomous adaptive intelligence [2] is or may be implemented in artificial systems at unprecedentedly high levels of brain function [3, 4, 5]. His account of how the brain generates conscious cognition and, ultimately, individual minds provides mechanistic insights into complex phenomena such as mental disorders, or the biological basis of morality and religion. The author's theoretical work clarifies why evolutionary pressure towards adaptation and behavioral success not only explains the brain, but is also a source for model solutions to large-scale problems in machine learning, technology, and Artificial Intelligence. Adaptive brain mechanisms [6] are the key to autonomously intelligent algorithms and robots. They may be pre-determined by a universal developmental code, or "engram", that is channeled through the connectome by specific proteins/peptides embedded within pre-synaptic neuronal membranes [7] and corresponds to information provided by the electrical currents afferent to pre-synaptic neurons [8, 9, 10]. Grossberg's book [1] conveys a philosophical standpoint on shared laws of function in living systems, from the most primitive to the most advanced, showing how neurons support unsupervised adaptive learning in all known species, and how such biological learning has enabled the emergence of the human mind across the evolutionary process. Bearing this in mind, the present concept paper draws from the beginnings of this journey into the mind, which is described by Grossberg's significant early work on neural processes for perception, perceptual learning, and memory, aimed at understanding how the brain builds a cognitive code of physical reality. Since perception is the first step through which a brain derives sense from the raw data of a physical
environment, his account for how elementary signals in the physical environment are processed by the neural networks of the brain was a mandatory achievement for understanding how inner representations of the outside world may be generated [11,12]. The ability to derive meaning from complex sensory input requires the integration of information over space and time as well as memory mechanisms to shape that integration [13] into contents of experience. In mammals with intact visual systems, this relies on processes in the primary visual cortex of the brain [14], where neurons integrate visual input along shape contours into neural association fields [15]. The geometric selectivity of ensembles of functionally dedicated neural networks is progressively fine-tuned by contextual modulation and experience towards long-term memory representation of all the different configurations likely to be encountered in natural scenes. Horizontal cortical connections provide a broad domain of potential associations in this process, and top-down control functions dynamically gate these associations to task-switch the function of a given network [16]. Grossberg's work has provided a unified model of brain learning where horizontal cortical connections provide a broad range of potential, functionally specific neural associations through a mechanism called bottom-up activation [17], as will be explained and illustrated on the basis of examples. Mechanisms of adaptive resonance and top-down matching [17] then explain how the contextual modulation of visual and other sensory input drives dynamic brain learning to gate the links within and between neural association fields towards increasingly complex memory representations [16,18]. This concept paper uses two of the functional principles of Adaptive Resonance Theory [19] to illustrate the implications for unsupervised brain learning and adaptive intelligence. The examples chosen here are drawn from the original models and from related empirical findings. These are revisited in the light of some of the most recent advances in a conceptual discussion aimed at highlighting the potential of Grossberg's pioneering insights and groundbreaking theoretical work for intelligent solutions to some of the most difficult current problems in Artificial Intelligence (AI) and robotics. The following sections will elaborate on the biological principles of multisensory contextual modulation in the brain, in section 2, to illustrate the relevance of adaptive resonant learning as conceptualized in the Grossberg code, the functional principles of which are then explained further in section 3. Section 4 provides a generic ART system with its mathematical definition and an example of neural network architecture that could be implemented on this basis for autonomous and self-organizing multiple event coding to help control object-related aspects of environmental uncertainty in robotics.
## 2 Contextual Modulation in the Brain
The brain processes local information depending on the context in which this information is embedded. The representation of contextual information peripheral to a salient stimulus is critical to an individual's ability to correctly interpret and flexibly respond to stimuli in the environment. The processes and circuits underlying context-dependent modulation of stimulus-response function have mostly been studied in vertebrates [20], yet well-characterized connectivity patterns are already found in the brains of lower-level species such as insects [21], providing circuit-level insights into contextual processing. Recent studies in flies have revealed neuronal mechanisms that create flexible and highly context-dependent behavioral responses to sensory events relating to threats, food, and social interaction. Throughout brain evolution, functional building blocks of neural network architectures, with increasingly complex functional organization, have emerged across species, with increasingly complex long-range connectivity ensuring information encoding in processing streams that are anatomically segregated at a cellular level. The functional specificity of individual streams, long-range interactions beyond the classic receptive field of neurons and interneurons [22], and cortical feedback mechanisms [21, 23] provide an excellent model for understanding the complex processing characteristics inherent to individual streams as well as the extent and mechanisms of their interaction in the genesis of brain representation. Contextual modulation in the sensory cortex coding for vision, hearing, somatosensation and olfaction is partly
under central control by the prefrontal cortex, as shown by some of the most recent evidence from neuroscience.
### Vision
To be able to extract structure, form, and meaning from intrinsically ambiguous and noisy physical environments, the visual brain has evolved neural mechanisms dedicated to the integration of local information into global perceptual representations. This integration is subject to contextual modulation [23]. Mechanisms with differential sensitivity to relative stimulus orientation, size, relative position, contrast, polarity, and color operate within specific spatial scales to integrate local visual input into globally perceived structure [24,25,26,27]. The differential contextual sensitivity to color and luminance contrast in visual contextual modulation involves the luminance-sensitive pathways (M-pathways) and the color-sensitive pathways (P-pathways) of the visual brain [23,28] in a from-simple-to-complex-cells processing hierarchy at the level of the visual cortex, already predicted in Grossberg's early models of visual form representation [29,30]. The cooperative and competitive interactions between co-activating or mutually suppressive detectors in functionally dedicated neural networks suggested in the model were confirmed several years later in psychophysical and electrophysiological studies, taking into account response characteristics of orientation-selective visual cortical neurons as a function of the context in which visual target stimuli were presented [22,24,25]. Contextual modulation translates into effects where nearby visual stimuli either facilitate or suppress the detection of the targets (behavior), and increase or decrease the firing rates of the cortical neurons responding to the targets (brain). The cooperative and competitive brain-behavior loops depend on the geometry of so-called "perceptive fields" [22] within a limited range of size-distance ratios. Achromatic context effects operate within shorter temporal windows than chromatic contextual modulation [23,27]. Cooperative mechanisms of contextual modulation in vision are subject to substantial practice (perceptual learning) effects, where top-down signals dynamically modulate neural network activities as a function of specific perceptual task constraints. Such top-down mediated changes in cortical states reflect a general mechanism of synaptic learning [4,8], potentiating or suppressing neural network function(s) depending on contextual relevance.
### Hearing
Sounds in natural acoustic environments possess highly complex spectral and temporal structures, spanning over a whole range of frequencies, and with temporal modulations that differ within frequency bands. The auditory brain has the ability to reliably encode one and the same sound in a variety of different sound contexts, and to tell apart different sounds within a complex acoustic scene. Processing acoustic features such as sound frequency and duration is highly dependent on co-occurring, acoustic and other, sources of stimulation [32], and involves interactions between the external spectral and temporal context of an auditory target, and internal behavioral states of the individual such as arousal or expectation. Current findings suggest that sensory attenuation and neuronal modulation may happen during behavioral action as a consequence of disrupted memory expectations in the case of unpredictable concurrent sounds [33]. The auditory system demonstrates nonlinear sensitivity to temporal and spectral context, often employing network-level mechanisms, such as cross-band and temporally adaptive inhibition, to modulate stimulus responses across time and frequency [32]. How the auditory system modulates responses to sensory and behavioral contexts is not yet understood. The superior colliculus (SC) is a structure in the mammalian midbrain that contains visual and auditory neural circuits. In mice [34], auditory pathways from external nuclei of the inferior colliculus (IC) were found to form direct inhibitory connections and to provide excitatory signals driving feed-forward inhibitory circuits within the SC. A previously unrecognized pathway, the lateral posterior nucleus (LP) of the thalamus, projects extensively to sensory cortices. Bidirectional activity modulations in LP or its projection to the primary auditory cortex (A1) in awake mice reveal that LP improves auditory processing by sharpening neuronal
receptive fields and their frequency tuning [35]. LP is strongly activated by specific sensory signals relayed from the superior colliculus (SC), contributing to the maintenance and enhancement of sound signal processing in the presence of auditory background noise and threatening visual stimuli, respectively. This shows that multisensory bottom-up pathways play a role in contextual [36] and cross-modality modulation of auditory cortical processing in mammals. Cross-modality modulation of sensory perception is necessary for survival. In a natural environment, organisms are constantly exposed to a continuous stream of sensory input depending on the environmental context. The response properties of neurons dynamically adjust to contextual changes across all sensory modalities, and at different stages of processing from periphery to cortex.
### Somatosensation
Cross-modality modulation implies that coincident non-auditory (visual, tactile) processing influences the neural networks underlying contextual modulation of hearing, or that non-visual (auditory, tactile) signals may reach the neural networks underlying the contextual modulation of vision. Touch has a direct effect on visual spatial contextual processing, for example [37]. Contextual modulation and neuronal adaptation in visual and auditory systems interact with sensory adaptation in the somatosensory system, but through which pathways and mechanisms is not yet well understood. The ability to integrate information from different sensory modalities is a fundamental feature of all sensory neurons across brain areas, which makes sense in the light of the fact that visual, auditory, and tactile signals originate from the same physical object when actively manipulated. The synthesis of multiple sensory cues in the brain improves the accuracy and speed of behavioral responses [38]. Task-relevant visual, auditory and tactile signals are experienced together in motor tasks [39], and pioneering work in neurophysiology from the 1960s has shown convergence of visual, auditory, and somatosensory signals at the level of the pre-frontal cortex in cats [40]. Also, visual signals can bypass the primary visual cortex to directly reach the motor cortex, which is immediately adjacent and functionally connected to the somatosensory cortex [41]. Effects of neuronal adaptation on response dynamics and encoding efficiency of neurons at single cell and population levels in the whisker-mediated touch system in rodents illustrate that sensory adaptation provides context-dependent functional mechanisms for noise reduction in visual processing [42]. Between integration and coincidence detection, cross-modality modulation achieves energy conservation and disambiguates the encoding of principal features of tactile stimuli. Sensory systems do not develop and function independently. Early loss of vision, for example, alters the coding of sensory input in primary somatosensory cortex (S1) to promote enhanced tactile discrimination. Neural response modulation in S1 of mammals (opossums in this case) after elimination of visual input through bilateral enucleation early in development reveals the neural origins of tactile experience in naturally occurring patterns of exploratory behavior after vision loss [43]. In early blind animals, overall levels of tactile experience were similar to those of sighted controls, and their locomotion activity was unimpaired and accompanied by normal whisking. Early blind animals exhibit a reduction in the magnitude of neural responses to whisker stimuli in S1, combined with a spatial sharpening of the neuronal receptive fields. The increased selectivity of S1 neurons in early blind animals is reflected by improved population coding of whisker stimulus positions, particularly along the axis of the snout aligned with the primary axis of the natural whisker motion. These findings suggest that a functionally distinct form of tactile (somatosensory) plasticity occurs when vision is lost early in development. After sensory loss, compensatory behavior mediated through the spared senses is generated through recruitment of brain areas associated with the deprived sense. Alternatively, functional compensation in spared modalities may be achieved through a combination of plasticity in brain areas corresponding to both spared and deprived sensory modalities.
Multisensory interactions in the brain are most strongly relied upon and, therefore, need to be optimal when the stimulus ambiguity in a physical environment is highest [44]. Sensorial as well as central cross-modal signaling mechanisms contribute bottom-up and top-down contextual signaling. For example, both whisking and breathing are affected by the presence of odors in rodents, and the odors bi-directionally modulate activity in a small but significant population of the barrel cortex neurons through distinct bottom-up and top-down mechanisms [45]. In the human brain, different aspects of olfactory perception in space and time have been identified by means of EEG recordings [46]. Sensorial (low-level) representations of smell expand into larger areas associated with emotional, semantic, and memory processing in activities significantly associated with perception. These results suggest that initial odor information coded in the olfactory areas evolves towards perceptual realization through computations (long-range mechanism) in widely distributed cortical regions with different spatiotemporal dynamics [47]. Specific brain structures appear to act as hubs for integrating local multisensory cues into a spatial framework [48] enabling short-term as well as long-lasting memory traces of odors, touch sensations, sounds and visual objects in different dynamic contexts. Contextual modulation in the brain thus explains how olfactory and other sensory inputs translate into diverse and complex perceptions such as the pleasurable floral smell of flowers or the aversive smells of decaying matter. The prefrontal cortex (PFC) plays an important role in this process. Recent evidence suggests that the PFC has dedicated neural networks that receive input from olfactory regions, and that the activity of these networks is coordinated on the basis of selective attention producing different brain alert states [49].
### Prefrontal control
In the mammalian brain, information processing in specific sensory regions interacts with global mechanisms of multisensory integration under the control of the PFC. Emerging experimental evidence suggests that the contribution of multisensory integration to sensory perception is far more complex than previously expected [42, 43]. Associative areas such as the prefrontal cortex, which receive and integrate inputs from diverse sensory modalities, not only affect information processing in modal sensory pathways through down-stream signaling, but also influence contextual modulation and multisensory processing (Fig. 1).
Figure 1: The prefrontal cortex receives and integrates signals from diverse sensory structures and pathways, and controls information processing in, and interaction between,
modal neural networks (visual, somatosensory, auditory, olfactory) through down-stream signaling.
Developmental mechanisms account for the interaction between the neuronal networks involved [50], with relevance for brain-inspired intelligent robotics, as will be discussed later herein. In animals and humans, prefrontal downstream control is necessary in cases of conflicting sensory information, where signals from different modalities compete, or provide incongruent input data [51]. The brain then needs to reach a probabilistic decision on the basis of top-down control signals (perceptual experience). However, another remarkable ability of the brain is its capacity to rapidly detect unexpected stimuli. Living beings depend on rapid detection of the unexpected when it is relevant (an alarm going off, for example) because it enables them to adapt behavior accordingly and swiftly. Prefrontal control also explains why irrelevant sounds are incidentally processed in association with the environmental context even though the contextual stimuli activate different sensory modalities [52]. This is consistent with brain data showing that top-down effects of the prefrontal cortex on contextual modulation of visual and auditory processing depend on selective attention to a particular sensory signal [53] among several coincident stimuli. Attempts to understand how functional interaction between different brain regions occurs through multisensory integration constitute a leading-edge research area in contemporary neuroscience [54]. Low-level brain representation of information is not enough to explain how we perceive the world. To enable us to recognize and adaptively act upon objects in the physical world, lower-level sensory network representations need to interact with higher-level brain networks capable of coding contextual relevance.
## 3 Brain signatures of perceptual experience
How the brain generates short and long-term memory signatures of perceptual experience, and which mechanisms permit to retrieve and update these traces regularly during life-long brain learning and development (ontogenesis), is still not fully known. Well before contextual modulation and context-sensitive neural mechanisms were identified in neural circuits of different species, Grossberg had understood that they must exist and, considering the principles of unsupervised synaptic (Hebbian) learning [8], which had been demonstrated in low-level species such as _aplysia_[55], that they would have to be universal. In his early work on adaptive resonance [19], he proposed universal functional principles for the generation of short-term and long-term memory traces and their activation in context-sensitive processes of retrieval. These functional principles exploit two mechanisms of neural information processing in resonant circuits of the brain, referred to as bottom-up automatic activation and top-down matching.
### Bottom-Up Automatic Activation
Bottom-up automatic activation is a mechanism for the processing and the temporary storage of perceptual input in short-term and working memory. Through bottom-up automatic activation, a group of cells within a given neural structure becomes potentiated, and is eventually activated, when it receives the necessary bottom-up signals. These bottom-up signals may or may not be consciously experienced. They are then multiplied by adaptive weights that represent long-term memory traces and influence the activation of cells at a higher processing level. Grossberg [17] originally proposed Bottom-Up Automatic Activation to account for the way in which pre-attentive processes generate learning in the absence of top-down attention or expectation. It appears that this mechanism is equally well suited to explain how subliminal signals may trigger supraliminal neural activities in the absence of phenomenal awareness [56,57]. Learning in the absence of phenomenal awareness accounts for visual statistical learning in newborn infants [58], and non-conscious visual recognition [59], for example. Bottom-up automatic activation may generate supraliminal brain signals, or representational contents with weak adaptive weights, as a candidate mechanism
to explain how the brain manages to subliminally process perceptual input [60] that is either not directly relevant at a given moment in time, or cannot be made available to conscious processing because of a local brain lesion [59]. Grossberg [9, 12, 17, 19] suggested that bottom-up activation may automatically activate target cell populations at higher levels of processing, as in bottom-up activation of the PFC by sensory cortices [47, 49], for example.
### Top-Down Matching
Top-down expectations are needed to consolidate traces of bottom-up representation through mechanisms that obey three properties: 1) they select consistent bottom-up signals and 2) suppress inconsistent bottom-up signals. Together these properties initiate a process that directs attention to a set of critical features that are consistent with a learned expectation. However, 3) a top-down expectation by itself cannot fully activate target cells. It can only sensitize, modulate, or prime the cells to respond more easily and vigorously if they are matched by consistent and sufficiently strong (relevant) bottom-up inputs. Were this not the case, we would hallucinate events that are not really there by mere top-down expectation. Top-down expectations therefore do not activate, only modulate representations, as discussed here above. Top-down representation matching is a mechanism for the selective matching of bottom-up short-term or working memory representations to already stored and consolidated (learnt) memory representations (Fig. 2). Subliminal bottom-up representations may become supraliminal when bottom-up signals or representations are sufficiently relevant at a given moment in time to activate statistically significant top-down matching signals [60]. These would then temporally match the bottom-up representations (coincidence). A positive match confirms and amplifies ongoing bottom-up representation, whereas a negative match invalidates ongoing bottom-up representation. Top-down matching is a selective process where subliminal representations become embedded in long-term memory structures and temporarily accessible to recall, i.e. a conscious experience of remembering. A simplified computational sketch of this matching cycle is given after Figure 2.
Figure 2: In the ART matching rules, bottom-up signals from the environment activate short-term memory representations in working memory which then, in turn, send bottom-up signals towards a subsequent processing stage at which long-term memory representations are temporarily activated (top left). These bottom-up signals are multiplied by learned long-term memory traces which selectively filter short-term representations and activate top-down expectation signals (top right) that are matched against the selected representations in working
memory. The strength of the matches determines the weighting of short-term representations (bottom) after top-down matching.
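The two matching rules illustrated in Figure 2 can be made concrete with a simplified ART 1-style search cycle for binary input vectors. The sketch below is our illustration, not Grossberg's original formulation: bottom-up activation selects a candidate category via a choice function, the top-down expectation (the category's weight vector) is matched against the input, and categories failing the vigilance criterion are suppressed before the search continues.

```python
import numpy as np

def art_match(inputs, weights, rho=0.75, beta=0.5):
    """Simplified ART 1 search cycle for a binary input vector.
    rho: vigilance threshold; beta: choice parameter."""
    active = list(range(len(weights)))
    while active:
        # bottom-up automatic activation: choice function per category
        choice = [np.sum(np.minimum(inputs, weights[j])) /
                  (beta + np.sum(weights[j])) for j in active]
        j = active[int(np.argmax(choice))]
        # top-down matching: the expectation only modulates; resonance
        # requires the match to exceed the vigilance threshold
        match = np.sum(np.minimum(inputs, weights[j])) / np.sum(inputs)
        if match >= rho:
            weights[j] = np.minimum(inputs, weights[j])  # resonant learning
            return j
        active.remove(j)  # negative match: suppress the category, search on
    return None           # no resonance: a new category would be recruited
```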
### Temporary representation for selection and control
Grossberg's universal coding rules produce temporary and long-term brain signatures of perceptual experience. They address what he called the attention-pre-attention interface problem [9, 12, 17, 19] by allowing pre-attentive (bottom-up) processes to use some of the same circuitry that is used by attentive (top-down) processes to stabilize cortical development and learning. Consistently, research on human cognition [61] has confirmed that attention ensures the selection of contents in working memory, controlled by mechanisms of filtering out irrelevant stimuli and removing no-longer relevant representations, while working memory contributes to controlling perceptual attention as well as action by holding templates available for perceptual selection and action sets available to implement current goals [61]. Top-down matching in its most general sense generates feed-back resonances between bottom-up and top-down signals to rapidly integrate brain representations and hold them available for a conscious experience at a given moment in time. Non-conscious semantic priming is explained on these grounds. Statistically significant positive top-down matching signals produced on the basis of strong signal coincidences explain why subliminal visual representations become conscious when presented in a specific context, especially after a certain amount of visual learning or practice [60]. Conversely, significant negative matches produced on the basis of repeated discrepancies generating strong negative coincidence signals could explain why a current conscious representation is suppressed and replaced by a new one when a neutral conscious representation is progressively and consistently weakened by association with a strongly biased representation, as in evaluative conditioning and contingency learning [57, 58]. Some of the above-mentioned functional properties require long-range connectivity in cortical circuits capable of generating what Edelman [62] called "reentrant signaling". Bottom-up representations that activate specific structures of such circuits, but do not produce sufficiently strong matches to long-term memory signals, will remain non-conscious [60]. Strong positive top-down matching of selected representations will compete with weaker or negative matches, which are ultimately suppressed from conscious experience, as, for example, in cases where the conscious integration of new input interferes with the conscious processing of anything else [35, 50]. Specific instructions telling subjects what to look for, or what to attend to, in a visual scene may generate top-down expectation signals strong enough to inhibit matching of other relevant signals at the same moment in time [31]. Top-down matching generates neural computations of event coincidence [63]. Results from certain observations in motor behavior without awareness [64] highlight potential implications of negative top-down matching for conscious control in learning. Individuals may become aware of unconsciously pursued goals of a motor performance or action when the latter does not progress well, or fails. This could reflect the consequence of repeated negative top-down matching of the non-conscious bottom-up goal representation and top-down expectation signals in terms of either memory traces of previous success, or representations of desired outcome. Repeated and sufficiently strong negative matching signals might thereby trigger instant consciousness of important discrepancies between expectancy and reality [65].
Awake mammals can switch between alert and non-alert brain states hundreds of times every day. The effects of alertness on two cell classes in layer 4 of primary visual cortex, excitatory "simple" cells and fast-spike inhibitory neurons, show that for both cell classes, alertness increases their functional (excitatory or suppressive) strength, and considerably enhances the reliability of visual responses [66]. In simple cells, alertness increases the temporal frequency bandwidth, but preserves contrast sensitivity, orientation tuning, and selectivity for direction and spatial frequency. Alertness selectively suppresses the simple cell responses to high-contrast stimuli and stimulus motion orthogonal to their preferred direction of movement. This kind of conscious feed-back control fulfills an important adaptive function, and has evolved in response to the pressures of intrinsically ambiguous and steadily changing physical environments. The mathematical development and equations describing
ART resonant learning in its most generic form were made explicit in the Cohen-Grossberg model [67, 68], which will be detailed further here below with respect to the development of adaptive intelligence in robotics.
## 4 Towards adaptive intelligence in robotics
Resonant brain states are a key concept of ART. They arise from the self-organizing principles of biological neural learning whereby our brains autonomously adapt to a changing world. Biological neural learning, unlike the learning algorithms that fuel Artificial Intelligence, is driven by evolution, with a remarkable pressure towards increasingly higher levels of consciousness across phylogenesis [69]. Pressure towards the development of increasingly autonomous and adaptively intelligent forms of agency also exists in the growing field of robotics, in particular neurorobotics [70]. Detailed descriptions and equations describing the full span of potential for the development of autonomously intelligent robots may be found in [71, 72, 73, 74]. The most generic functional principles of ART are aimed at what has been termed the hierarchical resolution of uncertainty. Hierarchical resolution of uncertainty means that multiple processing stages are needed for brains to generate sufficiently complete, context-sensitive, and stable perceptual representations upon which intelligent agents can act successfully. The mathematical development and equations describing ART resonant learning in its most generic form are inspired by the principles of Hebbian neural (synaptic) learning, and are given by the Cohen-Grossberg model [67, 68]. The latter is defined in terms of the following system of nonlinear differential equations describing interactions in time \(t\) among and between neural activities \(x_{i}\), or short-term memory (STM) traces, of any finite number of individual neurons or neuronal populations (networks)
\[\frac{\mathrm{d}x_{i}}{\mathrm{d}t}=a_{i}(x_{i})\left[b_{i}(x_{i})-\sum_{j}c_{ij}\,d_{j}(x_{j})\right] \tag{1}\]
with symmetric interaction coefficients \(c_{ij}=c_{ji}\), under weak assumptions on the state-dependent non-negative amplification functions \(a_{i}(x_{i})\), self-signaling functions \(b_{i}(x_{i})\), and competitive interaction functions \(d_{j}(x_{j})\). The indices run over \(i,j=1,2,\ldots,n\), and \(n\) may be chosen arbitrarily. Each population in (1) can have its own functions \(a_{i}(x_{i})\), \(b_{i}(x_{i})\), and \(d_{j}(x_{j})\). One possible physical interpretation of the symmetric interaction coefficients \(c_{ij}=c_{ji}\) is that the competitive interactions depend upon Euclidean distances between the populations. Defined as in (1), the \(i\)th population activity \(x_{i}\) can only grow to become momentarily a "winner" of the competition at times \(t\) where the competitive balance \([b_{i}(x_{i})-\sum_{j}c_{ij}d_{j}(x_{j})]>0\). When \([b_{i}(x_{i})-\sum_{j}c_{ij}d_{j}(x_{j})]<0\), the given population is "losing" the competition. The ART-inspired neural network architecture for multiple event coding, represented schematically here above, can be implemented by exploiting properties and parameters of the system described in (1). This would permit implementing robot intelligence with capacities beyond reactive behavior. The selective filtering of relevant sensory input from a multitude of external inputs, and the capacity to autonomously generate adaptive sequences of memory steps for identifying and recognizing specific visual objects in the environment, make it possible to control external perturbations acting on a robot-object system. This is possible in a system like the one illustrated here above on the sole basis of the internal dynamics of the resonant network. The ability to correctly identify objects despite multiple changes across time is a competence required in many engineering applications that interact with the real world, such as robot navigation. Combining information from different sensory sources promotes robustness and accuracy of place recognition. However, mismatch in data registration, dimensionality, and timing between modalities remains a challenging problem in multisensory place recognition [75]. We may, as ART stipulates, define intelligence as the ability to efficiently interact with the environment and to plan for adequate behavior based on the correct interpretation of sensory signals and internal states.
This means that an intelligent agent or robot will be successful in accomplishing its goals, able to learn and predict the effects of its actions, and able to continuously adapt to changes in real-world scenarios. Ultimately, embodied intelligence allows a robot to interact swiftly with the environment in a wide range of conditions and tasks [76]. The ART model made explicit here above in (1), a Hebbian-learning-based and mathematically parsimonious system of non-linear equations, can be directly implemented, with neurons or neural populations as its units, to enable intelligent multi-event coding across time \(t\) (Fig. 3) for robot control by adaptive artificial intelligence.
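To indicate how (1) might be implemented in practice, the following minimal sketch integrates the system with explicit Euler steps. All concrete choices below, the functions \(a_{i}\), \(b_{i}\), \(d_{j}\), the distance-based coefficients \(c_{ij}\), the time step, and the helper names, are illustrative assumptions of ours, since the model deliberately leaves each population free to have its own functions:

```python
import numpy as np

def cohen_grossberg_step(x, a, b, d, C, dt=0.01):
    """One Euler step of dx_i/dt = a_i(x_i) [ b_i(x_i) - sum_j c_ij d_j(x_j) ]."""
    return x + dt * a(x) * (b(x) - C @ d(x))

n = 8
rng = np.random.default_rng(0)
pos = rng.uniform(size=(n, 1))                       # population "positions"
C = np.exp(-np.abs(pos - pos.T))                     # symmetric: c_ij = c_ji
np.fill_diagonal(C, 0.0)                             # no self-competition (our choice)

a = lambda x: np.maximum(x, 0.0)                     # non-negative amplification
b = lambda x: 1.0 - x                                # self-signaling: decay toward 1
d = lambda x: 1.0 / (1.0 + np.exp(-10 * (x - 0.5)))  # sigmoidal competitive signal

x = rng.uniform(0.1, 0.9, size=n)                    # initial STM traces
for _ in range(5000):
    x = cohen_grossberg_step(x, a, b, d, C)

print("most active population:", np.argmax(x))
print("settled activities:", np.round(x, 3))
```

The winner-versus-loser dynamics described above can then be read off directly: populations whose competitive balance stays positive retain elevated short-term memory traces, while the others are suppressed.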
## 5 Discussion
Grossberg's universal coding rules enable learning in a non-stationary, unexpected world, while classic machine learning approaches assume a predictable and controlled world [2]. Unlike passive adaptive filters [77], they enable self-organized unsupervised learning akin to biological synaptic learning [2, 4, 5, 8, 55]. The ART matching rules actively focus attention to selectively generate short- and long-term brain signatures of critical features in the environment, which is achieved by dynamic, non-passive, steadily updated synaptic weight changes in the neural networks [9, 12, 17, 19]. The top-down control of selective processing involves activation of all memory traces to match or mismatch bottom-up representations globally using _winner-takes-all_ best-match criteria. Neural network architectures driven by the ART matching rules do not need labeled data to learn, as previously explained in [2]. In short, the Grossberg code overcomes many of the computational problems of backpropagation and Deep Learning models. Equipping cognitive robots with artificial intelligence that processes and integrates cross-modal information according to such self-organized contextual learning ensures that they will interact with the environment more efficiently, in particular under conditions of sensory uncertainty [4, 78]. The universal ART matching
Figure 3: ART-inspired Neural Network architecture for adaptively intelligent event coding across time \(t\).
rules are directly relevant to a particular field in robotics that is motivated by human cognitive and behavioral development, i.e. developmental robotics. The goal is to probe developmental or environmental aspects of cognitive processes by exploring robotic capabilities for interaction using artificial sensory systems, and autonomous motor capabilities in challenging environmental platforms [79]. As illustrated in this paper, low-level sensory and high-level neural networks interact in a bottom-up and top-down manner to create coherent perceptual representations of multisensory environments. Similarly, bottom-up and top-down interactions for the integration of multiple sensory input streams play a crucial role in the development of autonomous cognitive robots by endowing agents with improved robustness, flexibility, and performance.

In cases of ambiguous or incongruent cross-sensory inputs, for example, biological inspiration acquires a major role. Autonomous robots with odor-guided navigation [80] can benefit from multisensory processing capabilities similar to those found in animals, allowing them to reliably discriminate between chemical sources by integrating associated auditory and visual information. Cross-modal interaction with top-down matching can enable the autonomous learning of desired motion sequences [81] matching expected outcomes from audio or video sequences, for example. Approaches to multisensory fusion in robotic systems directly inspired by the distributed functional architecture of the mammalian cortex have existed for some time [82]. Biological inspiration exploiting top-down cross-modal processing is mandatory for autonomous cognitive robots that acquire perceptual representations on the basis of active object exploration and groping. By actively processing geometric object information during motor learning, aided by tactile and visual sensors, it becomes possible to reconstruct the shape, relative position, and orientation of objects.

Service robotics is a fast-developing sector that requires intelligence embedded into robotic platforms that interact with humans and the surrounding environment. One of the main challenges in this field is robust and versatile manipulation in everyday life. Embedding anthropomorphic synergies into the gripper mechanical design [83] helps, but autonomous grasping still represents a challenge, which can be resolved by endowing robots with self-organizing multisensory adaptive capabilities, as discussed here above. Combining biological neural network learning with compliant end-effectors would not only permit optimizing the grasping of known deformable objects [84], but would also help intelligent robots anticipate and grasp unforeseen objects. Bottom-up activation combined with top-down control gives robots the capability to learn progressively in an ever-changing multisensory environment by means of self-organizing interaction with it (Fig. 4). Implementing multisensory memories in robotics in such a way permits equipping intelligent agents with sensory-cognitive adaptive functions that enable them to cope with the unexpected in complex and dynamic environments [85]. A lack of multisensory perceptive capabilities, on the other hand, compromises the continuous learning of robotic systems, because internal models of the multisensory world can then not be acquired and adapted throughout development.
Adaptive resonance is a powerful concept that provides model approaches for a multitude of human interactions. The relationship between the physical mechanism of resonance and its biological significance in the genesis of perceptual experience in neural networks across all species, from molluscs to humans, makes it also a powerful concept for human-robot interaction, at all functional levels and within a wider cultural and scientific context. Resonant brain states, established on the basis of matching processes involving top-down expectation and bottom-up activation signals, drive all biological learning at lower and higher levels. Learning in biological neural networks is by nature unsupervised and best accounted for in terms of competitive _winner-takes-all_ matching principles [86, 87, 88]. A resonant state is predicted to persist long enough, and at a high enough activity level, to activate long-term signatures of perceptual experience in dedicated neural networks. This explains how these signatures can regulate the brain's fast information processing, observed at the millisecond level, without any awareness of the signals that are being processed. Through resonance as a mediating event, the combination of universal matching rules and their attention-focusing properties makes learning and responding to arbitrary input environments stable. In the mammalian brain, such stability may be reflected by the ubiquitous occurrence of reciprocal bottom-up and top-down cortico-cortical and cortico-thalamic interactions [89].
## 6 Conclusions
Well before contextual modulation and context-sensitive mechanisms were identified in the neural circuits of different species, Grossberg had understood that they had to exist. The principles of unsupervised synaptic (Hebbian) learning had been demonstrated in low-level species such as _Aplysia_, pointing towards universal principles of perceptual coding. In his earliest work on adaptive resonance, Grossberg set the foundations of universal functional principles of neural network learning for the generation of brain traces of perceptual experience, and their activation by context-sensitive, dynamic, self-organizing mechanisms producing resonant brain states. Equipping cognitive robots with artificial intelligence based on adaptive resonance, processing and integrating cross-modal information in self-organized contextual learning, will produce intelligent robots that interact with complex environments adaptively and efficiently, in particular under conditions of sensory uncertainty.
All data and conceptual work discussed here are available in the material cited.
This research received no external funding.
Material support from the CNRS is gratefully acknowledged.
The author declares no conflict of interest.
2307.02284 | Absorbing Phase Transitions in Artificial Deep Neural Networks | Keiichi Tamai, Tsuyoshi Okubo, Truong Vinh Truong Duy, Naotake Natori, Synge Todo | 2023-07-05T13:39:02Z | http://arxiv.org/abs/2307.02284v1

# Absorbing Phase Transitions in Artificial Deep Neural Networks
###### Abstract
Theoretical understanding of the behavior of infinitely-wide neural networks has been rapidly developed for various architectures due to the celebrated mean-field theory. However, there is a lack of a clear, intuitive framework for extending our understanding to finite networks that are of more practical and realistic importance. In the present contribution, we demonstrate that the behavior of properly initialized neural networks can be understood in terms of universal critical phenomena in absorbing phase transitions. More specifically, we study the order-to-chaos transition in the fully-connected feedforward neural networks and the convolutional ones to show that (i) there is a well-defined transition from the ordered state to the chaotic state even for the finite networks, and (ii) difference in architecture is reflected in that of the universality class of the transition. Remarkably, the finite-size scaling can also be successfully applied, indicating that intuitive phenomenological argument could lead us to semi-quantitative description of the signal propagation dynamics.
## 1 Introduction
The 21st century has witnessed the tremendous success of deep learning applications. Properly trained deep neural networks have successfully demonstrated performance comparable with, or even superior to, that of human experts in various tasks, a few remarkable examples being the game of Go [1], image synthesis [2], and natural language processing [3]. Boosted by an exciting discovery of the so-called neural network scaling laws [4; 5], the fre
operations. This suggests that placing and comparing artificial neural networks in a broader context of biological neural networks on an equal footing, at least from a functional perspective, is promising for developing their understanding.
The notion of _criticality_ is the key to linking biological and artificial neural networks. Systems at a particular condition (e.g. at the critical point of second-order phase transitions) exhibit anomalous behavior, referred to as _critical phenomena_. They are universal in the sense that microscopically diverse systems can be described by a single mathematical model as long as the essential properties remain unchanged.
The critical phenomena of particular interest in neuroscience are those of _absorbing phase transitions_[7, 8]: transitions to a state from which a system cannot escape (hereafter referred to as "an absorbing state"). Besides the obvious analogy with brains without any neuronal activity (i.e. death), absorbing phase transitions are considered to be one of the essential ingredients for self-organized criticality [9], by which the systems can be automatically tuned to the critical point. Recent theoretical and experimental studies support the view that the brains may operate near the critical point (albeit in a slightly nuanced manner), and the universal scaling law in the critical phenomena of absorbing phase transition has been attracting considerable interest among the community; interested readers are referred to, for example, the recent review by Girardi-Schappo [10].
As a matter of fact, the deep learning research community is also familiar (albeit implicitly) with the notion of criticality. In theoretical studies on deep neural networks, the concept of _the edge of chaos_ has played a considerable role. While the discovery of chaos in random neural networks dates back to (at least) as early as the late 1980s [11], the concept has attracted recent interest among the community when Poole _et al_. theoretically demonstrated that infinitely-wide deep neural networks also exhibit the order-to-chaos transition [12]. Remarkably, at the onset of chaos, _trainable_ depth of the networks is suggested to diverge [13], which is reminiscent of the divergence of the correlation length at the critical point of second-order phase transitions at equilibrium. Furthermore, recent work has successfully applied the renormalization group method to classify the order-to-chaos transitions in the fully-connected feedforward neural networks for various activation functions into a small number of universality classes [14].
Nevertheless, we argue that the notion of criticality has not been fully exploited in studies of artificial deep neural networks. As also discussed by Hesse and Gross [15], bottom-up approaches (in which one derives macroscopic properties from microscopic theories) and top-down ones (in which one starts from phenomenological observations or some heuristics to deduce macroscopic properties) are complementary to each other for studying complicated systems. Numerous works, including those cited in the previous paragraph, have successfully adopted one of the bottom-up approaches for a specific architecture and/or an activation function. However, the situation with regard to the top-down approaches is less satisfactory. Since the universality of the critical phenomena enables the classification of the systems into a reduced number of universality classes based on their fundamental properties, taking full advantage of it would lead us to intuitive and yet powerful understanding of the behavior of deep neural networks across different architectures.
Given all these observations, the purpose of the present work is to demonstrate that the notion of absorbing phase transition is a promising tool for theoretical understanding of the deep neural networks. First, we establish an analogy between the aforementioned order-to-chaos transition and an absorbing phase transition by studying the linear stability of the ordered state. In the framework of the mean-field theory of signal propagation in deep neural networks [12], the critical point is characterized by loss of linear stability of the fixed point corresponding to the ordered phase. We extend the analysis to the networks of finite width, and we directly see that the transition to chaos in artificial deep neural networks is an emergent property of the networks which requires the participation of sufficiently many neurons (and thus more appropriately seen as a phase transition, rather than a mere bifurcation in dynamical systems).
Next, we show that the order-to-chaos transitions in initialized artificial deep neural networks exhibit the universal scaling laws of absorbing phase transition. Actually it is fairly straightforward to find the scaling exponents associated with the transition in the framework of the mean-field theory (or equivalently in the infinitely-wide networks) for the fully-connected feedforward neural networks [13], but it is not clear how we can extend the analysis into the networks of finite width or a different architecture. Our empirical study reveals that the idea of the universal scaling can still be successfully applied to such cases. We also provide an intuitive way to understand the resulting universality class
for each architecture, based on a phenomenological theory. Remarkably, the finite-size scaling can also be successfully applied, indicating that intuitive phenomenological argument could lead us to semi-quantitative description of the signal propagation dynamics in the finite networks.
To summarize, we believe that this work places the order-to-chaos transition in the initialized artificial deep neural networks in the broader context of absorbing phase transitions, and serves as the first step toward the systematic comparison between natural/biological and artificial neural networks.
## 2 Preliminaries
In this work, we illustrate our view with the following deep neural networks:
* **FC**: A fully-connected feedforward neural network of width \(n\) and depth \(L\). We assume the same width for all the hidden layers, although the size \(n_{0}\) of the input needs not be equal to \(n\). The weight matrices \(W^{(l)}\)\((l=1,2,\cdots,L)\) and bias vectors \(\boldsymbol{b}^{(l)}\) are initialized according to normal distribution, respectively \(\mathcal{N}(0,\sigma_{w}^{2}/n)\) and \(\mathcal{N}(0,\sigma_{b}^{2})\).
* **Conv**: A vanilla \(d\)-dimensional convolutional neural network (having \(c\) channels) of width \(n\) and depth \(L\), although we mostly deal with the case \(d=1\). The same assumption as for FC also applies to Conv. The convolutional filters \(w^{(l;j,m)}\) of width \(k\) (for each dimension) and bias vectors \(\boldsymbol{b}^{(l;j)}\) are initialized according respectively to \(\mathcal{N}(0,\sigma_{w}^{2}/(ck^{d}))\) and \(\mathcal{N}(0,\sigma_{b}^{2})\). The so-called circular padding is applied.
Formally, the recurrence relations for the preactivation (\(\boldsymbol{z}^{(l)}\) for FC and \(\boldsymbol{z}^{(l;\alpha)}\) for Conv) are respectively written as follows:
\[z_{i}^{(l+1)}=\sum_{j}W_{ij}^{(l+1)}h(z_{j}^{(l)})+b_{i}^{(l+1)}, \tag{1}\]
\[z_{i}^{(l+1;\alpha)}=\sum_{\begin{subarray}{c}j\in ker,\\ m\in chn\end{subarray}}w_{j+\frac{k+1}{2}}^{(l+1;\alpha,m)}h(z_{i+j}^{(l;m)})+ b_{i}^{(l+1;\alpha)}, \tag{2}\]
where \(ker=\{-(k-1)/2,\cdots,-1,0,1,\cdots,(k-1)/2\}\), \(chn=\{1,2,\cdots,c\}\), and subscripts for \(z\) larger than \(n\) are understood as the remainder after division by \(n\), whereas those smaller than \(1\) are understood by adding \(n\) (due to the circular padding). The activation function is assumed to be \(h(x)=\tanh x\) unless otherwise stated, but we expect essentially the same results to hold within a fairly large class of functions1.
Footnote 1: More specifically, functions within the \(K^{*}=0\) universality class in the sense of Roberts _et al._[14], such as \(\mathrm{erf},\sin\). Note, however, that the notion of ‘the edge of chaos’ is still valid even outside this universality class such as ReLU [16], although the detailed investigations on how the present argument is modified in such settings are beyond the scope of this work.
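To make the recurrences (1) and (2) concrete, the following minimal NumPy sketch propagates a pair of inputs through freshly initialized FC and one-dimensional Conv networks. The helper names, and the simplifying choice of treating the inputs as zeroth-layer preactivations, are our own assumptions rather than specifications from the text:

```python
import numpy as np

def fc_prop(z1, z2, n=200, L=100, sigma_w=1.5, sigma_b=0.3, seed=0):
    """Propagate two inputs through a random tanh FC network, Eq. (1).
    Weights ~ N(0, sigma_w^2 / n_in), biases ~ N(0, sigma_b^2)."""
    rng = np.random.default_rng(seed)
    traj = []
    for _ in range(L):
        n_in = z1.shape[0]
        W = rng.normal(0.0, sigma_w / np.sqrt(n_in), size=(n, n_in))
        b = rng.normal(0.0, sigma_b, size=n)
        z1 = W @ np.tanh(z1) + b
        z2 = W @ np.tanh(z2) + b
        traj.append((z1, z2))
    return traj

def conv1d_prop(z1, z2, L=100, k=3, sigma_w=1.4, sigma_b=0.3, seed=0):
    """Same, for a 1-d circularly padded conv net, Eq. (2); z has shape (c, n).
    Filters ~ N(0, sigma_w^2 / (c k)), biases ~ N(0, sigma_b^2)."""
    rng = np.random.default_rng(seed)
    c, n = z1.shape
    offsets = np.arange(k) - (k - 1) // 2            # kernel offsets j in ker
    traj = []
    for _ in range(L):
        w = rng.normal(0.0, sigma_w / np.sqrt(c * k), size=(c, c, k))
        b = rng.normal(0.0, sigma_b, size=(c, 1))
        h1, h2 = np.tanh(z1), np.tanh(z2)
        # circular padding realized by np.roll along the spatial axis
        z1 = b + sum(w[:, :, j] @ np.roll(h1, -off, axis=1)
                     for j, off in enumerate(offsets))
        z2 = b + sum(w[:, :, j] @ np.roll(h2, -off, axis=1)
                     for j, off in enumerate(offsets))
        traj.append((z1, z2))
    return traj
```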
These initialized neural networks are known to exhibit order-to-chaos transition in the limit of infinitely wide network (for FC [12]) or infinitely many channels (for Conv [17]), as depicted in Fig. 1(a). Deep networks return almost same output for any inputs in the ordered phase, whereas correlation between similar inputs is lost in the chaotic phase. In either case, the deep networks "forget" what they were given, which is very likely to be disadvantageous for machine learning tasks. Presumably this is the central reason why the phase boundary, also known as _the edge of chaos_, has attracted considerable interest in the literature. As a matter of fact, recent studies have theoretically demonstrated that initialization of the network (in particular at the edge of chaos) is linked to practically important issues in deep learning: the problem of vanishing or exploding gradients [13], the dilemma between trainability and generalizability [18], to name only a few examples.
A clarification comment is in order before we proceed: the two technical terms, namely _the ordered state_ and _the ordered phase_ are not to be confused with each other. Hereafter, the former technical term refers to the state where the two preactivations \(\boldsymbol{z}_{1}^{(l)},\boldsymbol{z}_{2}^{(l)}\) corresponding to generally different inputs \(\boldsymbol{x}_{1},\boldsymbol{x}_{2}\) are identical, whereas the latter to the region in the phase space \((\sigma_{w},\sigma_{b})\) where \(\boldsymbol{z}_{1}\) and \(\boldsymbol{z}_{2}\) almost surely become arbitrarily close to each other in the infinitely deep limit. For example, even if the combination of the hyperparameters \((\sigma_{w},\sigma_{b})\) are not in the ordered phase, it is possible that a pair of preactivations \(\boldsymbol{z}_{1},\boldsymbol{z}_{2}\) reaches to the ordered state, depending on the inputs \(\boldsymbol{x}_{1},\boldsymbol{x}_{2}\) and specific realizations of the weights and biases.
## 3 Absorbing property of the ordered state
Now let us establish an analogy between the ordered state and an absorbing state. Clearly, the ordered state is a fixed point of the signal propagation dynamics for any \(\sigma_{w}\) once the weight matrices and the bias vectors are initialized. It is also clear, however, that the ordered state is almost never achieved accidentally: That is, if \(\mathbf{z}_{1}^{(l)}\neq\mathbf{z}_{2}^{(l)}\) for some \(l\), the probability that \(\mathbf{z}_{1}^{(l+1)}=\mathbf{z}_{2}^{(l+1)}\) is zero for that \(l\). Hence a more relevant question is whether the ordered state is stable against infinitesimal disturbance, at least for some \(\sigma_{w}\).
To address the issue of the linear stability of the ordered state, we study the maximum Lyapunov exponent2 for the front propagation dynamics (1), (2) (we only display the definition for FC for convenience; extending it to Conv is straightforward):
Footnote 2: Here the notation is slightly abused, as is also done in the literature [19].
\[\lambda_{1}:=\lim_{l\to\infty}\frac{1}{l}\log\frac{\|J^{(l)}(\mathbf{z}^{(l)}) \cdots J^{(1)}(\mathbf{z}^{(1)})\mathbf{u}_{0}\|}{\|\mathbf{u}_{0}\|}, \tag{3}\]
where \(\mathbf{u}_{0}\in\mathbb{R}^{n}\) is an arbitrary nonzero vector and \(J^{(l)}\) is the layer-wise input-output Jacobian
\[J^{(l)}(\mathbf{z})=\left(\begin{array}{ccc}J_{11}^{(l)}(\mathbf{z})&\cdots&J_{1n}^ {(l)}(\mathbf{z})\\ \vdots&\ddots&\vdots\\ J_{n1}^{(l)}(\mathbf{z})&\cdots&J_{nn}^{(l)}(\mathbf{z})\end{array}\right)\quad\mathrm{ with}\quad J_{ij}^{(l)}(\mathbf{z}):=W_{ij}^{(l)}h^{\prime}(z_{j}). \tag{4}\]
By doing so we can directly see how the notion of the order-to-chaos transition emerges as a many-body effect in the neural networks; see the numerical results3 in Fig. 1(b). In the case where the
Figure 1: Absorbing phase transitions in deep neural networks. (a) Phase diagram of signal propagation in FC and Conv (see text for their formal definition). The solid curve indicates the phase boundary as derived from their respective mean-field theory [12, 17], identical with each other. (b) The maximum generalized Lyapunov exponent \(\lambda_{1}\) (see Eq. (3)) in FC as a function of the weight initialization \(\sigma_{w}\), numerically calculated for various width \(n\) (1 (purple), 2 (green), 9 (light blue), 20 (orange) and 50 (red)). (c) Similar with (b), but now with Conv for various number of channels \(c\) (from 5 (yellow) to 50 (brown)). The width \(n,k\) of the network and the convolutional filters are respectively fixed to 50 and 3. The standard deviation \(\sigma_{b}\) for the bias vectors is fixed to be 0.3 (for FC) and \(\sqrt{20}\times 10^{-3}\) (for Conv).
hidden layer consists of only a small number \(n\) of neurons (say \(n\lesssim 10\) for FC), the maximum Lyapunov exponent \(\lambda_{1}\) as a function of the weight initialization \(\sigma_{w}\) is negative in the entire domain, which suggests that the ordered state is always stable against infinitesimal discrepancy. However, \(\lambda_{1}\) increases as \(n\) becomes larger, and eventually, \(\lambda_{1}\) changes its sign at some \(\sigma_{w}\) for large \(n\), indicating loss of the linear stability. Naturally, the position of the onset of the linear instability is very close to that of the critical point predicted from the mean-field theory [12] when \(n\) is large, and is expected to coincide in the limit of \(n\to\infty\).
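Curves like those in Fig. 1(b) can be reproduced by evolving a tangent vector alongside the signal and renormalizing it layer by layer, which estimates \(\lambda_{1}\) in (3) via the Jacobians (4). A minimal sketch, under our own naming conventions:

```python
import numpy as np

def max_lyapunov_fc(n=50, L=2000, sigma_w=1.5, sigma_b=0.3, seed=0):
    """Estimate lambda_1 of Eq. (3): average log growth rate of a tangent
    vector under the layer-wise Jacobians J_ij = W_ij h'(z_j), h = tanh."""
    rng = np.random.default_rng(seed)
    z = rng.normal(size=n)
    u = rng.normal(size=n)
    u /= np.linalg.norm(u)
    log_growth = 0.0
    for _ in range(L):
        W = rng.normal(0.0, sigma_w / np.sqrt(n), size=(n, n))
        b = rng.normal(0.0, sigma_b, size=n)
        h = np.tanh(z)
        u = W @ ((1.0 - h**2) * u)      # apply J, with h'(z) = 1 - tanh(z)^2
        z = W @ h + b
        norm = np.linalg.norm(u)
        log_growth += np.log(norm)
        u /= norm                       # renormalize to avoid under/overflow
    return log_growth / L
```

Sweeping `sigma_w` for several widths `n` then traces out the sign change of \(\lambda_{1}\) near the mean-field critical point.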
Thus, the maximum Lyapunov exponent \(\lambda_{1}\) successfully captures the well-defined transition from the ordered phase to the chaotic phase even for finite networks. In the ordered phase, once a pair of preactivations \((\mathbf{z}_{1},\mathbf{z}_{2})\) reaches reasonably close to the ordered state, it is hard to escape from it. Meanwhile, in the chaotic phase, a pair of preactivations are allowed to get away from the vicinity of the ordered state, although the ordered state itself is still absorbing. This scenario, a transition from a non-fluctuating absorbing phase to a fluctuating active phase, is highly reminiscent of an absorbing phase transition in statistical mechanics.
A similar scenario also holds for Conv, but the qualitative difference from FC in the behavior of the maximum generalized Lyapunov exponent \(\lambda_{1}\) in the vicinity of the critical point calls for further discussion. In Conv with fixed width \(n\), we numerically observe that \(\lambda_{1}\) increases as the number \(c\) of channels does so in the ordered phase, whereas it decreases in the chaotic phase. This is in sharp contrast with FC, where \(\lambda_{1}\) increases as \(n\) does so regardless of the phase. This tendency suggests that, in the limit of \(c\to\infty\), the derivative of \(\lambda_{1}\) with respect to \(\sigma_{w}\) vanishes at the critical point, and therefore the characteristic depth for the transition diverges faster than the reciprocal of the deviation \(|\sigma_{w}-\sigma_{w;c}|\) from the critical point. Later we will provide additional evidence that the correlation depth indeed diverges faster than \(|\sigma_{w}-\sigma_{w;c}|^{-1}\).
## 4 Universal scaling around the order-to-chaos transition
Having seen that the order-to-chaos transition is at least conceptually analogous to absorbing phase transitions, the next step is to seek the deeper connection between these two by further quantitative characterization. One of the most common strategies for studying systems with absorbing phase transition is to examine universal scaling laws [7; 8]. Systems with a continuous transition to an absorbing phase can be characterized by power-law behavior for various quantities. For example, an order parameter (a quantity that vanishes in an absorbing phase whereas remaining positive otherwise) \(\rho\) and correlation time \(\xi_{\parallel}\) for the statistically steady state respectively exhibit power-law onset and divergence with some suitable exponents
\[\rho\sim\tau^{\beta},\quad\xi_{\parallel}\sim|\tau|^{-\nu_{\parallel}} \tag{5}\]
in the vicinity of the critical point, where \(\tau\) denotes the deviation from the critical point (we define4 it to be \(\tau:=\sigma_{w}-\sigma_{w;c}\) in the present work, where \(\sigma_{w;c}\) is the weight initialization parameter \(\sigma_{w}\) at the critical point), and \(\beta,\nu_{\parallel}\) are the exponents associated with the power-law scaling. Moreover, the exponents, hereafter referred to as the _critical exponents_, are universal in the sense that they are believed to depend only on fundamental properties of the system, such as spatial dimensionality and symmetry, giving rise to the concept of the _universality classes_. The complexity of the underlying first-principle theory is not necessarily a problem for the universal scaling; even the transition between two topologically different turbulent states of electrohydrodynamic convection in liquid crystals, of which one cannot hope to construct the comprehensible first-principle theory, has been demonstrated to exhibit clear universal scaling laws [20; 21] with the critical exponents identical with the contact process [22], a massively simplified stochastic model for population growth. Thus, the critical exponents are expected to provide a keen insight into _a priori_ complex systems.
Footnote 4: The choice of how to quantify the discrepancy is somewhat arbitrary; such details generally do not affect the estimate of the critical exponents.
Some preparations are in order before we proceed:
* In the present context, the depth \(l\) of the hidden layer can be regarded as time, because the signal propagates sequentially across the layers and yet simultaneously within a layer. Hereafter, the neural-network counterpart for the correlation time will be referred to as the correlation depth.
* A natural candidate for the order parameter \(\rho\) in the present context, which we will use in the following, is the Pearson correlation coefficient between preactivations for different inputs, subtracted from unity so that \(\rho\) vanishes in the ordered state: \[\rho^{(l)}[\sigma_{w};n]:=1-\frac{\sum_{i}(z_{1;i}^{(l)}-Z_{1}^{(l)})(z_{2;i}^{(l)}-Z_{2}^{(l)})}{\sqrt{\sum_{i}(z_{1;i}^{(l)}-Z_{1}^{(l)})^{2}\sum_{i}(z_{2;i}^{(l)}-Z_{2}^{(l)})^{2}}},\tag{6}\] where \(\boldsymbol{z}_{1}^{(l)},\boldsymbol{z}_{2}^{(l)}\in\mathbb{R}^{n}\) are the preactivations at the \(l\)th hidden layer for different inputs \(\boldsymbol{x}_{1},\boldsymbol{x}_{2}\), \(z_{j;i}^{(l)}\) the \(i\)th element of \(\boldsymbol{z}_{j}^{(l)}\) and \(Z_{j}^{(l)}:=\frac{1}{n}\sum_{i}z_{j;i}^{(l)}\). In the case of Conv where we have multiple channels, \(\rho^{(l)}\) is obtained by first calculating the correlation coefficient (6) for each channel and then taking the average over all the channels. A computational sketch of this estimator is given below.
* While one could measure the critical exponents directly from the Eq. (5) (although one still has to formally define the correlation depth \(\xi_{\parallel}\)), the more informative approach we employ here is to examine the dynamical scaling, where we study the scaling properties of \(\rho\) as a function of the depth \(l\), not only that in the infinitely-deep limit. In the framework of phenomenological scaling theory described in Appendix A, one can see that \(\rho\) is expected to follow the universal scaling ansatz below (here we recall \(\tau:=\sigma_{w}-\sigma_{w;c}\)): \[\lim_{n\to\infty}\rho^{(l)}[\sigma_{w};n]\simeq l^{-\beta/\nu_{\parallel}}f( \tau l^{1/\nu_{\parallel}}).\] (7) Now let us demonstrate the utility of the aforementioned phenomenological scaling theory with FC, where much of the critical properties can be studied in a rigorous manner. In the case of FC, the critical exponents \(\beta,\nu_{\parallel}\) can be analytically derived as a fairly straightforward (albeit a bit tedious) extension of the theoretical analysis by Schoenholz _et al._[13]: That is, we consider infinitesimally small deviation \(\delta\sigma_{w}\) from the critical point \(\sigma_{w;c}\) and expand the mean-field theory [12] to track the change of the position of the fixed point and of the characteristic depth (\(\xi_{c}\) in Ref. [13]) up to the first-order of \(\delta\sigma_{w}\) (whereas change of the infinitesimal deviation from the fixed point with respect to depth for arbitrary \(\sigma_{w}\) was studied in the preceding literature [13]). We leave the detailed derivation to Appendix B, and merely quote the final results: \[\beta_{\rm FC}=1,\quad\nu_{\parallel{\rm FC}}=1.\] (8) Naturally, the above scaling exponents can be empirically validated by checking the data collapse expected from Eq. (7), as we show in Fig. 2(a).
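For finite-width experiments, the order parameter (6) can be estimated directly from sampled preactivations. A minimal sketch, reusing the `fc_prop` helper from the earlier sketch (our own naming, not part of the original text):

```python
import numpy as np

def order_parameter(z1, z2):
    """rho = 1 - Pearson correlation between two preactivation vectors, Eq. (6).
    For Conv, apply per channel and average over channels."""
    a, b = z1 - z1.mean(), z2 - z2.mean()
    return 1.0 - (a @ b) / np.sqrt((a @ a) * (b @ b))

# Example: average order_parameter over many seeds of fc_prop for several
# sigma_w near the critical point; plotting l**(beta/nu_par) * rho against
# tau * l**(1/nu_par) should then collapse the curves onto a single scaling
# function, as in Eq. (7) and Fig. 2(a).
```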
Besides the analytical treatment, it is worthwhile to note that heuristics are also available for quickly understanding some aspects of the results. In the vicinity of the ordered state (\(\rho=0\)), the dynamics of the order parameter \(\rho\) can be described by a linear recurrence relation at the lowest order of approximation, whose coefficient \(\gamma\) is given by the Jacobian of the mean-field theory at the fixed point corresponding to the ordered state. However, the linear approximation is not necessarily valid in the entire domain; in particular, one expects saturation of \(\rho\) due to the bounded nature of the activation function (note also that, for the present definition (6) of the order parameter, the range of \(\rho\) is bounded in the first place), which gives rise to a quadratic loss preventing \(\rho\) from diverging to infinity. To sum up, one arrives at the following approximate description for the dynamics of \(\rho\):
\[\frac{\mathrm{d}\rho}{\mathrm{d}l}=\gamma(\tau)\rho-\kappa\rho^{2}, \tag{9}\]
where \(\gamma(\tau)\) and \(\kappa\) are phenomenological parameters (here we emphasized the dependence of \(\gamma\) on the deviation \(\tau\) from the critical point; the sign of \(\gamma\) and \(\tau\) should be the same). The above equation coincides with the mean-field theory for absorbing phase transitions [7, 8], and it admits the universal scaling ansatz (7) with the critical exponents (8). Of course, higher-order corrections are present in reality, but they do not affect the scaling properties of the networks (that is, the corrections are irrelevant in the sense of renormalization group in statistical mechanics).
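As a quick numerical check of this picture, Eq. (9) can be integrated directly. The parameter choices below, including the linearization \(\gamma(\tau)=\gamma_{0}\tau\), are illustrative assumptions of ours:

```python
import numpy as np

def rho_mean_field(tau, l_max=200.0, dl=0.01, gamma0=1.0, kappa=1.0, rho0=1e-3):
    """Euler integration of d(rho)/dl = gamma(tau)*rho - kappa*rho**2 (Eq. (9)),
    with the linearization gamma(tau) = gamma0 * tau near the critical point."""
    steps = int(l_max / dl)
    rho = np.empty(steps)
    r = rho0
    for i in range(steps):
        r += dl * (gamma0 * tau * r - kappa * r * r)
        rho[i] = r
    return rho

# Plotting l * rho(l) against tau * l for several small tau should collapse the
# curves onto a single scaling function, reflecting beta = nu_par = 1 (Eq. (8)).
```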
A real virtue of the phenomenological scaling argument is that it provides us useful intuition even into the networks of finite width, where quantitatively tracking the deviation from the Gaussian process can be cumbersome (if not impossible [23, 24]). To illustrate this point, let us consider the finite-size scaling of FC at the critical point (corresponding to \(\tau=0\) in the above heuristic argument). One can observe that the fourth-order (and other even-order) cumulants come into play in the case of finite
networks, although the third-order (and other odd-order, except the first) ones vanish5 because the activation function is odd. In the spirit of the asymptotic expansion of the probability distribution [25], this observation indicates that the leading correction to the Gaussian process is an order of \(n^{-1}\), the reciprocal of the width. Thus, together with a trivial fact that \(\rho=0\) is an absorbing state also for the finite networks, we are led to the following modified phenomenology:
Footnote 5: (Remarks for the readers familiar with statistical mechanics) This peculiarity explains why the finite size scaling in FC is different from that in the contact process [22] on a complete graph, where one finds the same \(\beta\) and \(\nu_{\parallel}\) (Eq. (8)) but the exponent for finite size scaling (11) is replaced with \(-1/2\). If the third-order cumulant remains non-zero, the leading order for the correction is an order of \(n^{-\frac{1}{2}}\).
\[\frac{\mathrm{d}\rho}{\mathrm{d}l}=-\frac{\lambda}{n}\rho-\kappa\rho^{2}, \tag{10}\]
which admits the finite-size scaling ansatz below:
\[\rho^{(l)}[\sigma_{w}=\sigma_{w;c};n]\simeq n^{-1}f(n^{-1}l). \tag{11}\]
Figure 2: Universal scaling laws in the order-to-chaos transition. (a) The order parameter \(\rho^{(l)}\) (see Eq. (6)) as a function of the depth \(l\) of the hidden layer for various weight initialization \(\sigma_{w}\) (from 1.35 (blue; ordered phase) to 1.45 (magenta; chaotic phase)) in the infinitely-wide FC, as calculated from the numerical solution of the mean-field theory [12]. The inset shows the same data rescaled according to the universal scaling ansatz (7) with critical exponents (8). (b) The order parameter \(\rho^{(l)}\) at the critical point \(\sigma_{w;c}\sim 1.395584\) for various width \(n\) (from 50 (purple) to 400 (orange)), empirically averaged over \(10^{4}\) realizations. Two orthogonal (that is, the dot product of zero) inputs of size \(n_{0}=10\) were given. The inset shows the same data rescaled according to the universal scaling ansatz (11). The black dashed curve indicates the solution of the phenomenology (10), with \(\lambda=0.288,\kappa=0.686\). (c) Similar with (a), but now with the one-dimensional Conv with \(n=100\) and \(c=5\), empirically averaged over \(10^{4}\) realizations. The two inputs \(\mathbf{x}_{1},\mathbf{x}_{2}\) were set to be identical with each other, except a single element to be different by unity. The weight initialization \(\sigma_{w}\) was varied from 1.41 (blue) to 1.45 (magenta). The inset shows the same data rescaled with the critical exponents of \((1+1)\)-dimensional directed percolation (13). \(\sigma_{w;c}=1.4335\) is chosen to find the scaling collapse. (d) Similar with (b), but now with Conv near the critical point (\(\sigma_{w}=1.428\)) for \(n\) from 50 (purple) to 200 (light blue). The black dashed line is a guide-to-eye for \(l^{-1}(=l^{-\beta_{\mathrm{FC}}/\nu_{\mathrm{FC}}})\). The inset shows the same data rescaled according to (11). In both FC and Conv, the standard deviation \(\sigma_{b}\) for bias vectors are fixed to be 0.3.
We empirically checked whether the universal scaling ansatz (11) indeed holds for FC, and the result was affirmative, at least to a good approximation; see the data collapse in Fig. 2(b). It is interesting to note that the phenomenology (10) can be solved analytically to find
\[n\rho^{(l)}=\frac{\rho_{0}\lambda}{\rho_{0}\kappa(e^{\frac{\lambda l}{n}}-1)+( \lambda/n)e^{\frac{\lambda l}{n}}}. \tag{12}\]
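For completeness, note how (12) follows from (10): the substitution \(y=1/\rho\) linearizes the dynamics,
\[\frac{\mathrm{d}y}{\mathrm{d}l}=\frac{\lambda}{n}\,y+\kappa,\qquad y(l)=\left(\frac{1}{\rho_{0}}+\frac{n\kappa}{\lambda}\right)e^{\frac{\lambda l}{n}}-\frac{n\kappa}{\lambda},\]
and inverting \(y(l)\), then multiplying by \(n\), recovers (12).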
The empirical results for \(\rho^{(l)}\) after the scaling collapse can be fitted reasonably well by (12) with a suitable choice of the parameters \(\rho_{0},\lambda,\kappa\). In particular, one can see that the solution (12) exhibits a crossover from the power-law decay to the exponential one at \(l/n\sim 1/\lambda\), based on which one can judge whether a given deep neural network is exceedingly wide compared to its depth or vice versa. Thus the phenomenological scaling argument serves as a fast track to the recent theoretical idea that the width-to-depth ratio is a more informative quantity for describing the properties of the network than the nominal depth or width is, at least in the case of FC [14].
Next we study Conv to demonstrate that a different network structure results in a different universality class to which the network belongs. Our empirical studies (see Fig. 2(c)) suggest that, like FC, the universal scaling ansatz (7) remains valid for Conv, although we cannot expect a clear scaling collapse if \(\sigma_{w}\) is too close to the critical point, due to the so-called finite-size effects. The associated critical exponents, however, are considerably different from those of FC (Eq. (8)); rather, they are close to those of the _directed percolation_ (DP) [26] in \((1+1)\) dimensions (that is, both the preferred direction and the space perpendicular thereto are one-dimensional) [27]:6
Footnote 6: Here, the mathematical symbol \(\sim\) instead of \(=\) is used in Eq. (13). To the best of our knowledge, the directed percolation has not been exactly solved, and hence only the numerically estimated values are available.
\[\beta_{\rm 1DDP}\sim 0.27649,\quad\nu_{\parallel 1{\rm DDP}}\sim 1.73385. \tag{13}\]
Again we argue that, equipped with some prior knowledge in statistical mechanics, the difference between FC and Conv can be understood quite naturally. In the case of FC, a single neuron in a hidden layer is connected to all the neurons in the layer above. Regarding the depth as time and speaking in physics language, each neuron is effectively in a very high-dimensional space, in which case one typically expects the mean-field scaling. In contrast, the neurons in Conv interact only locally through the convolutional filters, and the mean-field picture does not necessarily apply. In this case, the robustness of the DP universality class is the key; it is conjectured by Janssen and Grassberger [28, 29] that systems exhibiting a continuous phase transition into an absorbing state without exceptional properties (long-range interaction, higher symmetry, etc.) belong to the DP universality class. Since the exceptional properties seem absent in Conv, it is natural to expect the DP universality, and the results in Fig. 2(c) suggest that this is indeed the case. The discussion presented here indicates that the spatial dimensionality of the network is relevant for describing the signal propagation dynamics in Conv. In passing, we have checked (though figures are not shown) that essentially the same scenario holds true for the two-dimensional Conv (\(d=2\)), where the critical exponents \(\beta,\nu_{\parallel}\) are replaced [30, 31] with
\[\beta_{\rm 2DDP}\sim 0.58,\quad\nu_{\parallel 2{\rm DDP}}\sim 1.29. \tag{14}\]
The locality of the connections for Conv induces the correlation width \(\xi_{\perp}\) within a hidden layer. The correlation width \(\xi_{\perp}\) for a system within the \((1+1)\)-dimensional DP universality class exhibits power-law divergence with the following critical exponent \(\nu_{\perp 1{\rm DDP}}\):
\[\xi_{\perp}\sim\tau^{-\nu_{\perp 1{\rm DDP}}}\quad{\rm with}\quad\nu_{\perp 1{ \rm DDP}}\sim 1.09685. \tag{15}\]
Importantly, the exponent \(\nu_{\perp}\) for the correlation width does _not_ coincide with the exponent \(\nu_{\parallel}\) for the correlation depth. This fact can be seen as a caution: the most informative combination of the network width \(n\) and the depth \(L\) for describing the behavior of the neural networks is generally nontrivial (one might be tempted to simply use the ratio \(L/n\), and indeed this works for FC, but not necessarily for other architectures). In the case where an intralayer length scale is well-defined (unlike FC), the universal scaling ansatz for the finite-size scaling in the intermediate layer \(l\)
\[\rho^{(l)}[\sigma_{w}=\sigma_{w;c};n]\simeq n^{-\beta/\nu_{\perp}}f(n^{-\nu_{ \parallel}/\nu_{\perp}}l) \tag{16}\]
can be derived within the framework of the phenomenological scaling theory, just as we did for Eq. (7) (see also Appendix A). That is, the most informative combination of the width \(n\) and the depth \(L\) for describing the behavior of the critically initialized deep neural networks may be \(L/n^{\nu_{\parallel}/\nu_{\perp}}\) (for the \((1+1)\)-dimensional DP values quoted above, \(\nu_{\parallel}/\nu_{\perp}\approx 1.58\)).
Finally, one may wonder whether the DP universality presented here for Conv with a finite number \(c\) of channels is coherently connected to the \(c\to\infty\) limit, where the signal propagation dynamics is reduced to the mean-field theory [17]. In the case of finite \(c\), the phenomenology is similar to that in the diffusive contact process [32]. That is, we expect the existence of the depth scale \(l^{*}\) below which the network effectively exhibits the mean-field scaling. Indeed, in the vicinity of the critical point \(\sigma_{w;c}\), we can observe the power-law decay of the order parameter \(\rho^{(l)}\) in agreement with the mean-field universality class (8) for sufficiently small \(l\) (see Fig. 2(d)). Conversely, we expect the DP scaling at a depth scale larger than \(l^{*}\). The depth scale \(l^{*}\) at which the crossover occurs increases with \(c\), and eventually diverges in the \(c\to\infty\) limit, meaning that the mean-field scaling is fully recovered.
## 5 Discussions
In the present work, we pursued the analogy between the behavior of the classical deep neural networks and absorbing phase transitions. During the pursuit, we performed the linear stability analysis of the order-to-chaos transition for neural networks of finite width or a finite number of channels and uncovered the universal scaling laws in the signal propagation process in the initialized networks. In the language of absorbing phase transitions, the structural difference between FC and Conv, namely the locality of the coupling between the neurons within a hidden layer, is reflected in the universality class and thereby the value of the critical exponents. Thus we demonstrated the promising potential of heuristic arguments for the semi-quantitative description of the deep neural networks.
Let us now return to the question of similarities and differences between human brains and artificial deep neural networks. The present work suggests that, if adequately initialized, even classical deep neural networks utilize the criticality of absorbing phase transitions, just like the brains, at the early stage of the training process. However, it is easy to see experimentally that the weights and biases cease to be at the critical point during the training unless one designs the network to be extremely wide compared to the depth. This suggests that the classical networks equipped with typical optimization schemes do not have the auto-tuning mechanisms toward the criticality. It remains an important question whether (and if so, how) such auto-tuning mechanisms are implemented in the state-of-the-art architectures and/or optimization schemes.
We foresee some interesting directions for future work. One of the most natural directions is to extend the present analysis to the backpropagation and to analyze the training dynamics. The neural tangent kernel (NTK) [33] has played a crucial role in the study of the training dynamics in the infinitely-wide deep neural networks, but it is repeatedly argued in the literature that the infinitely-wide limit cannot fully explain the success of deep learning [34; 35]. We are aware that some of the recent works extend the analysis beyond the infinitely-wide limit [36]. It would be interesting to see how the bottom-up approach established in the literature and the top-down one presented here can be merged into a further improved understanding of deep learning. In particular, going beyond the infinitely-many-channel limit for Conv solely by the bottom-up approaches is not very likely to be feasible due to the notorious mathematical difficulty of the directed percolation problem [37]. In this case, we believe an appropriate combination of rigorous analysis and heuristics is necessary to make progress.
Limitations. The attempt to characterize the behavior of artificial deep neural networks in terms of absorbing phase transitions is admittedly in its infancy. The most critical limitation in our opinion is that we only dealt with the classical cases where the signals propagate across the hidden layers in a purely sequential manner. As such, extension of the present analysis to the networks of more practical use is not necessarily straightforward, although the notion of the edge of chaos is still valid in some of these cases [38; 39] and therefore one should not be too pessimistic about the feasibility.7 Note also that implications of the analogy for the learning dynamics have not been thoroughly investigated. Thus this is not the end of the story at all; rather it is only the beginning. Nevertheless, we hope that this work accelerates the use of recent ideas in statistical mechanics for improving our understanding of deep learning.
Footnote 7: At this point, it may be interesting to point out that physical systems with time-delayed feedback can still be analyzed in the framework of absorbing phase transition [40; 41].
## Acknowledgments and Disclosure of Funding
The numerical experiments for supporting our arguments in this work (producing Fig. 2 in particular) were performed on the cluster machine provided by Institute for Physics of Intelligence, The University of Tokyo. This work was supported by the Center of Innovations for Sustainable Quantum AI (JST Grant Number JPMJPF2221). T.O. and S.T. wish to thank support by the Endowed Project for Quantum Software Research and Education, The University of Tokyo ([https://qsw.phys.s.u-tokyo.ac.jp/](https://qsw.phys.s.u-tokyo.ac.jp/)).
## Appendix A Basics of phenomenological scaling theory
The purpose of this section is to briefly recall the phenomenological scaling theory for non-equilibrium phase transitions, as the readers are not necessarily familiar with statistical mechanics. In the phenomenological scaling theory we employ throughout the paper, we postulate the following two ansatzes:
1. The behavior of systems near a critical point can be characterized by a single correlation length \(\xi_{\perp}\) (if any) and a single correlation time \(\xi_{\parallel}\). These length scales diverge at the critical point.
2. Any measurable quantities which characterize the transition (that is, vanish or diverge at the critical point) exhibit power law scaling with a suitable exponent (often called a _critical exponent_) as we vary the discrepancy from the critical point.
For example, let us consider a measurable quantity \(\rho\) (with the critical exponent \(\beta(>0)\)), which depends on time \(t\) and the (signed) discrepancy \(\tau\) from the critical point. Then, the first ansatz states that \(\rho(t,\tau)\) is a function of \(t/\xi_{\parallel}\), parameterized by \(\tau\):
\[\rho(t,\tau):=R_{\tau}(t/\xi_{\parallel,\tau}). \tag{17}\]
Thus the first ansatz introduces a one-parameter family of functions \(R=\{R_{\tau}:\mathbb{R}\to\mathbb{R}|\tau\in\mathbb{R}\}\). The second ansatz is about the relationship between different members of the one-parameter family. That is, we postulate that the correlation time \(\xi_{\parallel}\) (the critical exponent for the correlation time is conventionally denoted as \(-\nu_{\parallel}\)) and the function \(R_{\tau}\) is scaled respectively by \(\lambda^{-\nu_{\parallel}}\) and \(\lambda^{\beta}\), as the discrepancy \(\tau\) is multiplied by a factor \(\lambda>0\):
\[\xi_{\parallel,\lambda\tau}=\lambda^{-\nu_{\parallel}}\xi_{\parallel,\tau}, \quad R_{\lambda\tau}(x)=\lambda^{\beta}R_{\tau}(x)\ \mathrm{for}\ \forall x\in\mathbb{R}. \tag{18}\]
In order to demonstrate how one can obtain useful formulae from this theoretical framework, let us derive Eq. (7) in the main text. The following equality immediately follows from Eq. (18):
\[R_{\tau}(t/\xi_{\parallel,\tau})=\lambda^{-\beta}R_{\lambda\tau}(\lambda^{- \nu_{\parallel}}t/\xi_{\parallel,\lambda\tau}). \tag{19}\]
By recalling Eq. (17) and substituting \((t/T)^{1/\nu_{\parallel}}\) (where \(T\) is an arbitrary constant having a dimension of time) into \(\lambda\), we find
\[\rho(t,\tau)=(t/T)^{-\beta/\nu_{\parallel}}\rho(T,(t/T)^{1/\nu_{\parallel}}\tau). \tag{20}\]
The above result implies that \(t^{\beta/\nu_{\parallel}}\rho(t,\tau)\) is a function of \(\tau t^{1/\nu_{\parallel}}\)
\[\rho(t,\tau)=t^{-\beta/\nu_{\parallel}}f(\tau t^{1/\nu_{\parallel}}), \tag{21}\]
which can be checked by examining a data collapse, as we have seen in Fig. 2.
If the system has a well-defined length (unlike FC), the phenomenological scaling theory can be extended to study the universal scaling properties within finite system size \(L\). In this case, the first ansatz states that \(\rho(t,\tau,L)\) is a function of \(t/\xi_{\parallel}\) and \(L/\xi_{\perp}\), parameterized by \(\tau\):
\[\rho(t,\tau,L):=R_{\tau}(t/\xi_{\parallel,\tau},L/\xi_{\perp,\tau}) \tag{22}\]
Then, by repeating the same argument as before, we arrive at the following finite-size scaling ansatz:
\[\rho(t,\tau,L)\simeq\lambda^{-\beta}g(\lambda^{-\nu_{\parallel}}t,\lambda \tau,\lambda^{-\nu_{\perp}}L), \tag{23}\]
where \(-\nu_{\perp}\) is the critical exponent for the correlation length, and \(g\) is a suitable scaling function.
An important point here is that the critical exponents are believed to be universal. Systems with continuous phase transitions are classified into a small number of _universality classes_, and systems within the same universality class share the same essential properties and the critical exponents. Hence essential mechanisms behind the transition can be deduced from measurements of the critical exponents. Interested readers are referred to the textbook by Henkel _et al._[7] for further details; alternatively, a preprint of the review article by Hinrichsen [8] is freely available on arXiv.
## Appendix B Derivation of the critical exponents for FC
Here we show the derivation of Eq. (8) in the main text. The starting point is the mean-field theory of the preactivations of FC by Poole _et al._[12], which becomes exact in the limit of infinitely wide network [42, 43]:
\[q^{(l+1)}=\sigma_{w}^{2}\int\mathrm{d}z\frac{1}{\sqrt{2\pi}}e^{-\frac{z^{2}}{ 2}}h^{2}(\sqrt{q^{(l)}}z)+\sigma_{b}^{2}; \tag{24}\]
\[c^{(l+1)}=\frac{1}{\sqrt{q_{1}^{(l)}q_{2}^{(l)}}}\left[\sigma_{w}^{2}\int \mathrm{d}z_{1}\int\mathrm{d}z_{2}\frac{1}{\sqrt{(2\pi)^{2}}}e^{-\frac{z_{1}^{ 2}+z_{2}^{2}}{2}}h(u_{1}^{(l)})h(u_{2}^{(l)})+\sigma_{b}^{2}\right], \tag{25}\]
where \(q^{(l)}\) denotes the variance of the preactivation at the \(l\)th hidden layer (different subscripts correspond to different input), \(c^{(l)}\) the Pearson correlation coefficient of the preactivations for different inputs, and \(u_{1}^{(l)}=\sqrt{q_{1}^{(l)}}z_{1},u_{2}^{(l)}=\sqrt{q_{2}^{(l)}}(c^{(l)}z_{ 1}+\sqrt{1-c^{(l)2}}z_{2})\). One can readily see that the order parameter \(\rho^{(l)}\) defined in the main text (namely Eq. (6)) is related to \(c^{(l)}\) in the limit of infinitely wide network:
\[\rho^{(l)}[\sigma_{w};\infty]:=\lim_{n\to\infty}\rho^{(l)}[\sigma_{w};n]=1-c^{ (l)}. \tag{26}\]
This is a dynamical system with two degrees of freedom, and a single stable fixed point \((q^{*},c^{*})\) exists [12] for given initialization parameters \((\sigma_{w},\sigma_{b})\). In this framework, the ordered (chaotic) phase of the deep neural network is characterized by the linear stability (instability) of the trivial fixed point \((q^{*},1)\). As such, the position of the critical point \(\sigma_{w;c}\) for a given \(\sigma_{b}\) can be determined by solving
\[\sigma_{w;c}^{2}\int\mathrm{d}z\frac{1}{\sqrt{2\pi}}e^{-\frac{z^{2}}{2}}h^{ \prime 2}(\sqrt{q^{*}(\sigma_{w}=\sigma_{w;c})}z)=1. \tag{27}\]
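For readers who wish to reproduce these fixed points numerically, the following Python sketch iterates the \(q\)-map (24) for \(h=\tanh\) and locates \(\sigma_{w;c}\) from the criticality condition (27) by bisection, evaluating the Gaussian integrals with Gauss-Hermite quadrature. This is our own illustration rather than the original authors' code, and the quadrature order and iteration counts are arbitrary choices.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

Z, W = hermegauss(61)                   # nodes/weights for E_{z~N(0,1)}[.]
W = W / np.sqrt(2.0 * np.pi)

def gauss_mean(f):
    """E[f(z)] for standard Gaussian z, via Gauss-Hermite quadrature."""
    return float(np.sum(W * f(Z)))

def q_star(sw, sb, iters=500):
    """Fixed point of the q-map (24) for h = tanh."""
    q = 1.0
    for _ in range(iters):
        q = sw**2 * gauss_mean(lambda z: np.tanh(np.sqrt(q) * z)**2) + sb**2
    return q

def chi1(sw, sb):
    """LHS of the criticality condition (27); h'(x) = sech^2(x)."""
    q = q_star(sw, sb)
    return sw**2 * gauss_mean(lambda z: np.cosh(np.sqrt(q) * z)**-4)

def critical_sigma_w(sb, lo=0.5, hi=3.0):
    """Bisection for chi1 = 1, i.e. the order-to-chaos boundary."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if chi1(mid, sb) < 1.0 else (lo, mid)
    return 0.5 * (lo + hi)

print(critical_sigma_w(0.3))  # should be close to the ~1.3956 quoted in Fig. 3
```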
It can be shown that the discrepancy from the fixed point \(c^{*}\) asymptotically decays exponentially with a suitable correlation depth \(\xi_{c}\)[13]:
\[\lim_{l\to\infty}\frac{\log|c^{(l)}-c^{*}|}{l}=-\frac{1}{\xi_{c}} \tag{28}\]
with
\[\xi_{c}^{-1}=\left\{\begin{array}{ll}-\log\left[\sigma_{w}^{2}\int\mathrm{d}z\frac{1}{\sqrt{2\pi}}e^{-\frac{z^{2}}{2}}h^{\prime 2}(\sqrt{q^{*}}z)\right]&\sigma_{w}<\sigma_{w;c}\\ -\log\left[\sigma_{w}^{2}\int\mathrm{d}z_{1}\int\mathrm{d}z_{2}\frac{1}{\sqrt{(2\pi)^{2}}}e^{-\frac{z_{1}^{2}+z_{2}^{2}}{2}}h^{\prime}(u_{1}^{*})h^{\prime}(u_{2}^{*})\right]&\sigma_{w}>\sigma_{w;c},\end{array}\right. \tag{29}\]
where \(u_{1}^{*}=\sqrt{q^{*}}z_{1}\) and \(u_{2}^{*}=\sqrt{q^{*}}(c^{*}z_{1}+\sqrt{1-c^{*}{}^{2}}z_{2})\).
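Reusing `q_star`, `gauss_mean`, `chi1` and `critical_sigma_w` from the sketch above, one can evaluate the ordered-phase branch of Eq. (29) on a sequence of points approaching \(\sigma_{w;c}\) and observe the linear vanishing of \(\xi_{c}^{-1}\); the printed ratio should approach \(\iota_{1}\) of Eq. (39). The chaotic-phase branch additionally requires the fixed point \(c^{*}\) and is omitted here for brevity.

```python
import numpy as np

sb = 0.3
sw_c = critical_sigma_w(sb)             # from the previous snippet
for d in [0.1, 0.05, 0.025, 0.0125]:
    sw = sw_c - d                       # approach criticality from below
    inv_xi = -np.log(chi1(sw, sb))      # ordered-phase branch of Eq. (29)
    print(f"{d:8.4f}  {inv_xi:10.6f}  {inv_xi / d:8.4f}")  # ratio -> iota_1
```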
The two central tasks for proving Eq. (8)
\[\lim_{l\to\infty}\rho^{(l)}\sim(\sigma_{w}-\sigma_{w;c})^{\beta_{\mathrm{FC}} },\;\xi_{c}\sim|\sigma_{w}-\sigma_{w;c}|^{-\nu_{\parallel\mathrm{FC}}}\quad \mathrm{with}\quad\beta_{\mathrm{FC}}=1,\;\nu_{\parallel\mathrm{FC}}=1\]
are the following (although it is fairly easy to see them empirically; see Fig. 3):
1. **For \(\mathbf{\beta}\)**: Prove that \(c^{*}\) as a function of \(\sigma_{w}\) is continuous (but not differentiable) at \(\sigma_{w}=\sigma_{w;c}\). In particular, there exists a one-sided limit \(\zeta>0\) so that \[\lim_{\delta\sigma_{w}\to 0^{+}}\frac{c^{*}(\sigma_{w;c}+\delta\sigma_{w})}{ \delta\sigma_{w}}=-\zeta.\] (30)
2. **For \(\mathbf{\nu}_{\parallel}\)**: Prove that \(\xi_{c}^{-1}\) as a function of \(\sigma_{w}\) approaches linearly to 0 as \(\sigma_{w}\to\sigma_{w;c}\). That is, there exists \(\iota_{1},\iota_{2}\) such that \[\lim_{\delta\sigma_{w}\to 0^{-}}\frac{\xi_{c}^{-1}(\sigma_{w;c}+\delta \sigma_{w})}{\delta\sigma_{w}}=-\iota_{1};\quad\lim_{\delta\sigma_{w}\to 0^{+}} \frac{\xi_{c}^{-1}(\sigma_{w;c}+\delta\sigma_{w})}{\delta\sigma_{w}}=\iota_{2}.\] (31)
The remainder of this section is organized as follows. First, as a lemma, we prove that \(q^{*}\) as a function of \(\sigma_{w}\) is continuous at \(\sigma_{w}=\sigma_{w;c}\); this also serves as a demonstration of the strategy we employ throughout the proof. Next, we prove the second proposition above, assuming that the first one holds. Finally, the first proposition is proved.
Now let us prove the continuity of \(q^{*}\) as a function of \(\sigma_{w}\). As we stated in the main text, we expand the mean-field theory [12] with respect to infinitesimally small deviation \(\delta\sigma_{w}\) from the critical point \(\sigma_{w;c}\). Consider the fixed point \(q^{*}\) of the mean-field theory (24) for infinitesimally different \(\sigma_{w}\), and let \(\delta\sigma_{w}\) and \(\delta q^{*}\) respectively denote the increment in \(\sigma_{w}\) and \(q^{*}\). Then, we would like to find \(\alpha>0\) such that
\[\delta q^{*}=\alpha\delta\sigma_{w}+O((\delta\sigma_{w})^{2}). \tag{32}\]
The following equality follows by the definition of \(q^{*}\):
\[q^{*}=\sigma_{w}^{2}\int\mathrm{d}z\frac{1}{\sqrt{2\pi}}e^{-\frac{z^{2}}{2}}h ^{2}(\sqrt{q^{*}}z)+\sigma_{b}^{2} \tag{33}\]
We expand the mean-field theory (24) to see the following (here we show the step-by-step calculation for demonstration; after the equation below, straightforward algebraic manipulations are omitted from the proofs for brevity):
\[\begin{split} q^{*}+\delta q^{*}&=(\sigma_{w}+\delta\sigma_{w})^{2}\int\mathrm{d}z\frac{1}{\sqrt{2\pi}}e^{-\frac{z^{2}}{2}}h^{2}(\sqrt{q^{*}+\delta q^{*}}z)+\sigma_{b}^{2}\\ &=\sigma_{w}^{2}\int\mathrm{d}z\frac{1}{\sqrt{2\pi}}e^{-\frac{z^{2}}{2}}\left[h(\sqrt{q^{*}}z)+\frac{\delta q^{*}z}{2\sqrt{q^{*}}}h^{\prime}(\sqrt{q^{*}}z)+O((\delta q^{*})^{2})\right]^{2}+\sigma_{b}^{2}\\ &\quad+2\sigma_{w}\delta\sigma_{w}\int\mathrm{d}z\frac{1}{\sqrt{2\pi}}e^{-\frac{z^{2}}{2}}h^{2}(\sqrt{q^{*}}z)+O((\delta\sigma_{w})^{2})\\ &=\sigma_{w}^{2}\int\mathrm{d}z\frac{1}{\sqrt{2\pi}}e^{-\frac{z^{2}}{2}}h^{2}(\sqrt{q^{*}}z)+\sigma_{b}^{2}\\ &\quad+\delta q^{*}\cdot\frac{\sigma_{w}^{2}}{\sqrt{q^{*}}}\int\mathrm{d}z\frac{z}{\sqrt{2\pi}}e^{-\frac{z^{2}}{2}}h(\sqrt{q^{*}}z)h^{\prime}(\sqrt{q^{*}}z)\\ &\quad+2\sigma_{w}\delta\sigma_{w}\cdot\frac{q^{*}-\sigma_{b}^{2}}{\sigma_{w}^{2}}+O((\delta q^{*})^{2})+O((\delta\sigma_{w})^{2}). \end{split}\tag{34}\]
Figure 3: Quantitative characterization of the order-to-chaos transition in infinitely-wide FC. (a) The fixed point \(q^{*}\) of the recurrence relation (24) as a function of \(\sigma_{w}\). (b) The fixed point \(1-c^{*}\) of the recurrence relation (25) as a function of \(\sigma_{w}\); the black solid line is a guide to the eye for the linear onset expected from Eq. (47). (c) The reciprocal correlation depth \(\xi_{c}^{-1}\), as calculated from Eq. (29); the black solid lines are guides to the eye for the linear onset expected from Eq. (38) (left of the critical point) and Eq. (40) (right). In all three panels, \(\sigma_{b}\) is set to 0.3, and the vertical dashed lines indicate the position of the critical point \(\sigma_{w;c}(\sim 1.3956)\) for that \(\sigma_{b}\).
Subtracting Eq. (33) from Eq. (34) yields
\[\delta q^{*}=\delta q^{*}\sigma_{w}^{2}\int{\rm d}z\frac{z}{\sqrt{2\pi q^{*}}}e^{- \frac{z^{2}}{2}}h(\sqrt{q^{*}}z)h^{\prime}(\sqrt{q^{*}}z)+\frac{2(q^{*}-\sigma_{ b}^{2})}{\sigma_{w}}\delta\sigma_{w}, \tag{35}\]
from which we immediately find
\[\alpha=\frac{2(q^{*}-\sigma_{b}^{2})}{\sigma_{w}\left[1-\sigma_{w}^{2}\int{ \rm d}z\frac{z}{\sqrt{2\pi q^{*}}}e^{-\frac{z^{2}}{2}}h(\sqrt{q^{*}}z)h^{\prime }(\sqrt{q^{*}}z)\right]}. \tag{36}\]
In particular at the critical point \(\sigma_{w;c}\), \(\alpha\) can be further simplified to
\[\alpha=\frac{2(q_{c}^{*}-\sigma_{b}^{2})}{-\sigma_{w}^{3}\int{\rm d}z\frac{1} {\sqrt{2\pi}}e^{-\frac{z^{2}}{2}}h(\sqrt{q_{c}^{*}}z)h^{\prime\prime}(\sqrt{q_ {c}^{*}}z)}, \tag{37}\]
where \(q_{c}^{*}\) denotes the fixed point \(q^{*}\) at the critical point. It turns out that the numerator and the denominator of the RHS of Eq. (36) converge to finite values, and hence so does \(\alpha\) itself.
Next we study the behavior of \(\xi_{c}^{-1}\) around the critical point. To do this, we expand Eq. (29) with respect to an infinitesimal deviation \(\delta\sigma_{w}\) from the critical point \(\sigma_{w;c}\):
\[\begin{array}{rcl}e^{-\frac{1}{\xi_{c}(\sigma_{w;c}-\delta\sigma_{w})}}&=&( \sigma_{w;c}-\delta\sigma_{w})^{2}\int{\rm d}z\frac{1}{\sqrt{2\pi}}e^{-\frac{z ^{2}}{2}}h^{\prime 2}(\sqrt{q_{c}^{*}-\alpha\delta\sigma_{w}}z)\\ &\sim& 1-\left[\frac{2}{\sigma_{w;c}}+\frac{\alpha\sigma_{w;c}^{2}}{ \sqrt{q_{c}^{*}}}\int{\rm d}z\frac{z}{\sqrt{2\pi}}e^{-\frac{z^{2}}{2}}h^{ \prime}(\sqrt{q_{c}^{*}}z)h^{\prime\prime}(\sqrt{q_{c}^{*}}z)\right]\delta \sigma_{w}.\end{array} \tag{38}\]
The coefficient for \(\delta\sigma_{w}\) in the RHS remains finite for the given activation function (namely \(\tanh\)), and hence one can see that \(\xi_{c}^{-1}\) decreases to 0 as \(\sigma_{w}\uparrow\sigma_{w;c}\) in an asymptotically linear manner, in particular
\[\iota_{1}=\frac{2}{\sigma_{w;c}}+\frac{\alpha\sigma_{w;c}^{2}}{\sqrt{q_{c}^{ *}}}\int{\rm d}z\frac{z}{\sqrt{2\pi}}e^{-\frac{z^{2}}{2}}h^{\prime}(\sqrt{q_{c }^{*}}z)h^{\prime\prime}(\sqrt{q_{c}^{*}}z). \tag{39}\]
Similarly one finds
\[\begin{array}{rcl}e^{-\frac{1}{\xi_{c}(\sigma_{w;c}+\delta\sigma_{w})}}&=&(\sigma_{w;c}+\delta\sigma_{w})^{2}\int{\rm d}z_{2}\int{\rm d}z_{1}\frac{1}{\sqrt{(2\pi)^{2}}}e^{-\frac{z_{1}^{2}+z_{2}^{2}}{2}}h^{\prime}(u_{1}^{*}+\delta u_{1}^{*})h^{\prime}(u_{2}^{*}+\delta u_{2}^{*})\\ &\sim& 1-\left[\zeta\cdot\sigma_{w;c}^{2}q_{c}^{*}\int{\rm d}z\frac{1}{\sqrt{2\pi}}e^{-\frac{z^{2}}{2}}h^{\prime\prime 2}(\sqrt{q_{c}^{*}}z)-\iota_{1}\right]\delta\sigma_{w}\end{array} \tag{40}\]
in the chaotic phase (assuming Eq. (30) holds), which indicates
\[\iota_{2}=\zeta\cdot\sigma_{w;c}^{2}q_{c}^{*}\int{\rm d}z\frac{1}{\sqrt{2\pi }}e^{-\frac{z^{2}}{2}}h^{\prime\prime 2}(\sqrt{q_{c}^{*}}z)-\iota_{1}. \tag{41}\]
Note that the contribution of order \(\delta\sigma_{w}^{\frac{1}{2}}\) vanishes because
\[\int{\rm d}z\frac{z}{\sqrt{2\pi}}e^{-\frac{z^{2}}{2}}=0. \tag{42}\]
Thus it is confirmed that \(\nu_{\parallel{\rm FC}}=1\).
To see the behavior of \(c^{*}\) as a function of \(\sigma_{w}\) in the chaotic phase, we expand the so-called \(\mathcal{C}\)-map (which can be obtained by setting \(q_{1}^{(l)}=q_{2}^{(l)}=q^{*}\) in Eq. (25))
\[c^{(l+1)}=\frac{1}{q^{*}}\left[\sigma_{w}^{2}\int{\rm d}z_{1}\int{\rm d}z_{2} \frac{1}{\sqrt{(2\pi)^{2}}}e^{-\frac{z^{2}_{1}+z_{2}^{2}}{2}}h(u_{1}^{(l)})h(u_ {2}^{(l)})+\sigma_{b}^{2}\right] \tag{43}\]
slightly above the critical point (that is, \(\sigma_{w}=\sigma_{w;c}+\delta\sigma_{w}\)) around the trivial fixed point \(c^{(l)}=1\)
\[c^{(l+1)}-c^{(l)}=\left(\left.\frac{{\rm d}c^{(l+1)}}{{\rm d}c^{(l)}}\right|_{c ^{(l)}=1}-1\right)(c^{(l)}-1)+\frac{1}{2}\left.\frac{{\rm d}^{2}c^{(l+1)}}{{ \rm d}c^{(l)2}}\right|_{c^{(l)}=1}(c^{(l)}-1)^{2}+\cdots, \tag{44}\]
because the straightforward expansion of the \(\mathcal{C}\)-map (43) around \(\sigma_{w;c}\), as was done in the derivation of \(\alpha\) (see Eq. (34)), yields a trivial identity (\(0=0\)). Notice that one can inductively see
\[\frac{\mathrm{d}^{n}c^{(l+1)}}{\mathrm{d}c^{(l)n}}\bigg{|}_{c^{(l)}=1}=\sigma_{w }^{2}q^{*n-1}\int\mathrm{d}z\frac{1}{\sqrt{2\pi}}e^{-\frac{z^{2}}{2}}\left( \frac{\mathrm{d}^{n}h}{\mathrm{d}z^{n}}(\sqrt{q^{*}}z)\right)^{2}, \tag{45}\]
which implies these derivatives are positive and finite at any order. Particularly in the vicinity of the critical point, we have
\[\frac{\mathrm{d}c^{(l+1)}}{\mathrm{d}c^{(l)}}\bigg{|}_{c^{(l)}=1}-1=\iota_{1} \delta\sigma_{w}+o(\delta\sigma_{w}). \tag{46}\]
By taking the first two terms of the expansion (44) into account, one can see that the leading contribution for the nontrivial fixed point of the \(\mathcal{C}\)-map (43) is of order \(\delta\sigma_{w}\) (and hence \(\beta_{\mathrm{FC}}=1\)), in particular
\[\zeta=\frac{2\iota_{1}}{\sigma_{w;c}^{2}q_{c}^{*}\int\mathrm{d}z\frac{1}{\sqrt{2\pi}}e^{-\frac{z^{2}}{2}}h^{\prime\prime 2}(\sqrt{q_{c}^{*}}z)}. \tag{47}\]
Remarkably, we find that the two coefficients \(\iota_{1},\iota_{2}\) characterizing the power-law divergence of the correlation depth \(\xi_{c}\) are identical with each other:
\[\iota_{1}=\iota_{2}(=:\iota). \tag{48}\]
To sum up, the order-to-chaos transition in untrained infinitely-wide FC with \(\tanh\) activation can be characterized by the two critical exponents \(\beta_{\mathrm{FC}}=1,\nu_{\parallel\mathrm{FC}}=1\) and three nonuniversal parameters \(\sigma_{w;c},\iota,\zeta\). At this point, it is worthwhile to note that the parameters \(\iota,\zeta\) are directly related to the parameters \(\gamma(\tau),\kappa\) in the phenomenological description (9) in the main text; the order parameter \(\rho^{(l)}\) as a function of depth \(l\) can be described (to a reasonably good approximation, at least) by a solution of
\[\frac{\mathrm{d}\rho}{\mathrm{d}l}=\iota\cdot(\sigma_{w}-\sigma_{w;c})\rho- \frac{\iota}{\zeta}\rho^{2}, \tag{49}\]
provided that the network is close enough to the critical point and that \(l\) is sufficiently large.
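As a quick numerical check of this phenomenological picture, the sketch below integrates Eq. (49) with a unit depth step; the parameter values are hypothetical stand-ins, not fitted values. The order parameter grows from a small seed and saturates at the fixed point \(\zeta(\sigma_{w}-\sigma_{w;c})\), as the logistic form of the flow predicts.

```python
iota, zeta, d_sw = 1.0, 0.5, 0.05       # hypothetical parameter values
rho, depth = 1e-3, 2000                 # small seed, number of layers
for _ in range(depth):
    rho += iota * d_sw * rho - (iota / zeta) * rho**2   # Eq. (49), dl = 1
print(rho, zeta * d_sw)                 # both ~ 0.025: fixed point reached
```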
|
2301.11604 | A critical look at deep neural network for dynamic system modeling | Neural network models become increasingly popular as dynamic modeling tools
in the control community. They have many appealing features including nonlinear
structures, being able to approximate any functions. While most researchers
hold optimistic attitudes towards such models, this paper questions the
capability of (deep) neural networks for the modeling of dynamic systems using
input-output data. For the identification of linear time-invariant (LTI)
dynamic systems, two representative neural network models, Long Short-Term
Memory (LSTM) and Cascade Forward Neural Network (CFNN) are compared to the
standard Prediction Error Method (PEM) of system identification. In the
comparison, four essential aspects of system identification are considered,
then several possible defects and neglected issues of neural network based
modeling are pointed out. Detailed simulation studies are performed to verify
these defects: for the LTI system, both LSTM and CFNN fail to deliver
consistent models even in noise-free cases; and they give worse results than
PEM in noisy cases. | Jinming Zhou, Yucai Zhu | 2023-01-27T09:03:05Z | http://arxiv.org/abs/2301.11604v2 | # A critical look at deep neural network for dynamic system modeling
###### Abstract
Neural network models become increasingly popular as dynamic modeling tools in the control community. They have many appealing features including nonlinear structures, being able to approximate any functions. While most researchers hold optimistic attitudes towards such models, this paper questions the capability of (deep) neural networks for the modeling of dynamic systems using input-output data. For the identification of linear time-invariant (LTI) dynamic systems, two representative neural network models, Long Short-Term Memory (LSTM) and Cascade Forward Neural Network (CFNN) are compared to the standard Prediction Error Method (PEM) of system identification. In the comparison, four essential aspects of system identification are considered, then several possible defects and neglected issues of neural network based modeling are pointed out. Detailed simulation studies are performed to verify these defects: for the LTI system, both LSTM and CFNN fail to deliver consistent models even in noise-free cases; and they give worse results than PEM in noisy cases.
keywords: Deep neural network, Model structure, Error criteria, Consistency, Model validation, System identification.
## 1 Introduction
Process modeling is fundamental to most process applications, from control and optimization to fault diagnosis and soft sensing. An accurate model that well reflects the process behavior is essential for the success of all these applications. Neural network and deep learning models have become increasingly popular in the control community, inspired by their tremendous success in, e.g., Computer Vision (CV) and Natural Language Processing (NLP) [1; 2]. Unlike system identification theory, which starts from the well-established theory for linear systems [3; 4], neural networks have inherently nonlinear structures. Moreover, they are so-called universal approximators capable of approximating any function to any degree of accuracy [5; 6]. Numerous papers have been published on the development of neural network based methods for various applications in systems and control [7; 8; 9].
The input and output spaces of a neural network are very general: the input could be signals, images, etc., while the output could be numerical values or classifications. Among the many possibilities, this paper only focuses on the case where the neural network is used to learn the dynamic relations between process inputs and outputs. Such a model can be used, e.g., as the internal model in model predictive control (MPC), for residual generation in fault detection and isolation (FDI), or as a soft sensor.
According to some state-of-the-art review papers [7; 8; 9; 10; 11], three representative network structures are the Feedforward Neural Network (FNN), the Recurrent Neural Network (RNN), and the Convolutional Neural Network (CNN). FNN is a classical structure and its use in the process industry dates back 20 years [12; 13]. It still receives great research interest nowadays in nonlinear modeling and MPC applications [10; 14; 15; 16]. For dynamic modeling, lagged input and output signals must be fed into an FNN to manually create dynamics [14], while an RNN naturally has a dynamic structure which can be written in a nonlinear state-space form [1]; thus it is very popular in process control research where dynamic modeling is desirable. As reported [1], most successful applications of RNN use the LSTM structure, which successfully handles the gradient vanishing/exploding problem of the original RNN. LSTM has already been extensively used in FDI [17], MPC [15] and soft sensors [18]. The AutoEncoder (AE) is another popular RNN structure whose applications are mainly in FDI and soft sensors. CNN is most well-known for its success in computer vision; it can also be used in local dynamic modeling and frequency domain modeling [18].
Great efforts have been put into research on neural networks and deep learning methods with the hope that they can promote developments in the process control industry. However, unlike in the fields of CV and NLP, very few successful applications based on neural networks are reported in process industries. What can the user really benefit from neural network based modeling, compared to linear system identification and simple nonlinear models? Most papers investigating neural networks tend to compare only the performance between several network structures and ignore this important question. Only recently have such comparison studies been carried out by several researchers. In [19], based on a 660MW boiler dataset, the model quality of LSTM is compared to those of simple (linear) statistical models and the results show that LSTM gives the worst performance. Based on the Silverbox dataset, in [14], LSTM gives worse results than FNN; in [20], the authors find that some network structures can act as noise amplifiers that deteriorate the model quality. The two papers reveal a common
phenomenon: a complex model structure mathematically capable of learning arbitrary systems well can fail in practice, even if the real system is not complex at all.
This paper further investigates the deficiencies of neural network based modeling. Besides LSTM, which has been criticized in [19; 14], it will be shown that the CFNN suggested in [14; 10] also has problems that may hinder its use in modeling dynamic systems. Instead of considering modeling of nonlinear systems as in [14; 10], this paper returns to the most fundamental problem: modeling of LTI systems. If a method cannot model such a system well, it certainly cannot handle more complex nonlinear systems. It will be shown that, although a CFNN contains both a linear part and a nonlinear part, it fails to consistently identify a noise-free LTI system. Increasing the number of hidden units or hidden layers does not improve the situation. These findings will be discussed and explained through four essential aspects of system identification: model structure, error criterion, estimation properties and model validation. Simulation studies will also be presented to support these claims.
The rest of the paper unfolds as follows. Section 2 gives background knowledge about linear system identification, LSTM and CFNN; Section 3 discusses and compares the three models from a system identification perspective and points out potential problems for neural network based modeling; Section 4 contains detailed simulation studies of an LTI system and a Hammerstein system; Section 5 gives conclusions.
### Notations
\(q\) denotes the forward shift operator. \(\sigma(\cdot)\) denotes an activation function (vector) in neural networks, such as the sigmoid function or the Rectified Linear Unit (ReLU) function. Details about these functions can be found in [1]. \(*\) is the Hadamard (element-wise) product. \(\mathbb{N}^{+}\) denotes the set of positive natural numbers. \(\Phi_{z}\) denotes the power spectrum of a signal \(\{z(t)\}\). Var\([\cdot]\) is the mathematical variance operator. 'With probability \(\alpha\)' is abbreviated as w.p. \(\alpha\). The Euclidean norm and the Frobenius norm are denoted as \(\|\cdot\|\) and \(\|\cdot\|_{F}\).
## 2 Background
Consider a general Single-Input Single-Output (SISO) LTI system [3]:
\[\mathcal{S}:y(t)=\underbrace{G_{0}(q)u(t)}_{:=y_{0}(t)}+\underbrace{H_{0}(q)e_{0}(t)}_{:=v_{0}(t)}, \tag{1}\]
where \(u(t)\) and \(y(t)\) denote system input and output signals, \(v(t)\) denotes the disturbance. \(G_{0}(q)\) denotes the system transfer function,
\[G_{0}(q)=\frac{B_{0}(q)}{A_{0}(q)}=\frac{b_{0}^{0}+b_{0}^{1}q^{-1}+\cdots+b_{0}^{n_{b}}q^{-n_{b}}}{1+a_{0}^{1}q^{-1}+\cdots+a_{0}^{n_{a}}q^{-n_{a}}}. \tag{2}\]
which is assumed stable. For simplicity, assume \(n_{a}=n_{b}=n\). \(e_{0}(t)\) is a zero-mean white noise sequence with variance \(\lambda_{0}^{2}\). \(H_{0}(q)\) is assumed to be stable, inversely stable and monic.
### The prediction error method
In system identification, a parametric model set is used to describe the true system \(\mathcal{S}\):
\[\mathcal{M}(\theta):y(t)=G(q,\theta)u(t)+H(q,\theta)e(t). \tag{3}\]
In PEM, quadratic cost function of the one-step Prediction Error (PE) is minimized:
\[\hat{\theta}_{N}=\operatorname*{arg\,min}_{\theta}\sum_{t=1}^{N}\left(y(t)-\hat{y}_{\text{pem}}(t;\theta)\right)^{2} \tag{4a}\] \[\hat{y}_{\text{pem}}(t;\theta)=\left(1-H^{-1}(q;\theta)\right)y(t)+H^{-1}(q;\theta)G(q;\theta)u(t). \tag{4b}\]
Notice that subsequently when talking about error criterion for parameter estimation, all PE refers to one-step PE. If \(\mathcal{S}\in\mathcal{M}\), which means that the model structure \(\mathcal{M}\) is flexible enough, there exists \(\theta_{0}\) such that \(G(q,\theta_{0})=G_{0}(q)\) and \(H(q,\theta_{0})=H_{0}(q)\). Moreover, if \(\mathcal{M}(\theta)\) is globally identifiable at \(\theta_{0}\) and the input signal is persistently exciting, the PEM estimate is _consistent_:
\[\hat{\theta}_{N}\rightarrow\theta_{0},\text{ w.p. 1 as }N\rightarrow\infty. \tag{5}\]
A more precise description of the conditions required for this property can be found in Chapter 8 of [3]. The consistency property implies that the estimate will approach the parameter vector representing the true system when a large amount of data is available. If \(e_{0}(t)\) in (1) is Gaussian, the PEM estimate can further be proved to have _minimum variance_ [3; 4].
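To make the consistency statement concrete, the toy Python sketch below (our own illustration, not from [3; 4]) identifies a first-order ARX system by least squares, which coincides with PEM for the ARX structure; the printed estimates approach the true parameters as \(N\) grows.

```python
import numpy as np

rng = np.random.default_rng(0)
a1, b1 = 0.7, 0.5       # true system: y(t) = -a1 y(t-1) + b1 u(t-1) + e(t)
for N in (10**3, 10**5):
    u = rng.standard_normal(N)
    e = 0.1 * rng.standard_normal(N)
    y = np.zeros(N)
    for t in range(1, N):
        y[t] = -a1 * y[t - 1] + b1 * u[t - 1] + e[t]
    Phi = np.column_stack([-y[:-1], u[:-1]])     # regressors for t = 1..N-1
    theta = np.linalg.lstsq(Phi, y[1:], rcond=None)[0]
    print(N, theta)                              # approaches [0.7, 0.5]
```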
### Long short-term memory network
Below we show the mathematical formulation of a single-layer LSTM network. The core of LSTM consists of four gates and two states controlling the transfer of the information flow. The four gates are defined in the form
\[*(t)=\sigma\left(W_{*u}u(t)+W_{*h}h(t-1)+b_{*}\right) \tag{6}\]
where \(*\) can be \(i\), \(f\), \(g\) and \(o\), corresponding to input, forget, cell and output gates respectively. Suppose that the weighting matrices and bias vectors above are of compact dimensions. Based on these gates, the cell state \(c(t)\) and hidden state \(h(t)\) are updated according to
\[c(t)=f(t)*c(t-1)+i(t)*g(t) \tag{7a}\] \[h(t)=o(t)*\tanh\left(c(t)\right). \tag{7b}\]
Finally, the output of LSTM is
\[\hat{y}_{\text{lstm}}(t)=W_{yh}h(t)+b_{y}. \tag{8}\]
In the above formulations, (6-7) constitute an LSTM layer, while (8) is often referred to as a linear fully connected layer. Introduce also a vector \(\rho\) that contains all the parameters, i.e., the \(W\) matrices and \(b\) vectors in (6-8), which can be optimized according to
\[\hat{\rho}_{N}=\operatorname*{arg\,min}_{\rho}\sum_{t=1}^{N}\left(y(t)-\hat{y} _{\text{lstm}}(t;\rho)\right)^{2}. \tag{9}\]
Notice that \(\hat{y}_{\text{lstm}}(t)\) is calculated according to (6-8). To obtain a deep network structure, one can simply connect several LSTM layers in series.
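For concreteness, the following NumPy sketch implements one step of the single-layer LSTM (6-8) for a scalar input. The weights are random placeholders and, following common practice (the generic \(\sigma(\cdot)\) in (6) is not specialized in the text), the cell gate \(g\) uses \(\tanh\) while \(i\), \(f\), \(o\) use the logistic sigmoid.

```python
import numpy as np

rng = np.random.default_rng(0)
nh = 4                                  # number of hidden units
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
Wu = {k: 0.3 * rng.standard_normal(nh) for k in "ifgo"}       # W_{*u}
Wh = {k: 0.3 * rng.standard_normal((nh, nh)) for k in "ifgo"} # W_{*h}
b = {k: np.zeros(nh) for k in "ifgo"}                         # b_{*}
Wy, by = 0.3 * rng.standard_normal(nh), 0.0                   # output layer

def lstm_step(u, h, c):
    pre = {k: Wu[k] * u + Wh[k] @ h + b[k] for k in "ifgo"}   # Eq. (6)
    i, f, o = sigmoid(pre["i"]), sigmoid(pre["f"]), sigmoid(pre["o"])
    g = np.tanh(pre["g"])               # cell gate (tanh by convention)
    c = f * c + i * g                   # Eq. (7a)
    h = o * np.tanh(c)                  # Eq. (7b)
    return Wy @ h + by, h, c            # Eq. (8)

h, c = np.zeros(nh), np.zeros(nh)
for u in (1.0, 1.0, 0.0):               # a short scalar input sequence
    yhat, h, c = lstm_step(u, h, c)
print(yhat)
```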
### Cascade forward neural network
CFNN here refers to a specific FNN suggested in [14] for nonlinear identification. It is characterized by that outputs of all previous units are used as inputs in the next layer. It is already available in Matlab's System Identification Toolbox, as an idnlarx object. As has been mentioned in Section 1, lagged input and output signals should be used to introduce dynamics. The input of CFNN is the regressor
\[\varphi^{\top}(t)=\left[y(t-1),\cdots,y(t-n),u(t),\cdots,u(t-n)\right]. \tag{10}\]
The output of a single layer CFNN is
\[\hat{y}_{\text{cfnn}}(t)=\underbrace{W_{yu}\varphi(t)+b_{y}}_{\text{Linear}}+\underbrace{W_{yh}\sigma\left(W_{hu}\varphi(t)+b_{h}\right)}_{\text{Nonlinear}}. \tag{11}\]
Collect all parameters of CFNN into a vector \(\vartheta\), it can be optimized according to
\[\hat{\vartheta}_{N}=\operatorname*{arg\,min}_{\vartheta}\sum_{t=1}^{N}\left(y (t)-\hat{y}_{\text{cfnn}}(t;\vartheta)\right)^{2}. \tag{12}\]
As shown in (11), CFNN can be divided into linear and nonlinear parts where the nonlinear part is due to the hidden layer. Similar to LSTM, one can use multiple hidden layers to achieve a deep structure.
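The following NumPy sketch makes the structure of (10-11) explicit for a single hidden layer with ReLU activation; all weights are random placeholders and the chosen order is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n, nh = 2, 4                            # model order and hidden units
Whu = 0.3 * rng.standard_normal((nh, 2 * n + 1))
bh = np.zeros(nh)
Wyh = 0.3 * rng.standard_normal(nh)
Wyu = 0.3 * rng.standard_normal(2 * n + 1)
by = 0.0

def cfnn_predict(y_past, u_window):
    """phi(t) = [y(t-1..t-n), u(t..t-n)] as in Eq. (10)."""
    phi = np.concatenate([y_past, u_window])
    linear = Wyu @ phi + by                                  # linear part
    nonlinear = Wyh @ np.maximum(Whu @ phi + bh, 0.0)        # ReLU hidden layer
    return linear + nonlinear                                # Eq. (11)

print(cfnn_predict(np.array([0.1, -0.2]), np.array([1.0, 0.5, 0.0])))
```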
## 3 Neural network based dynamic modeling from an identification perspective
Shown in Fig. 1 is a typical identification procedure [3; 4; 21] covering the essential steps and key points of a general black-box dynamic modeling. This section discusses the three models introduced in Section 2 and then points out problems that are often neglected. The experiment design and parameter optimization problems will not be discussed; only the steps marked with black dotted lines in Fig. 1 are considered.
### Model structure
Among the three methods listed in Section 2, the structure of PEM is exactly the same as that of the true system \(\mathcal{S}\). By introducing a new state vector \(x^{\top}(t)=[c^{\top}(t),h^{\top}(t)]\), the LSTM model (6-8) can be summarized in a NonLinear State-Space (NLSS) form [1]:
\[x(t)=\mathcal{F}\left(x(t-1),u(t);\rho\right) \tag{13a}\] \[\hat{y}_{\text{lstm}}(t;\rho)=\mathcal{H}\left(x(t);\rho\right). \tag{13b}\]
Notice that it is assumed that no time delay exists in input and output in (13). According to [14; 10], CFNN belongs to the Nonlinear ARX (NARX) model:
\[\hat{y}_{\text{cfnn}}(t;\vartheta)=\mathcal{G}\left(\varphi(t);\vartheta \right). \tag{14}\]
In (13-14) \(\mathcal{F}\), \(\mathcal{H}\), \(\mathcal{G}\) are user-defined (nonlinear or linear) functions.
As has been mentioned in Section 2, the CFNN model contains a linear part. When \(W_{yh}\), \(W_{hu}\), \(b_{y}\) and \(b_{h}\) all vanish and if
\[W_{yu}=[-a_{0}^{1},\cdots,-a_{0}^{n},b_{0}^{0},\cdots,b_{0}^{n}], \tag{15}\]
CFNN will present exactly the same behavior as \(G_{0}(q)\). That is to say, CFNN covers the LTI model structure as a special case and is theoretically capable of modeling such a system, which resembles the Volterra model [22], block-oriented models [23], linear parameter-varying models [24], etc. In contrast, it is non-trivial how an LSTM network can be connected or reduced to a common LTI model. In [19], after a careful design of the hyperparameters, LSTM still gives the worst performance among the compared statistical models. The authors attribute this phenomenon to the effects of the gate functions in (6); disabling some of these gates can improve the performance of LSTM. Simulation results in [14] are similar.
The above discussions and the results in [14; 19] reveal that, for process dynamic modeling, models that cannot be interpreted from process dynamics are undesirable even if they have complex structures with strong approximation abilities.
### Error criteria
Error criteria are the basics for optimization of model parameters, they are application-oriented and are closely related to the properties of the estimated model. Three commonly used error criteria of the linear model set (\(\mathcal{M}\)) are, Prediction Error (PE), Output Error (OE, also called simulation error), Equation Error (EE):
\[\varepsilon_{\text{pe}}(t;\theta) =H^{-1}(q;\theta)\left(y(t)-G(q;\theta)u(t)\right) \tag{16a}\] \[\varepsilon_{\text{oe}}(t;\theta) =y(t)-G(q;\theta)u(t)\] (16b) \[\varepsilon_{\text{ee}}(t;\theta) =A(q;\theta)y(t)-B(q;\theta)u(t) \tag{16c}\]
where \(A(q)\) and \(B(q)\) are defined similarly to \(A_{0}(q)\) and \(B_{0}(q)\) in (2). Notice that EE equals the PE of an ARX model [3], hence only PE and OE will be considered. The difference between the predicted output (4b) and the simulated output \(G(q;\theta)u(t)\) is that the predicted output uses measured output signals while the simulated
Figure 1: Typical identification procedure.
output only uses input signals. Bearing this point in mind, the extensions in LSTM and CFNN are straightforward: because no measured outputs are used in \(\hat{y}_{\text{lstm}}\), the LSTM criterion (9) is an output error criterion; the CFNN criterion (12) is a prediction error (equation error) criterion.
While the PEM model set \(\mathcal{M}\) has a noise model that can be parameterized independently of the process model (e.g., the Box-Jenkins (BJ) model [3]), LSTM and CFNN are not equipped with such an ability to handle disturbances. In [20], the authors argue that the noise effects can even be amplified in the CFNN criterion (12) and suggest the use of an output error criterion. However, how to optimize a CFNN model under an output error criterion is an unsolved problem.
**Remark 1**: _Sometimes the input of LSTM is chosen to be a regressor containing measured outputs like \(\varphi(t)\) in (10), which is called teacher forcing in Chapter 10.2.1 in [1]. In this case, the LSTM criterion (9) becomes a prediction error criterion._
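The distinction between the two criteria is easy to state in code. For an ARX model, the one-step predictor feeds measured outputs back into the regressor, whereas the simulator propagates its own past outputs; the toy sketch below (ours, not from the paper) computes both residuals for a first-order example.

```python
import numpy as np

def one_step_predict(theta, u, y):
    """ARX one-step predictor: uses the measured output y(t-1)."""
    a1, b1 = theta
    yp = np.zeros_like(y)
    yp[1:] = -a1 * y[:-1] + b1 * u[:-1]
    return yp

def simulate(theta, u):
    """Simulated output: propagates its own past outputs, uses only u."""
    a1, b1 = theta
    ys = np.zeros(len(u))
    for t in range(1, len(u)):
        ys[t] = -a1 * ys[t - 1] + b1 * u[t - 1]
    return ys

rng = np.random.default_rng(0)
theta, u = (0.7, 0.5), rng.standard_normal(200)
y = simulate(theta, u) + 0.1 * rng.standard_normal(200)   # toy noisy data
pe = y - one_step_predict(theta, u, y)  # prediction (equation) error
oe = y - simulate(theta, u)             # output (simulation) error
print(np.var(pe), np.var(oe))
```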
### Estimation properties
Analyzing the (statistical) properties of the estimated model is of particular interest and of vital importance in all modeling techniques. The total model error can be divided into structure error caused by deficiencies in the model structure and random model error caused by stochastic disturbances. For linear identification, one often investigates the consistencies and variances of parameters, step responses or frequency responses. The two points concerning neural networks will be respectively discussed below.
#### Consistency
For nonlinear model structures, it makes less sense to consider consistency in the parameter space. Instead, the _consistency in step response_ can be defined analogously to (5):
**Definition 1**: _Denote \(\{\mathcal{U}(t)\}\) as a step signal of amplitude \(A_{\mathcal{U}}\). Then for some general system \(\mathcal{S}\) and an estimated model \(\hat{\mathcal{M}}_{N}\) under structure \(\mathcal{M}\) and with \(N\) data samples, as well as their outputs subject to \(\{\mathcal{U}(t)\}\): \(\{\mathcal{Y}_{0}(t)\}\) and \(\{\hat{\mathcal{Y}}_{N}(t)\}\), if_
\[\hat{\mathcal{Y}}_{N}(t)\rightarrow\mathcal{Y}_{0}(t),\text{ w.p. 1 as }N\rightarrow\infty,\ \forall t\in\mathbb{N}^{+},A_{\mathcal{U}}\neq 0, \tag{17}\]
_the estimate of \(\mathcal{M}\) is consistent in step response to \(\mathcal{S}\)._
In this regard, the estimated model \(\hat{\mathcal{M}}_{N}\) can accurately describe the input-output relations of the true system \(\mathcal{S}\) if sufficiently many data samples are collected. For an LTI system, PEM can deliver consistent estimates of parameters, step response and frequency response (under some conditions, see Section 2.1), and for the step response one only needs to consider the case \(A_{\mathcal{U}}=1\).
It is necessary to give some discussion of the differences between the above consistency concept and the well-known _universal approximation theorem_ [1; 5; 6]. In Chapter 6.4.1 of [1], this theorem is summarized as:
_An FNN with a linear output layer and at least one hidden layer with a commonly used activation function (sigmoid, ReLU, etc.) can approximate any Borel measurable function from one finite-dimensional space to another with any desired amount of error._
The theorem states that even a single-layer FNN (which is nearly the simplest network) is theoretically capable of approximating any function arbitrarily well, which seems to imply that the neural network is a powerful modeling tool. But note that this is only an existence theorem that gives neither a guarantee of consistent estimates, nor a guideline for how to obtain a consistent estimate using training data. This theorem may have made the control community overly optimistic about the modeling capability of neural networks for dynamic systems.
#### Variance
Consider first the asymptotic variance expression of the frequency response for a linear system [25]:
\[\text{Var}\left[\hat{G}_{N}^{n}(\text{e}^{\text{i}\omega})\right]\approx\frac{n}{N}\frac{\Phi_{v}(\omega)\lambda_{0}}{\Phi_{u}(\omega)\lambda_{0}-|\Phi_{ue_{0}}(\omega)|^{2}} \tag{18}\]
where \(\hat{G}_{N}^{n}(\text{e}^{\text{i}\omega})\) denotes the frequency response function of an \(n\)th-order estimate based on \(N\) data samples. The expression holds exactly for the ARX model when \(N\rightarrow\infty\), \(n\rightarrow\infty\). It reveals that the model variance is proportional to the number of parameters and inversely proportional to the sample size. Although there is no such conclusion available for nonlinear systems and neural network models, it is generally acknowledged that complex models containing many parameters are more vulnerable to stochastic disturbances.
Consider modeling an \(n\)th-order SISO LTI system like \(\mathcal{S}\), suppose that the correct order is used for PEM (a BJ model is used) and CFNN, all biases in LSTM and CFNN are set to zero, and use \(n_{h}\) hidden units for both LSTM and CFNN. Then the parameter numbers of the three models are
\[\text{BJ: }n_{\theta}=4n \tag{19a}\] \[\text{LSTM: }n_{\rho}=4\left(n_{h}^{2}+n_{h}\right)+n_{h} \tag{19b}\] \[\text{CFNN: }n_{\vartheta}=n_{h}\left(2n+1\right)+2n \tag{19c}\]
Notice that only single-layer LSTM and CFNN with fully connected output layers are considered.
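The counts in (19) are easy to tabulate; the small helper below simply reproduces them for arbitrary \(n\) and \(n_{h}\).

```python
def n_params(n, nh):
    """Parameter counts of Eq. (19) for system order n and nh hidden units."""
    return {"BJ": 4 * n,
            "LSTM": 4 * (nh**2 + nh) + nh,
            "CFNN": nh * (2 * n + 1) + 2 * n}

print(n_params(n=2, nh=10))             # {'BJ': 8, 'LSTM': 450, 'CFNN': 54}
```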
Fig. 2 shows the parameter numbers of different models with varying system orders and hidden unit settings. Among three model structures, LSTM has the most parameters and is 1-4 orders of magnitude higher than BJ model; CFNN ranges in the
Figure 2: Parameter numbers of different models under different system orders.
middle. For dynamic process modeling, the training data must be informative enough to contain the main process behavior, which can be guaranteed by adding persistently exciting test signals to the plant [21]. This procedure certainly introduces some slight disturbance to normal operations and is the main cost of black-box process modeling. Heuristically, according to (18), to achieve the same level of variance, LSTM requires 1-4 orders of magnitude more data than the BJ model, which means a huge increase in modeling cost. This gives another explanation for the poor model quality in [14; 19], apart from the model structure issue.
### Model validation
Model validation is the final step before a model can be put into use. It should be application-oriented. In MPC, the model is used to give multi-step predictions. If the step size of a multi-step prediction tends to infinity, it becomes the simulated output; see Chapter 3.2 of [3] for details. Recently, it has been proved that the output error serves as a useful tool for FDI as well [26]. Hence it is important to check the simulation error because it reflects the real gap between the model and the plant. However, in many recently published papers, the model quality is only validated through (one-step) prediction error. In [27], the author illustrated that two models with close prediction errors can have huge differences in simulation errors.
Typically, a neural network model with a complex structure can easily deliver a loss function of very small value. When the loss corresponds to prediction error and the model gives accurate one-step predictions, one can still tell nothing about the quality of the model from input to output. If possible, check the simulation error; or at least test the multi-step prediction error for validation.
### Summary
LSTM has a very different structure from an LTI system, hence it may have difficulty modeling such a system. In contrast, CFNN has a structure which can be interpreted as a linear model extension. However, as discussed above, there is no guarantee of its model consistency and it may have high variance. Further, a small prediction error delivered by CFNN may not imply a good input-output model.
## 4 Simulation study
While LSTM has been comprehensively tested in [14; 19], this section studies CFNN on a linear ARX system and a Hammerstein system. Step responses under different input amplitudes will be calculated and normalized according to the input amplitudes for comparison. Mean values are removed from the training data, so the bias vectors in CFNN are disabled.
All simulations in this section are performed using Matlab and the System Identification Toolbox. Specifically, nlarx is used to estimate CFNN; bj is used to estimate a BJ model; nlhw is used to estimate a Hammerstein system. For the estimates obtained in the System Identification Toolbox, sim and predict are used to calculate model simulation and \(k\)-step prediction.
### Linear system
Consider the following ARX system:
\[\begin{split} y(t)=\frac{B_{0}(q)}{A_{0}(q)}u(t)+\alpha\frac{1}{ A_{0}(q)}e_{0}(t)\\ B_{0}(q)=0.0115+0.00639q^{-1}\\ A_{0}(q)=1-1.963q^{-1}+0.965q^{-2}\end{split} \tag{20}\]
where \(\alpha\) is used to control the Noise-to-Signal Ratio (NSR), defined as \(\mathrm{Var}\left[v_{0}(t)\right]/\mathrm{Var}\left[y_{0}(t)\right]\); see (1) for the definitions of \(y_{0}(t)\) and \(v_{0}(t)\).
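As an illustration of how \(\alpha\) can be calibrated to a target NSR, the sketch below simulates the noise-free part \(y_{0}(t)\) and the unit-variance disturbance shape \((1/A_{0}(q))e_{0}(t)\) of (20) separately and then scales the latter; this is our own reconstruction in Python, not the authors' Matlab script.

```python
import numpy as np

A = (1.0, -1.963, 0.965)                # A_0(q) coefficients of (20)
B = (0.0115, 0.00639)                   # B_0(q) coefficients of (20)

def filt(u, e, alpha):
    """A_0(q) y(t) = B_0(q) u(t) + alpha e(t), zero initial conditions."""
    y = np.zeros(len(u))
    for t in range(2, len(u)):
        y[t] = (-A[1] * y[t - 1] - A[2] * y[t - 2]
                + B[0] * u[t] + B[1] * u[t - 1] + alpha * e[t])
    return y

rng = np.random.default_rng(0)
u, e = rng.standard_normal(50000), rng.standard_normal(50000)
y0 = filt(u, np.zeros_like(e), 0.0)     # noise-free response y_0(t)
v0 = filt(np.zeros_like(u), e, 1.0)     # disturbance shape (1/A_0) e_0(t)
alpha = np.sqrt(0.05 * np.var(y0) / np.var(v0))   # alpha for NSR = 5%
y = y0 + alpha * v0                     # noisy output
```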
#### 4.1.1 Noise-free system
The system without noise is considered first. The training data are generated with a Generalized Binary Noise (GBN) [28] input. Its average switching time is 20 samples and its amplitude is 1. The sample size is \(N=50000\); the result below does not change if \(N\) is further increased. Subsequently, a GBN signal with amplitude \(A\) and average switching time \(T\) will be abbreviated as \(A*\mathrm{GBN}(T)\) for simplicity. The inputs in the validation data are of 5 different types: \(0.2*\mathrm{GBN}(20)\), \(1*\mathrm{GBN}(10)\), \(1*\mathrm{GBN}(20)\), \(1*\mathrm{GBN}(40)\), \(5*\mathrm{GBN}(20)\). Each input lasts for 1000 samples. Among these inputs, the first and the fifth have different amplitudes compared to the one in the training data but the same spectral distribution; the second and the fourth have different spectral distributions but the same amplitude; the third is entirely the same. A CFNN with one layer and 4 hidden units is trained. All activation functions are ReLUs, as suggested in [10; 14]. The model validation based on 1-step prediction error and output error are shown in
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Residual & 0.2*GBN(20) & 1*GBN(10) & 1*GBN(20) & 1*GBN(40) & 5*GBN(20) \\ \hline
1-step PE & 9.99E-07 & 8.38E-08 & 2.80E-08 & 1.61E-08 & 1.14E-09 \\
20-step OE & 1.81E-02 & 1.12E-03 & 1.41E-04 & 3.05E-04 & 2.17E-04 \\ Simulation & 5.73E+00 & 1.20E-02 & 3.56E-03 & 3.56E-03 & 5.12E-03 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Noise-free LTI system: REs(%) of residuals in different data sections.
Figure 3: Model validation based on one-step prediction and simulation.
to the true one are recorded in the second row of Table 2. The FIT between two sequences \(Z_{0}\) and \(\hat{Z}\) is
\[\text{FIT}=1-\frac{\left\|Z_{0}-\hat{Z}\right\|}{\left\|Z_{0}-\text{mean}(Z_{0})\right\|}. \tag{21}\]
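In code, the FIT measure (expressed in percent, as reported in the tables below) is a direct transcription of (21):

```python
import numpy as np

def fit_percent(z0, zhat):
    """FIT of Eq. (21) in percent, with z0 the reference sequence."""
    z0, zhat = np.asarray(z0, float), np.asarray(zhat, float)
    return 100.0 * (1.0 - np.linalg.norm(z0 - zhat)
                    / np.linalg.norm(z0 - z0.mean()))
```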
The best result is achieved when the input amplitude equals 1. This is because GBN is a binary signal that only has two values. For amplitudes different from the one in the training data, the step responses of CFNN deviate from the true one.
Three other types of inputs are also tested: 2*GBN(20), a zero-mean Gaussian white input with variance \(0.33^{2}\), and a uniform input that ranges in \((-1,1)\). The results are shown in Fig. 4(c-d) and Table 2. It is interesting to note that for 2*GBN(20) the best FIT moves to amplitude 2; for the Gaussian input the best FIT occurs for the smallest amplitude 0.2 because the Gaussian distribution has a bell-shaped curve; for the uniform input the gaps between the best and the others are insignificant because the density of the uniform distribution is flat in its range.
For each case, the MSE on the training data has been optimized to a very small value (\(<10^{-6}\)) and the obtained models differ, which implies that a CFNN giving a PE nearly equal to zero is non-unique. The final estimated CFNN is strongly dependent on the input characteristics, the initial conditions, etc. When encountering a 'never-met' (does not occur in the training data) or 'unfamiliar' (does not occur frequently in the training data) input, CFNN gives a wrong step response.
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline Input type & 0.2 & 0.5 & 1 & 2 & 5 \\ \hline 1*GBN(20) & 50.59 & 85.95 & 99.79 & 93.23 & 89.09 \\ 2*GBN(20) & 19.67 & 60.94 & 85.95 & 99.79 & 91.85 \\ Gaussian & 83.81 & 75.12 & 72.26 & 70.85 & 70.01 \\ Uniform & 80.31 & 94.96 & 87.66 & 83.87 & 81.60 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Noise-free LTI system: FITs(%) of different step responses. The numbers in the first row are input amplitudes. The red one denotes the best for each input.
Figure 4: Noise-free LTI system: step responses under different inputs.
Figure 5: Noisy LTI system: comparison of step responses of BJ and different CFNN structures when NSR=5%. For CFNN, the best step response among input amplitude \([0.2,0.5,1,2,5]\) in each run is plotted. The edge of the red region is the envelope of 100 runs.
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline \multirow{2}{*}{NSR (\%)} & \multirow{2}{*}{BJ} & \multicolumn{6}{c}{CFNN (with different input amplitudes)} \\ \cline{3-8} & & 0.2 & 0.5 & 1 & 2 & 5 & Best \\ \hline
0 & 100.00 & 63.43 & 89.35 & 99.23 & 94.42 & 91.22 & 99.26 \\
5 & 95.67 & 40.84 & 77.30 & 90.48 & 85.16 & 80.21 & 93.50 \\
10 & 93.64 & 39.54 & 73.85 & 88.67 & 83.48 & 78.21 & 92.50 \\
20 & 92.42 & 28.68 & 66.80 & 83.30 & 76.84 & 70.22 & 89.08 \\
40 & 86.61 & 27.35 & 62.78 & 76.50 & 68.80 & 61.88 & 84.62 \\ \hline \end{tabular}
\end{table}
Table 3: Noisy LTI system: comparison of BJ and CFNN in mean FITs (%) of step responses. CFNN has one layer with 4 hidden units.
Figure 3. The Relative Errors (REs) of one-step prediction, simulation and, additionally, 20-step prediction are shown in Table 1. The RE is calculated according to \(\text{Var}\left[\varepsilon(t)\right]/\text{Var}\left[y(t)\right]\), where \(\varepsilon\) is some residual and \(y\) the measured output.
#### 4.1.2 Noisy system
In this part, the performance of CFNN is compared to PEM on the noisy system. The true system has an ARX structure, for which parameter optimization has a closed-form solution. However, a BJ model, which can also give a consistent estimate, is used; in this case, both CFNN and BJ require numerical optimization. For each NSR setting, 100 Monte Carlo simulations are run. In each simulation, 10000 training data are generated with input 1*GBN(20), and the step response of CFNN is then calculated under different input amplitudes. Among these responses, the one that gives the best FIT is recorded.
A single-layer CFNN with 4 hidden units is first tested; the results are shown in Table 3. CFNN delivers worse results than BJ for all NSRs, even for the best ones chosen from the five candidates. Consistent with the discussion in Section 4.1.1, the mean FITs under input amplitude 1 give the best results. Additionally, Table 4 presents information about the mean estimated parameters of CFNN. Recall that \(W_{hu}\) and \(W_{yh}\) correspond to the nonlinear part and \(W_{yu}\) corresponds to the linear part. Concerning the linear part, \(W_{yu}(1:2)\), related to \(A_{0}(q)\), is consistent while \(W_{yu}(3:4)\), related to \(B_{0}(q)\), is not: \(\|W_{yu}(3:4)\|\) decreases as NSR increases. In the nonlinear part, \(\|W_{hu}\|_{F}\) and \(\|W_{yh}\|\) increase as NSR increases. This reveals that there is a competition between the weighting matrices of the nonlinear and linear parts. When the noise level increases, the nonlinear part gradually takes the advantage and makes the results deviate further from the true system. Notice that the consistency of the parameters related to \(A_{0}(q)\) does not hold generally. When other types of inputs are used, such as Gaussian or uniform, all parameters become inconsistent.
Different structure settings of CFNN are tested and the mean FITs are shown in Table 5. One can see that increasing the number of hidden units or layers does not improve the situation. In fact, the poorest result is obtained when three layers are used. The step responses of 100 Monte Carlo runs for three selected cases are plotted in Fig. 5. The more complex the structure of CFNN, the higher the variance; the best setting also has a higher variance than BJ.
#### 4.1.3 Summary and discussion
The simulation results give a surprising finding: as a universal approximator, CFNN cannot even give a consistent estimate for a simple LTI system that is contained in its model structure. Although CFNN reduces to a linear ARX model when all weighting matrices in its nonlinear part vanish, there is no guarantee that the estimated model is consistent. The 'flexible' structure of CFNN, enabled by the nonlinear part, i.e., the hidden layers, becomes a nuisance factor for the identification of an LTI system. For a nonlinear system that is more complex than an LTI system, such neural network based models can perform even worse.
### Hammerstein system
Consider the following Hammerstein system:
\[\begin{split}& y(t)=\frac{B_{0}(q)}{A_{0}(q)}f\left(u(t) \right)+\alpha\frac{1}{A_{0}(q)}e_{0}(t)\\ & f(u)=10u^{3}+3.5u^{2}+u.\end{split} \tag{22}\]
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline Model & -2.5 & -1.5 & -0.5 & 0.5 & 1.5 & 2.5 \\ \hline Polynomial Hammerstein & 94.61 & 94.41 & 52.81 & 77.83 & 96.12 & 95.55 \\ CFNN (\(n_{l}=3,n_{h}=3\)) & -1.13 & 88.79 & 27.42 & 45.03 & -3.52 & 14.50 \\ CFNN (\(n_{l}=4,n_{h}=4\)) & -0.39 & 87.01 & 27.13 & 56.05 & -3.54 & 32.76 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Noisy Hammerstein system: mean FITs(%) of step responses. The numbers in the first row are input amplitudes.
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline NSR (\%) & \(\|W_{hu}\|_{F}\) & \(\|W_{yh}\|\) & \(\|W_{yu}(3:4)\|\) & \(W_{yu}(1)\) & \(W_{yu}(2)\) & \(W_{yu}(3)\) & \(W_{yu}(4)\) \\ \hline
0 & 2.43 & 7.33E-05 & 9.48E-04 & 1.9630 & -0.9650 & 8.28E-04 & 4.62E-04 \\
5 & 2.45 & 3.60E-04 & 8.77E-04 & 1.9630 & -0.9650 & 7.62E-04 & 4.33E-04 \\
10 & 2.46 & 6.16E-04 & 8.02E-04 & 1.9627 & -0.9648 & 6.94E-04 & 4.04E-04 \\
20 & 2.49 & 8.92E-04 & 7.59E-04 & 1.9631 & -0.9650 & 6.56E-04 & 3.81E-04 \\
40 & 2.57 & 1.51E-03 & 7.32E-04 & 1.9627 & -0.9647 & 6.99E-04 & 2.20E-04 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Noisy LTI system: information about the mean estimated parameters of CFNN.
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline \(n_{l}\) & \(n_{h}\) & 0.2 & 0.5 & 1 & 2 & 5 & Best \\ \hline
1 & 1 & 41.80 & 76.04 & 90.26 & 85.40 & 80.68 & 93.81 \\
1 & 2 & 47.35 & 77.84 & 88.85 & 84.24 & 79.35 & 92.31 \\
1 & 4 & 40.84 & 77.30 & 90.48 & 85.16 & 80.21 & 93.50 \\
1 & 6 & 50.04 & 78.37 & 89.31 & 86.95 & 83.36 & 92.80 \\
1 & 10 & 40.84 & 70.97 & 81.03 & 78.90 & 75.68 & 83.47 \\
1 & 4 & 41.80 & 76.04 & 90.26 & 85.40 & 80.68 & 93.81 \\
2 & 4 & 8.31 & 21.12 & 81.98 & -3.09 & -36.67 & 82.24 \\
3 & 4 & -12.34 & -4.88 & 43.76 & -223.69 & -282.30 & 63.32 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Noisy LTI system: mean FITs of different CFNN structures. \(n_{l}\) denotes the number of layers, \(n_{h}\) denotes the number of hidden units in each layer. The red one denotes the best setting.
Figure 6: Noise-free Hammerstein system: comparison of step responses under different input amplitudes. CFNN with 3 layers and 3 hidden units in each layer is used.
The linear part of this system is chosen to be exactly the same as (20). 10000 training data are generated using a Generalized Multiple-level Noise (GMN) signal (see Chapter 9.1.2 of [21]) with average switching time 10s and amplitude ranging in \([-2,2]\). When using the idnlhw function, the orders of the system and the polynomial are set to their correct values. Slightly differently from the LTI case, in the first layer the activation functions are ReLUs while in the other layers they are hyperbolic tangent functions.
Fig. 6 shows the results of the noise-free case. The step responses delivered by the polynomial Hammerstein model coincide with those of the true system. CFNN only gives consistent results for amplitudes -1.5 and 1.5; for other amplitudes, the results are very poor. The noisy system with 1% NSR is also tested, as shown in Table 6. The polynomial Hammerstein model has better FITs than CFNN for all cases tested. The variances of the step responses are similar to Fig. 5, in which CFNN delivers large variances, and are not shown here for brevity.
## 5 Concluding remarks
Many researchers in the control community are optimistic about the use of neural networks for dynamic system modeling, perhaps due to their success in CV and NLP. In this work, three representative models, PEM, CFNN and LSTM, are compared for their ability in LTI system identification. As reported, LSTM is unsuitable for dynamic system identification. CFNN has a reasonable structure and can be reduced to a common LTI model. However, no results exist to guarantee its model consistency. Moreover, the large numbers of model parameters of LSTM and CFNN result in large model variances. In simulation studies of the LTI system, CFNN fails to give consistent step responses even in the noise-free case. In the noisy case, CFNN models have larger model variances than the BJ model. When tested on a Hammerstein system, CFNN gives poorer performance. Increasing the number of hidden units or hidden layers does not improve model quality.
This study reveals that there is still a long way to go for neural network based dynamic system identification/modeling. The following remarks can be made based on the findings:
1. The success of neural network models in CV and NLP does not guarantee its success in dynamic system modeling and control;
2. in the noise-free case, numerical optimization for parameter estimation may not make a neural network model converge to a dynamic model that is contained in its model structure, even when the loss function tends to zero;
3. the universal approximation theorem cannot guarantee model consistency;
4. if a neural network model is unsuitable for modeling an LTI system, it will have even more difficulty modeling a nonlinear dynamic system;
5. the performance of neural network based dynamic system modeling should be compared to that of traditional linear and simple nonlinear system identification.
|
2303.00170 | Asymmetric Learning for Graph Neural Network based Link Prediction | Link prediction is a fundamental problem in many graph based applications,
such as protein-protein interaction prediction. Graph neural network (GNN) has
recently been widely used for link prediction. However, existing GNN based link
prediction (GNN-LP) methods suffer from scalability problem during training for
large-scale graphs, which has received little attention by researchers. In this
paper, we first give computation complexity analysis of existing GNN-LP
methods, which reveals that the scalability problem stems from their symmetric
learning strategy adopting the same class of GNN models to learn representation
for both head and tail nodes. Then we propose a novel method, called asymmetric
learning (AML), for GNN-LP. The main idea of AML is to adopt a GNN model for
learning head node representation while using a multi-layer perceptron (MLP)
model for learning tail node representation. Furthermore, AML proposes a
row-wise sampling strategy to generate mini-batch for training, which is a
necessary component to make the asymmetric learning strategy work for training
speedup. To the best of our knowledge, AML is the first GNN-LP method adopting
an asymmetric learning strategy for node representation learning. Experiments
on three real large-scale datasets show that AML is 1.7X~7.3X faster in
training than baselines with a symmetric learning strategy, while having almost
no accuracy loss. | Kai-Lang Yao, Wu-Jun Li | 2023-03-01T01:48:20Z | http://arxiv.org/abs/2303.00170v1 | # Asymmetric Learning for Graph Neural Network based Link Prediction
###### Abstract
Link prediction is a fundamental problem in many graph based applications, such as protein-protein interaction prediction. Graph neural network (GNN) has recently been widely used for link prediction. However, existing GNN based link prediction (GNN-LP) methods suffer from scalability problem during training for large-scale graphs, which has received little attention by researchers. In this paper, we first give computation complexity analysis of existing GNN-LP methods, which reveals that the scalability problem stems from their symmetric learning strategy adopting the same class of GNN models to learn representation for both head and tail nodes. Then we propose a novel method, called asymmetric learning (AML), for GNN-LP. The main idea of AML is to adopt a GNN model for learning head node representation while using a multi-layer perceptron (MLP) model for learning tail node representation. Furthermore, AML proposes a row-wise sampling strategy to generate mini-batch for training, which is a necessary component to make the asymmetric learning strategy work for training speedup. To the best of our knowledge, AML is the first GNN-LP method adopting an asymmetric learning strategy for node representation learning. Experiments on three real large-scale datasets show that AML is 1.7\(\times\)\(\sim\)7.3\(\times\) faster in training than baselines with a symmetric learning strategy, while having almost no accuracy loss.
## 1 Introduction
Link prediction [10], a fundamental problem in many graph based applications, aims to predict the existence of a link that has not been observed. The link prediction problem widely exists in real applications, like drug response prediction [14], protein-protein interaction prediction [11], friendship prediction in social networks [1], knowledge graph completion [15, 16, 17] and product recommendation in recommender systems [18]. Its increased importance in real applications has also promoted great interest in research on link prediction algorithms in the machine learning community.
Link prediction algorithms have been studied for a long time [19, 18, 17], and learning based algorithms have been one dominant class in the past decades. The main idea of learning based algorithms is to learn a deterministic model [13, 14, 15] or a probabilistic model [16, 18, 17] to fit the observed data. In most learning based algorithms, models learn or generate a representation for each node [16], which is used to generate a score or probability of link existence. Traditional learning based algorithms typically do not adopt graph neural networks (GNN) for node representation learning. Although these non-GNN based learning algorithms have achieved much progress in many applications, they are less expressive than GNN in node representation learning.
Recently, graph neural network based link prediction (GNN-LP) methods have been proposed and become one of the most popular algorithms due to their superior performance in accuracy. The key to the success of GNN-LP methods is that they learn node representation from graph structure
and node features in a unified way with GNN, which is a major difference between them and traditional non-GNN based learning algorithms. Existing GNN-LP methods mainly include local methods [21, 17, 23, 24] and global methods [14, 15, 16, 17, 25]. Local methods apply GNN to subgraphs that capture local structural information. Specifically, they first extract an enclosed \(k\)-hop subgraph for each link and then use various labeling tricks [23] to capture the relative positions of nodes in the subgraph. After that, they learn node representation by applying a GNN model to the labeled subgraphs, and then they extract subgraph representation with a readout function [18] for prediction. Global methods learn node representation by directly applying a GNN model to the global graph and then make prediction based on head and tail node representation. Although there exists difference in local methods and global methods, all existing GNN-LP methods have a common characteristic that they adopt a symmetric learning strategy for node representation learning. In particular, they adopt the same class of GNN models to learn representation for both head and tail nodes. An illustration of existing representative GNN-LP methods is presented in Figure 1. Although existing GNN-LP methods have made much progress in learning expressive models, they suffer from scalability problem during training for large-scale graphs, which has attracted little attention by researchers.
In this paper, we propose a novel method, called asymmetric learning (AML), for GNN-LP. The contributions of this paper are listed as follows:
* We give a computation complexity analysis of existing GNN-LP methods, which reveals that the scalability problem stems from their symmetric learning strategy adopting the same class of GNN models to learn representation for both head and tail nodes.
* AML is the first GNN-LP method adopting an asymmetric learning strategy for node representation learning.
* AML proposes a row-wise sampling strategy to generate mini-batches for training, which is a necessary component to make the asymmetric learning strategy work for training speedup.
* Experiments on three real large-scale datasets show that AML is 1.7\(\times\)\(\sim\)7.3\(\times\) faster in training than baselines with a symmetric learning strategy, while having almost no accuracy loss.
## 2 Preliminary
In this section, we introduce notations and some related works for link prediction.
Figure 1: An illustration of existing representative GNN-LP methods. A circle denotes a node. \(\mathcal{G}\) denotes the input graph. \(\mathcal{G}_{1}\) denotes a subgraph extracted from \(\mathcal{G}\). The dashed line between the red and blue circles denotes the target link we want to predict. \(\mathbf{X}\) denotes the node feature matrix. The red and blue rectangles denote the representation of the red and blue circles, respectively. The gray rectangle denotes the subgraph representation of \(\mathcal{G}_{1}\) generated by applying a readout function on representation of the nodes within the subgraph.
**Notations.** We use a boldface uppercase letter, such as \(\mathbf{B}\), to denote a matrix. We use a boldface lowercase letter, such as \(\mathbf{b}\), to denote a vector. \(\mathbf{B}_{i*}\) and \(\mathbf{B}_{*j}\) denote the \(i\)th row and the \(j\)th column of \(\mathbf{B}\), respectively. \(\mathbf{X}{\in}\mathbb{R}^{N\times u}\) denotes the node feature matrix, where \(u\) is the feature dimension and \(N\) is the number of nodes. \(\mathbf{A}{\in}\{0,1\}^{N\times N}\) denotes the adjacency matrix of a graph \(\mathcal{G}\). \(A_{ij}\)=1 iff there is an edge from node \(i\) to node \(j\), otherwise \(A_{ij}\)=0. \(L\) denotes the number of layers for GNN models. \(\mathcal{E}\) denotes the set of links for training. For a link \((i,j)\), we call node \(i\) a _head node_ and call node \(j\) a _tail node_.
**Graph Neural Network.** GNN [14, 15, 16, 17, 18, 19] is a class of models for learning over graph data. In GNN, nodes iteratively encode their first-order and high-order neighbor information through message passing between neighbor nodes [15]. Due to this iterative dependence, the computation complexity for a node increases exponentially with the number of layers. Although some works [15, 16, 16, 17] propose solutions for this exponential complexity, the computation complexity of GNN is still much higher than that of a multi-layer perceptron (MLP).
**Graph Neural Network based Link Prediction.** Benefiting from the powerful ability of GNN in modeling graph data, GNN-LP methods are more expressive than traditional non-GNN based learning algorithms in node representation learning. GNN-LP methods include two major classes, local methods and global methods. For local methods, different methods vary in the labeling tricks they use, which mainly include double radius node labeling (DRNL) [18], distance encoding (DE) [19], the partial zero-one labeling trick [16] and the zero-one labeling trick [16]. As shown in [16], local methods with DRNL and DE perform better than other methods. However, one bottleneck for DRNL and DE is that they need to compute the shortest path distance (SPD) between target nodes and other nodes in subgraphs, and computing SPD is time-consuming during the training process [16]. Although we can compute SPD in the preprocessing step, this instead incurs costly storage overhead for the subgraphs. For global methods [15, 17, 18, 19, 18], they mainly apply different GNN models on the global graphs to generate node representation. Almost all existing GNN-LP methods, including both local methods and global methods, adopt a symmetric learning strategy which utilizes the same class of GNN models to learn representation for both head and tail nodes.
**Non-GNN based Methods.** Besides GNN-LP methods, WLNM [18] and SUREL [16] also show competitive performance in accuracy. The main difference between them and GNN-LP methods is that they do not apply GNN to learn from graphs or subgraphs. For example, WLNM applies an MLP model to learn subgraph representation from the adjacency matrices of extracted subgraphs. SUREL proposes an alternative sampler for subgraph extraction and applies a sequential model, like recurrent neural networks (RNNs), to learn subgraph representation.
In general, GNN-LP methods have higher accuracy than non-GNN based methods [16], but suffer from a scalability problem during training for large-scale graphs. Although global GNN-LP methods are more efficient than local GNN-LP methods, they still have high computation complexity in training due to the adopted symmetric learning strategy. This motivates our work in this paper.
## 3 Asymmetric Learning for GNN-LP
Like most deep learning methods, GNN-LP methods are typically trained in a mini-batch manner. Suppose the number of links in the training set is \(|\mathcal{E}|\). Then existing GNN-LP methods with symmetric learning need to perform \(2|\mathcal{E}|\) times of GNN computation within each epoch. In particular, \(|\mathcal{E}|\) times of GNN computation are for head nodes and another \(|\mathcal{E}|\) times of GNN computation are for tail nodes. It is easy to verify that \(|\mathcal{E}|\) times of GNN computation are inevitable for both head and tail node representation learning with a symmetric learning strategy. Since the computation complexity of GNN grows exponentially with the number of layers and \(|\mathcal{E}|\) is considerably large, existing GNN-LP methods incur a huge computation burden for large-scale graphs.
To solve the scalability problem caused by symmetric learning, we propose AML which is illustrated in Figure 2. The main idea of AML is to adopt a GNN model for learning head node representation while using a multi-layer perceptron (MLP) model for learning tail node representation. Meanwhile,
AML pre-encodes graph structure to avoid information loss for MLP. The rest of this section presents the details of AML.
### Node Representation Learning with AML
We use \(\mathbf{U}{\in}\mathbb{R}^{N\times r}\) to denote the representation of all head nodes and use \(\mathbf{V}{\in}\mathbb{R}^{N\times r}\) to denote the representation of all tail nodes, where each row of \(\mathbf{U}\) and \(\mathbf{V}\) corresponds to a node. We apply a GNN model to learn representation for head nodes while using an MLP model to learn representation for tail nodes1.
Footnote 1: We can also apply a GNN model to learn representation for tail nodes while using an MLP model to learn representation for head nodes. For convenience, we only present the technical details of one case. The technical details for the reversed case are the same as the presented one.
We take SAGE [1], one of the most representative GNN models, as an example to describe the details. Note that other GNN models can also be used in AML. Let \(\hat{\mathbf{A}}\) denote the normalization of \(\mathbf{A}\), which can be row-normalization, column-normalization or symmetric normalization. Formulas for one layer in SAGE are as follows:
\[\mathbf{U}^{(\ell)}_{i*}=f\left(\hat{\mathbf{A}}_{i*}\mathbf{U}^{(\ell-1)} \mathbf{W}^{(\ell)}_{1}+\mathbf{U}^{(\ell-1)}_{i*}\mathbf{W}^{(\ell)}_{2} \right), \tag{1}\]
where \(\ell\) is layer number, \(f(\cdot)\) is an activation function, \(\mathbf{W}^{(\ell)}_{1}{\in}\mathbb{R}^{r\times r}\) and \(\mathbf{W}^{(\ell)}_{2}{\in}\mathbb{R}^{r\times r}\) are learnable parameters at layer \(\ell\). \(\mathbf{U}^{(\ell)}{\in}\mathbb{R}^{N\times r}\) is the node representation at layer \(\ell\) and \(\mathbf{U}^{(0)}{=}\mathbf{X}\). In (1), node \(i\) encodes neighbor information via \(\hat{\mathbf{A}}_{i*}\mathbf{U}^{(\ell-1)}\mathbf{W}^{(\ell)}_{1}\) and updates its own message together with \(\mathbf{U}^{(\ell-1)}_{i*}\mathbf{W}^{(\ell)}_{2}\).
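For concreteness, a minimal PyTorch sketch of the SAGE layer in (1) is given below; the dense \(\hat{\mathbf{A}}\), the ReLU choice for \(f(\cdot)\), and the class name are our illustrative assumptions (a practical implementation would use sparse message passing).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SAGELayer(nn.Module):
    """One SAGE layer as in Eq. (1): U^(l) = f(A_hat U^(l-1) W1 + U^(l-1) W2)."""
    def __init__(self, r):
        super().__init__()
        self.W1 = nn.Linear(r, r, bias=False)  # weights for aggregated neighbor messages
        self.W2 = nn.Linear(r, r, bias=False)  # weights for the node's own message

    def forward(self, A_hat, U_prev):
        # A_hat: (N, N) normalized adjacency; U_prev: (N, r) representation at layer l-1
        neighbor = self.W1(A_hat @ U_prev)  # encode neighbor information
        self_msg = self.W2(U_prev)          # update the node's own message
        return F.relu(neighbor + self_msg)
```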
Different from existing GNN-LP methods, which adopt the same class of GNN models to learn representation for both head and tail nodes, AML applies an MLP model to learn representation for tail nodes. However, naively applying an MLP to tail nodes deteriorates accuracy. Hence, as in [12, 13, 14], we first pre-encode graph structure information into node features in the preprocessing step. The pre-encoding step is as follows:
\[\tilde{\mathbf{V}}^{(0)}=\hat{\mathbf{A}}^{L}\mathbf{X}. \tag{2}\]
To further improve the representation learning for tail nodes, we propose to transfer knowledge from head nodes to tail nodes by sharing parameters. The formula is as follows:
\[\tilde{\mathbf{V}}^{(\ell)}=f\left(\tilde{\mathbf{V}}^{(\ell-1)}\mathbf{W}^{(\ell)}_{1}+\tilde{\mathbf{V}}^{(\ell-1)}\mathbf{W}^{(\ell)}_{2}\right), \tag{3}\]

where we perform knowledge transfer between head nodes and tail nodes by sharing \(\mathbf{W}_{1}^{(\ell)}\) and \(\mathbf{W}_{2}^{(\ell)}\). Since sharing parameters somewhat restricts the expressiveness of \(\tilde{\mathbf{V}}^{(L)}\), we propose to apply an MLP model to learn over the residual \(\Delta^{(0)}\). Adding the residual \(\Delta^{(L)}\) to \(\tilde{\mathbf{V}}^{(L)}\), we obtain the representation for tail nodes as follows:

\[\Delta^{(0)} =\mathbf{X}-\hat{\mathbf{A}}^{L}\mathbf{X}, \tag{4}\] \[\Delta^{(\ell)} =f\left(\Delta^{(\ell-1)}\mathbf{W}^{(\ell)}\right),\] (5) \[\mathbf{V}^{(L)} =\tilde{\mathbf{V}}^{(L)}+\Delta^{(L)}, \tag{6}\]

where \(\mathbf{W}^{(\ell)}\in\mathbb{R}^{r\times r}\) is a learnable parameter at layer \(\ell\).

Figure 2: An illustration of AML. The MLP model \(\mathcal{M}_{1}\) transfers knowledge from head nodes to tail nodes by sharing parameters with the GNN model. The MLP model \(\mathcal{M}_{2}\) learns over the residual between \(\hat{\mathbf{A}}^{L}\mathbf{X}\) and \(\mathbf{X}\). AML obtains the tail node representation, marked with the blue rectangle, by summing up the outcomes of \(\mathcal{M}_{1}\) and \(\mathcal{M}_{2}\). AML obtains the head node representation, marked with the red rectangle, by summing up the tail node representation and the outcomes of the GNN model. The predictor generates predictions according to the input vector representation.
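Continuing the sketch above, the tail-node path in (2)-(6) can be written as follows; we assume for readability that the input features have already been projected to dimension \(r\), and the parameter sharing reuses the `W1`/`W2` modules of the GNN layers.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def pre_encode(A_hat, X, L):
    """Eq. (2): V~^(0) = A_hat^L X, computed once in preprocessing and cached."""
    V0 = X
    for _ in range(L):
        V0 = A_hat @ V0
    return V0

class TailPath(nn.Module):
    """Tail-node representation via Eqs. (3)-(6), sharing W1/W2 with the GNN layers."""
    def __init__(self, sage_layers, r):
        super().__init__()
        self.sage_layers = sage_layers                       # SAGELayer modules (shared weights)
        self.res = nn.ModuleList(nn.Linear(r, r, bias=False)
                                 for _ in sage_layers)       # residual MLP of Eq. (5)

    def forward(self, V0, delta0):
        V, d = V0, delta0                                    # V~^(0) and Δ^(0) = X - A_hat^L X
        for sage, lin in zip(self.sage_layers, self.res):
            V = F.relu(sage.W1(V) + sage.W2(V))              # Eq. (3), shared parameters
            d = F.relu(lin(d))                               # Eq. (5)
        return V + d                                         # Eq. (6)
```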
Note that if BatchNorm [15] is applied to \(\mathbf{U}^{(\ell)}\) and \(\tilde{\mathbf{V}}^{(\ell)}\), we keep individual BatchNorm parameters for each of them. Since \(\mathbf{U}^{(\ell)}\) and \(\tilde{\mathbf{V}}^{(\ell)}\) have different scales and lie in different representation spaces, it is reasonable to keep individual BatchNorm parameters for them. The pre-encoding step only needs to be performed once, and the resulting \(\hat{\mathbf{A}}^{L}\mathbf{X}\) in (2) can be saved for the entire training process.
### Learning Objective of AML
According to the definitions in (1) and (6), modeling links with \(\mathbf{U}^{(L)}\) and \(\mathbf{V}^{(L)}\) can capture directed relations but struggles to capture undirected relations. Our solutions are twofold. Firstly, we represent each undirected link by two directed ones with opposite directions. Secondly, motivated by the work in [11], AML models both _homophily_ and _stochastic equivalence_ [10]. As a result, the formulas for the prediction of a pair \((i,j)\) are as follows:
\[\mathbf{U}=\mathbf{U}^{(L)}+\mathbf{V}^{(L)},\quad\mathbf{V}= \mathbf{V}^{(L)}, \tag{7}\] \[S_{ij}=f_{\mathbf{\Theta}}\left(\mathbf{U}_{i*}\odot\mathbf{V}_ {j*}\right), \tag{8}\]
where \(\mathbf{U}\) and \(\mathbf{V}\) are the representations for head and tail nodes, respectively. \(\mathbf{V}^{(L)}\) in \(\mathbf{U}\) is included to model the _homophily_ feature in graph data. \(S_{ij}\) is the prediction for the pair \((i,j)\). \(\odot\) denotes element-wise multiplication. \(f_{\mathbf{\Theta}}(\cdot)\) is an MLP model with parameter \(\mathbf{\Theta}\).
Given node representation \(\mathbf{U}\) and \(\mathbf{V}\), the learning objective of link prediction is as follows:
\[\min_{\mathcal{W}}\frac{1}{|\mathcal{E}|}\sum_{(i,j)\in\mathcal{E}}f_{loss}(S_ {ij},Y_{ij})+\frac{\lambda}{2}\sum_{\mathbf{W}\in\mathcal{W}}\|\mathbf{W}\|_{ F}^{2}, \tag{9}\]
where \(\mathcal{E}\) denotes the training set. \(\mathcal{W}\) is the set of all learnable parameters. \(Y_{ij}\) is the ground-truth label for the pair \((i,j)\). \(f_{loss}(\cdot,\cdot)\) is a loss function, such as cross-entropy loss. \(\lambda\) is a coefficient for the regularization term of \(\mathcal{W}\). \(\|\cdot\|_{F}\) denotes the Frobenius norm of a matrix.
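A sketch of the predictor in (7)-(8) and the mini-batch loss in (9) follows; the two-layer MLP predictor and the binary cross-entropy choice for \(f_{loss}\) are our illustrative assumptions.

```python
import torch
import torch.nn as nn

class Predictor(nn.Module):
    """Eqs. (7)-(8): score a pair (i, j) from head/tail node representations."""
    def __init__(self, r):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(r, r), nn.ReLU(), nn.Linear(r, 1))

    def forward(self, UL, VL, heads, tails):
        U = UL + VL                                       # Eq. (7): add V^(L) to model homophily
        V = VL
        return self.mlp(U[heads] * V[tails]).squeeze(-1)  # Eq. (8): element-wise product

# Eq. (9): cross-entropy loss; the λ/2 regularization term corresponds to weight decay.
loss_fn = nn.BCEWithLogitsLoss()
# optimizer = torch.optim.Adam(params, lr=1e-3, weight_decay=lam)
# loss = loss_fn(predictor(UL, VL, heads, tails), labels)
```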
### Row-wise Sampling for Generating Mini-Batch
Like most deep learning methods, existing GNN-LP methods are typically trained in a mini-batch manner. However, they adopt an edge-wise sampling strategy to generate mini-batches for training. In particular, they first randomly sample a mini-batch of edges \(\mathcal{E}_{\Omega}\) from \(\mathcal{E}\) at each iteration and then optimize the objective function based on \(\mathcal{E}_{\Omega}\). For example, if we adopt the edge-wise sampling strategy for the objective function of AML in (9), the corresponding learning objective at each iteration will be as follows:
\[\min_{\mathcal{W}}\frac{1}{|\mathcal{E}_{\Omega}|}\sum_{(i,j)\in\mathcal{E}_{ \Omega}}f_{loss}(S_{ij},Y_{ij})+\frac{\lambda}{2}\sum_{\mathbf{W}\in\mathcal{W} }\|\mathbf{W}\|_{F}^{2}. \tag{10}\]
Suppose \(\mathcal{E}_{\Omega}\)=\(\{(i_{1},j_{1}),\cdots,(i_{B},j_{B})\}\) with \(B\) as the mini-batch size. We respectively use \(C\) and \(M\) to denote the computation complexity for generating a node representation by GNN and MLP. Since \(\mathcal{E}_{\Omega}\) is edge-wise randomly sampled from \(\mathcal{E}\) and \(N\) is of a large value for large-scale graphs, \((i_{1},\cdots,i_{B})\) will have a relatively small number of repeated nodes. Then the edge-wise sampling strategy for AML has a computation complexity of \(\mathcal{O}(|\mathcal{E}|\cdot(C+M))\) for each epoch, which has the same order of magnitude in computation complexity as existing GNN-LP methods and hence is undesirable.
**Algorithm 1** Learning Algorithm for AML

```
Input:  N (number of nodes in the input graph), L (number of model layers),
        E (the training set), B (mini-batch size), T (maximum number of epochs).
Output: W (model parameters).
 1: Pre-encode the graph structure by (2);
 2: B' = B / (|E| / N);
 3: for t = 1 : T do
 4:   for q = 1 : (N / B') do
 5:     Sample a set V_Ω of B' head nodes from {1, ..., N};
 6:     Generate E_Ω from V_Ω by (11);
 7:     Compute U_i*^(L) for each node i in V_Ω by (1);
 8:     Compute V_j*^(L) for each node j in E_Ω by (3)-(6);
 9:     Compute U_i* and V_j* for each (i, j) in E_Ω by (7);
10:     Compute S_ij for each (i, j) in E_Ω by (8);
11:     Update model parameters W by optimizing (10);
        /* the objective with row-wise sampling is the same as with edge-wise sampling in (10) */
12:   end for
13: end for
```
To solve the high computation complexity of the edge-wise sampling strategy adopted by existing GNN-LP methods, in AML we propose a row-wise sampling strategy for generating mini-batches. More specifically, at each mini-batch iteration we first sample a set of row indices \(\mathcal{V}_{\Omega}\) (head nodes) from \(\{1,2,\cdots,N\}\). Then we construct the mini-batch \(\mathcal{E}_{\Omega}\) as follows:
\[\mathcal{E}_{\Omega}=\{(i,j)|i\in\mathcal{V}_{\Omega}\wedge(i,j)\in\mathcal{E}\}. \tag{11}\]
By using this row-wise sampling strategy, AML has a computation complexity of \(\mathcal{O}(|\mathcal{V}_{\Omega}|\cdot C+(|\mathcal{V}_{\Omega}|/N)|\mathcal{ E}|\cdot M)\) for each mini-batch iteration. Since \(\mathcal{V}_{\Omega}\) iterates over \(\{1,\cdots,N\}\) for \(N/|\mathcal{V}_{\Omega}|\) times to go through the whole training set, this row-wise sampling strategy for AML has a computation complexity of \(\mathcal{O}(N\cdot C+|\mathcal{E}|\cdot M)\) for each epoch. Therefore, the row-wise sampling strategy enables AML to decouple the factor of \(|\mathcal{E}|\) from the computation complexity of GNN, leading to a complexity reduction by orders of magnitude compared to the edge-wise sampling strategy.
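A minimal sketch of the row-wise mini-batch generation in (11), assuming the training links are stored head-node-wise in a Python dictionary (our own data layout):

```python
import random

def row_wise_minibatch(adj_list, N, num_rows):
    """Sample |V_Ω| head nodes, then take all of their training links, as in Eq. (11)."""
    V_omega = random.sample(range(N), num_rows)              # sampled head nodes
    E_omega = [(i, j) for i in V_omega for j in adj_list.get(i, ())]
    return V_omega, E_omega
```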
Algorithm 1 summarizes the whole learning algorithm for AML.
### Complexity Analysis
The computation complexities of different methods are summarized in Table 1. For large-scale graphs, we often have \(s^{L}{<}N\) and \(s^{k}{<}N\). Typically, \(s^{L}r^{2}\) has the same order of magnitude as \(Ls^{k}r^{2}\). According to (1), GNN has a computation complexity of \(C{=}\mathcal{O}\left(s^{L}\cdot r^{2}\right)\) to generate a node representation, while the corresponding computation complexity of MLP is \(M{=}\mathcal{O}\left(L\cdot r^{2}\right)\) according to (3)-(6). It is easy to verify that \(M{\ll}C\). Although many works [17, 19, 20, 21] have proposed solutions to reduce \(C\), \(C\) is still much larger than \(M\). Here we suppose all methods are trained in a mini-batch manner, which has been adopted by almost all deep learning models including GNN.

\begin{table}
\begin{tabular}{l c} \hline
Method & Computation complexity \\ \hline
Local GNN-LP & \(\mathcal{O}\left(2Ls^{k}r^{2}\cdot|\mathcal{E}|\right)\) \\
Local GNN-LP (w/ RWS) & \(\mathcal{O}\left(2Ls^{k}r^{2}\cdot|\mathcal{E}|\right)\) \\
Global GNN-LP & \(\mathcal{O}\left(2s^{L}r^{2}\cdot|\mathcal{E}|\right)\) \\
Global GNN-LP (w/ RWS) & \(\mathcal{O}\left(s^{L}r^{2}\cdot(|\mathcal{E}|+N)\right)\) \\ \hline
AML (w/o RWS) & \(\mathcal{O}\left(\left(s^{L}r^{2}+Lr^{2}\right)\cdot|\mathcal{E}|\right)\) \\
AML & \(\mathcal{O}\left(s^{L}r^{2}\cdot N+Lr^{2}\cdot|\mathcal{E}|\right)\) \\ \hline
\end{tabular}
\end{table}
Table 1: Complexity analysis. \(L\) is the number of model layers. \(s{=}\|\mathbf{A}\|_{0}/N\) is the average number of neighbors for a node in \(\mathcal{G}\). \(r\) is the dimension of node representation. \(|\mathcal{E}|\) is the number of links in the training set \(\mathcal{E}\). \(N\) is the number of nodes in graph \(\mathcal{G}\). \(k\) in local GNN-LP is the number of hops for the enclosed subgraphs. “RWS” denotes row-wise sampling. “w/ RWS” in local GNN-LP and global GNN-LP means generating mini-batches with RWS. “w/o RWS” in AML means generating mini-batches without RWS but with an edge-wise sampling strategy.

From Table 1, we can get the following results. Firstly, AML has a computation complexity of \(\mathcal{O}(N\cdot C+|\mathcal{E}|\cdot M)\), which is much lower than \(\mathcal{O}(2|\mathcal{E}|\cdot C)\) of existing GNN-LP methods. Secondly, even with our proposed row-wise sampling strategy, existing GNN-LP methods still have a computation complexity of \(\mathcal{O}((|\mathcal{E}|+N)\cdot C)\), with no change in the order of magnitude. The reason is that they still need to perform \(|\mathcal{E}|\) times of GNN computation for tail nodes within each epoch.
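As a rough illustration using the dataset statistics reported later in Table 2: on ogbl-citation2, \(N\approx 2.9\times 10^{6}\) and \(|\mathcal{E}|\approx 3.0\times 10^{7}\), so symmetric learning performs \(2|\mathcal{E}|\approx 6.0\times 10^{7}\) GNN computations per epoch while AML performs only \(N\approx 2.9\times 10^{6}\), roughly a \(20\times\) reduction in GNN calls; the remaining \(|\mathcal{E}|\) MLP computations are comparatively cheap since \(M\ll C\).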
## 4 Experiments
In this section, we evaluate AML and baselines on three real datasets. All methods are implemented with Pytorch [19] and Pytorch-Geometric Library [18]. All experiments are run on an NVIDIA RTX A6000 GPU server with 48 GB of graphics memory.
### Settings
**Datasets.** Datasets for evaluation include ogbl-collab2, ogbl-ppa and ogbl-citation23. The first two are medium-scale datasets with hundreds of thousands of nodes. The last one is a large-scale dataset with millions of nodes. For ogbl-ppa, since the provided node features are uninformative, we apply matrix factorization [17] to generate new features for nodes. The first two datasets are for undirected link prediction, while the last one is for directed link prediction. The statistics of the datasets are summarized in Table 2. Since most GNN-LP methods adopt the evaluation metrics provided by [19], we also follow these evaluation settings.
Footnote 2: In ogbl-collab, there are data leakage issues in the provided graph adjacency matrix \(\mathbf{A}\). We remove those positive links in the validation and testing set from \(\mathbf{A}\).
Footnote 3: [https://ogb.stanford.edu/docs/linkprop/](https://ogb.stanford.edu/docs/linkprop/)
**Baselines.** AML is actually a global GNN-LP method. We first compare AML with existing global GNN-LP baselines by adopting the same GNN for both AML and the baselines. Since almost all existing global GNN-LP methods are developed based on the graph autoencoder framework proposed in [16], we mainly adopt the GNNs under the graph autoencoder framework. In particular, we adopt SAGE [17] and GAT [20] as the GNNs for both AML and the baselines, because SAGE and GAT are respectively representative non-attention based and attention based GNN models under the graph autoencoder framework.
Then we compare AML with non-GNN baselines and local GNN-LP baselines. Non-GNN baselines include common neighbors (CN) [14], Adamic-Adar (AA) [1], Node2vec [19] and matrix factorization (MF) [18]. Local GNN-LP baselines include DE-GNN [17] and SEAL [19, 20]. For local GNN-LP methods, we extract enclosed subgraphs in an online way to simulate large-scale settings, following the original work [20].
**Hyper-parameter Settings.** Hyper-parameters include \(L\) (layer number), \(r\) (hidden dimension), \(\lambda\) (coefficient for the regularization of parameters), \(T\) (maximum number of epochs), \(\eta\) (learning rate) and \(B\) (mini-batch size). On ogbl-collab, \(L\)=3, \(r\)=256, \(\lambda\)=0, \(T\)=400, \(\eta\)=0.001 and \(B\)=65,536.
\begin{table}
\begin{tabular}{l c c c} \hline \hline
Datasets & ogbl-collab & ogbl-ppa & ogbl-citation2 \\ \hline
\#Nodes & 235,868 & 576,289 & 2,927,963 \\
\#Edges & 1,285,465 & 30,326,273 & 30,561,187 \\
Features/Node & 128 & 128 & 128 \\
\#Training links & 1,179,052 & 21,231,931 & 30,387,995 \\
\#Validation links & 160,084 & 9,062,562 & 86,682,596 \\
\#Test links & 146,329 & 6,031,780 & 86,682,596 \\
Metric & Hits@50 & Hits@100 & MRR \\ \hline \hline
\end{tabular}
\end{table}
Table 2: Statistics of datasets.
On ogbl-ppa, \(L\)=3, \(r\)=256, \(\lambda\)=0, \(T\)=50, \(\eta\)=0.01 and \(B\)=65,536. On ogbl-citation2, \(L\)=3, \(r\)=256, \(\lambda\)=0, \(T\)=50, \(\eta\)=0.005 and \(B\)=65,536. We use Adam [1] as the optimizer. We use GraphNorm [1] to accelerate the training. We adopt BNS [13] as the neighbor sampling strategy for large-scale training. In BNS, there are three hyper-parameters: \(\tilde{s}_{0}\), \(\tilde{s}_{1}\) and \(\delta\). \(\tilde{s}_{0}\) denotes the number of sampled neighbors for nodes at the output layer. \(\tilde{s}_{1}\) denotes the number of sampled neighbors for nodes at other layers. \(\delta\) denotes the ratio of blocked neighbors. On ogbl-collab, \(\tilde{s}_{0}\)=7, \(\tilde{s}_{1}\)=2, \(\delta\)=1/2. On ogbl-ppa, \(\tilde{s}_{0}\) equals the number of all neighbors, \(\tilde{s}_{1}\)=4, \(\delta\)=5/6. On ogbl-citation2, \(\tilde{s}_{0}\)=7, \(\tilde{s}_{1}\)=2, \(\delta\)=1/2. We run each setting 5 times and report the mean with standard deviation.
### Comparison with Global GNN-LP Baselines
Comparison with global GNN-LP baselines is shown in Table 3 and Figure 3. According to the results, we can draw the following conclusions. Firstly, AML has almost no accuracy loss in all cases compared to baselines4. For example, AML's accuracy is within the standard deviation of the baselines' accuracy on ogbl-ppa and ogbl-citation2. In fact, AML achieves an accuracy gain of 1.17%\(\sim\)2.69% on ogbl-collab compared to baselines. Secondly, AML is about 1.7\(\times\)\(\sim\)7.3\(\times\) faster in training than baselines while incurring almost no accuracy loss. For example, AML is 1.7\(\times\) faster than baselines on ogbl-collab, 3.1\(\times\) faster on ogbl-ppa, and 7.3\(\times\) faster on ogbl-citation2. In particular, baselines need about 5.8 days and 4.2 days to obtain the mean of results while AML only needs 1.7 days and 0.6 days, on ogbl-ppa and ogbl-citation2 respectively. Thirdly, the speedup of AML relative to baselines increases with the size of the graphs. For example, the number of nodes in ogbl-collab, ogbl-ppa and ogbl-citation2 increases in ascending order, and the speedup of AML relative to baselines increases in a consistent order on these three graph datasets. Finally, AML has a better accuracy-time trade-off than baselines, as can be concluded from Figure 3. For example, AML is faster than baselines when achieving the same accuracy.
Table 3: Comparison with global GNN-LP baselines. “Time” denotes the whole time to complete training. “Gap” denotes the accuracy of AML minus that of baselines.
### Comparison with non-GNN and Local GNN-LP Baselines
Comparison with non-GNN and local GNN-LP baselines is shown in Table 4. According to the results, we can draw the following conclusions. Firstly, AML is comparable with state-of-the-art local GNN-LP methods in accuracy. For example, AML has almost no accuracy loss compared to the best baseline DE-GNN on ogbl-collab. On ogbl-ppa, AML achieves an accuracy gain of 3.43% compared to the best baseline SEAL. On ogbl-citation2, AML incurs an accuracy loss of less than 1% compared to the best baseline SEAL. Secondly, AML is about 13\(\times\)\(\sim\)110\(\times\) faster than local GNN-LP baselines. For example, AML is about 13\(\times\) faster than DE-GNN and SEAL on ogbl-collab, about 66\(\times\) faster on ogbl-ppa, and about 110\(\times\) faster on ogbl-citation2. In particular, DE-GNN and SEAL need 3.5 days to obtain the mean accuracy on ogbl-collab, which is a relatively small-scale dataset, while AML only needs 1.2 hours. Finally, GNN-LP methods can achieve better accuracy than non-GNN methods. For example, DE-GNN improves accuracy by 13% over MF on ogbl-ppa and by 17% over Node2vec on ogbl-citation2. AML improves accuracy by 17% over MF on ogbl-ppa and by 16% over Node2vec on ogbl-citation2.
\begin{table}
\begin{tabular}{l l c c c c c c} \hline \hline
Category & Methods & \multicolumn{2}{c}{ogbl-collab} & \multicolumn{2}{c}{ogbl-ppa} & \multicolumn{2}{c}{ogbl-citation2} \\ \cline{3-8}
 & & Hits@50(\%)\(\uparrow\) & Time(s)\(\downarrow\) & Hits@100(\%)\(\uparrow\) & Time(s)\(\downarrow\) & MRR(\%)\(\uparrow\) & Time(s)\(\downarrow\) \\ \hline
Non-GNN & CN & 49.96\(\pm\)0.00\({}^{*}\) & – & 27.60\(\pm\)0.00 & – & 51.47\(\pm\)0.00 & – \\
 & AA & 56.49\(\pm\)0.00\({}^{*}\) & – & 32.45\(\pm\)0.00 & – & 51.89\(\pm\)0.00 & – \\
 & Node2vec & 49.29\(\pm\)0.64\({}^{*}\) & – & 22.26\(\pm\)0.88 & – & 61.41\(\pm\)0.11 & – \\
 & MF & 37.93\(\pm\)0.76\({}^{*}\) & – & 32.29\(\pm\)0.94 & – & 51.86\(\pm\)4.43 & – \\ \hline
Local GNN-LP & DE-GNN & **57.87\(\pm\)0.79**\({}^{*}\) & \(6.1\times 10^{4}\) & 45.70\(\pm\)3.46 & \(2.0\times 10^{6}\) & 78.85\(\pm\)0.17 & \(1.1\times 10^{6}\) \\
 & SEAL & 57.55\(\pm\)0.72\({}^{*}\) & \(6.5\times 10^{4}\) & 48.80\(\pm\)3.16 & \(2.0\times 10^{6}\) & **87.67\(\pm\)0.32** & \(1.1\times 10^{6}\) \\ \hline
Global GNN-LP & AML (S) & 57.26\(\pm\)1.25 & **\(4.4\times 10^{3}\)** & 49.73\(\pm\)0.89 & **\(3.0\times 10^{4}\)** & 86.55\(\pm\)0.06 & **\(1.0\times 10^{4}\)** \\
 & AML (G) & 57.60\(\pm\)0.71 & \(4.5\times 10^{3}\) & **50.23\(\pm\)0.78** & \(3.2\times 10^{4}\) & 86.70\(\pm\)0.05 & **\(1.0\times 10^{4}\)** \\ \hline \hline
\end{tabular}
\end{table}
Table 4: Comparison with non-GNN and local GNN-LP baselines. “(S)” in AML means using SAGE as the GNN model, and “(G)” in AML means using GAT as the GNN model. Accuracy of non-GNN and local GNN-LP baselines on ogbl-ppa and ogbl-citation2 is from [ZLX\({}^{+}\)21]. “*” denotes the results achieved by rerunning the authors’ code on the clean ogbl-collab.
Figure 3: Test accuracy-time curves of AML and global GNN-LP baselines.
### Necessity of GNN in AML
Here we perform an experiment to verify the necessity of GNN in AML. We design a method called symmetric MLP (SMLP), which applies MLP with pre-encoding to learn representation for both head and tail nodes. More specifically, SMLP adopts the techniques for learning tail node representation in AML, i.e., (2) and (3), to learn representation for both head and tail nodes.
Results are shown in Table 5. We can find that SMLP is much worse than AML on all datasets. This shows that training a GNN model for node representation learning is necessary to achieve high accuracy.
### Reversed Asymmetric Learning
In the above experiments, AML learns representation for head nodes with a GNN model while learning representation for tail nodes with an MLP model. Here we verify whether the reversed case also performs well. We denote the reversed case as AML-R, which learns representation for head nodes with an MLP model while learning representation for tail nodes with a GNN model. Results are shown in Table 6, which shows that AML and AML-R have similar accuracy.
### Ablation Study
In this subsection, we study the effectiveness of different components in AML, including knowledge transfer, the residual term \(\Delta^{(L)}\), pre-encoding graph structure and modeling homophily. Results are shown in Table 7. We can make the following observations. Firstly, knowledge transfer between head and tail nodes effectively improves the accuracy of AML: it improves accuracy by 3.00% on ogbl-collab, by 2.85% on ogbl-ppa and by 0.42% on ogbl-citation2. Secondly, \(\Delta^{(L)}\) is beneficial for AML: including \(\Delta^{(L)}\) improves accuracy by 3.04% on ogbl-collab, by 3.19% on ogbl-ppa and by 0.40% on ogbl-citation2. Thirdly, modeling homophily is helpful for AML: it improves accuracy by 1.87% on ogbl-collab, by 2.37% on ogbl-ppa and by 0.87% on ogbl-citation2. Finally, pre-encoding graph structure plays a crucial role in AML: without it, AML has an accuracy loss of about 13% on ogbl-collab, 47% on ogbl-ppa and 25% on ogbl-citation2.
\begin{table}
\begin{tabular}{l c c c} \hline \hline
Methods & ogbl-collab & ogbl-ppa & ogbl-citation2 \\ \cline{2-4}
 & Hits@50 (\%) \(\uparrow\) & Hits@100 (\%) \(\uparrow\) & MRR (\%) \(\uparrow\) \\ \hline
AML-R (S) & 57.15 \(\pm\) 0.32 & 49.73 \(\pm\) 0.45 & 85.70 \(\pm\) 0.10 \\
AML (S) & 57.26 \(\pm\) 1.25 & 49.73 \(\pm\) 0.89 & 86.55 \(\pm\) 0.06 \\
AML-R (G) & 57.08 \(\pm\) 1.19 & **50.30 \(\pm\) 0.61** & 85.91 \(\pm\) 0.04 \\
AML (G) & **57.60 \(\pm\) 0.71** & 50.23 \(\pm\) 0.78 & **86.70 \(\pm\) 0.05** \\ \hline \hline
\end{tabular}
\end{table}
Table 6: Reversed asymmetric learning. AML-R denotes the reversed case of AML, which learns representation for head nodes with an MLP model while learning representation for tail nodes with a GNN model. “(S)” means using SAGE as the GNN model, and “(G)” means using GAT as the GNN model.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline
Methods & \multicolumn{2}{c}{ogbl-collab} & \multicolumn{2}{c}{ogbl-ppa} & \multicolumn{2}{c}{ogbl-citation2} \\ \cline{2-7}
 & Hits@50 (\%) \(\uparrow\) & Gap & Hits@100 (\%) \(\uparrow\) & Gap & MRR (\%) \(\uparrow\) & Gap \\ \hline
SMLP & 47.25 \(\pm\) 0.89 & – & 47.42 \(\pm\) 1.37 & – & 69.82 \(\pm\) 0.05 & – \\ \hline
AML (S) & 57.26 \(\pm\) 1.25 & +10.01 & 49.73 \(\pm\) 0.89 & +2.31 & 86.55 \(\pm\) 0.06 & +16.73 \\
AML (G) & **57.60 \(\pm\) 0.71** & +10.35 & **50.23 \(\pm\) 0.78** & +2.81 & **86.70 \(\pm\) 0.05** & +16.88 \\ \hline \hline
\end{tabular}
\end{table}
Table 5: Experiment to verify the necessity of GNN in AML. SMLP applies MLP with pre-encoding to learn representation for both head and tail nodes. “Gap” denotes the accuracy of AML minus that of SMLP.
## 5 Conclusions
Graph neural network based link prediction (GNN-LP) methods have achieved better accuracy than non-GNN based link prediction methods, but suffer from a scalability problem for large-scale graphs. Our computation complexity analysis reveals that the scalability problem of existing GNN-LP methods stems from their symmetric learning strategy for node representation learning. Motivated by this finding, we propose a novel method called AML for GNN-LP. To the best of our knowledge, AML is the first GNN-LP method adopting an asymmetric learning strategy for node representation learning. Extensive experiments show that AML is significantly faster than baselines with a symmetric learning strategy while having almost no accuracy loss.
|
2304.09695 | Big-Little Adaptive Neural Networks on Low-Power Near-Subthreshold
Processors | This paper investigates the energy savings that near-subthreshold processors
can obtain in edge AI applications and proposes strategies to improve them
while maintaining the accuracy of the application. The selected processors
deploy adaptive voltage scaling techniques in which the frequency and voltage
levels of the processor core are determined at the run-time. In these systems,
embedded RAM and flash memory size is typically limited to less than 1 megabyte
to save power. This limited memory imposes restrictions on the complexity of
the neural network models that can be mapped to these devices and the required
trade-offs between accuracy and battery life. To address these issues, we
propose and evaluate alternative 'big-little' neural network strategies to
improve battery life while maintaining prediction accuracy. The strategies are
applied to a human activity recognition application selected as a demonstrator
that shows that compared to the original network, the best configurations
obtain an energy reduction measured at 80% while maintaining the original level
of inference accuracy. | Zichao Shen, Neil Howard, Jose Nunez-Yanez | 2023-04-19T14:36:30Z | http://arxiv.org/abs/2304.09695v1 | # Big-Little Adaptive Neural Networks on Low-Power Near-Subthreshold Processors
###### Abstract
This paper investigates the energy savings that near-subthreshold processors can obtain in edge AI applications and proposes strategies to improve them while maintaining the accuracy of the application. The selected processors deploy adaptive voltage scaling techniques in which the frequency and voltage levels of the processor core are determined at the run-time. In these systems, embedded RAM and flash memory size is typically limited to less than 1 megabyte to save power. This limited memory imposes restrictions on the complexity of the neural networks model that can be mapped to these devices and the required trade-offs between accuracy and battery life. To address these issues, we propose and evaluate alternative 'big-little' neural network strategies to improve battery life while maintaining prediction accuracy. The strategies are applied to a human activity recognition application selected as a demonstrator that shows that compared to the original network, the best configurations obtain an energy reduction measured at 80% while maintaining the original level of inference accuracy.
## 1 Introduction
Over the past few decades, the rapid development of the Internet of Things (IoT) and deep learning has increased the demand for deploying deep neural networks (DNNs) to low-power devices [1]. Due to high latency and privacy issues, cloud computing tasks are gradually being transferred to the edge in areas such as image recognition and natural language processing [2]. The limitations in memory size and computing power mean that large neural networks with millions of parameters cannot be easily deployed on edge devices such as microcontroller units (MCUs), which in many cases have less than one megabyte of flash memory capacity [1, 2]. Memory is kept low to save costs and reduce power usage since power gating memory blocks that are not in use is not a feature available in these devices.
Maximizing device usage time is an important goal and, focusing on this objective, we investigate an adaptive 'big-little' neural network system which consists of a big network and multiple little networks to achieve energy-saving inference by limiting the number of big network executions without degrading accuracy. We call this organization 'big-little' since it draws inspiration from the 'big-little' technology popularized by ARM that combines complex and light processors in a single SoC. Our big network has better accuracy but with a longer inference time, while the little networks have a faster inference speed. Most of the time, the big network remains in sleeping mode and it is only activated when the little network determines that it cannot handle the work at the required level of confidence.
In this research, we focus on establishing and deploying the complete adaptive neural network system on the edge device. We investigate how to manage the primary and secondary networks to have a faster, more accurate, and more energy-efficient performance using a human activity recognition (HAR) application as a popular example of an edge application. The contribution of this research is summarized below:
* We evaluate state-of-the-art near-threshold processors with adaptive voltage scaling and compare them to a standard edge processor.
* We optimize a popular edge application targeting a human activity recognition (HAR) model based on _TensorFlow_ for MCU deployment using different vendor toolchains and compilers.
* We propose novel 'big-little' strategies suitable for adaptive neural network systems achieving fast inference and energy savings.
* We made our work open source at [https://github.com/DarkSZChao/Big-Little_NN_Strategies](https://github.com/DarkSZChao/Big-Little_NN_Strategies) (accessed on 9 March 2022) to further promote work in this field.
This paper is organized as follows. In Section 2, we present an overview of the state-of-the-art hardware for low-power edge AI, frameworks and relevant algorithmic techniques. Then, an initial evaluation in terms of performance and energy cost of near-threshold MCUs and standard MCUs is carried out in Section 3. In Section 4, we propose and evaluate three different configurations of adaptive neural network systems with different features and performance characteristics. Section 5 describes and demonstrates the implementation steps needed to target the selected low-power MCUs. The results obtained in terms of speed, accuracy and energy are presented in Section 6. Finally, the conclusions and future work are discussed in Section 7.
## 2 Background and Related Work
In this section, we present an overview of current state-of-the-art hardware with power profiles in the order of 1 watt or less for edge AI and then algorithmic techniques and frameworks optimized to target this hardware.
### Hardware for Low-Power Edge AI
The high demand for AI applications at the edge has resulted in a significant increase in hardware optimized for low-power levels. For example, Google has delivered a light version of the Tensor Processing Unit (TPU) called Edge TPU which is able to provide power-efficient inference at 2 trillion MAC operations per second per watt (2TMAC/s/W) [3]. This state-of-the-art device is able to execute mobile version models such as MobileNet V2 at almost 400 FPS. The Cloud TPU focuses on training complex models, while the Edge TPU is designed to perform inference in low-power systems. Targeting significantly lower power than the Edge TPU, Ambiq released the Apollo family of near-threshold processors based on the 32-bit ARM Cortex-M4F processor. These devices can reach much lower energy usage measured at only 6 uA/MHz at 3.3 V under the working mode, and 1 uA/MHz at 3.3 V under sleep mode. The Apollo3 device present in the SparkFun board has 1 MB of flash memory and 384 KB of low-leakage RAM [4]. Similarly, Eta Compute has targeted energy-efficient endpoint AI solutions with the ECM3532 processor. This device is based on an ARM Cortex-M3 32-bit CPU and a separate CoolFlux DSP to speed up machine learning operations in an energy-efficient manner. The ECM3532 available in the AI vision board consumes less than 5 uA/MHz in normal working mode and 1 uA/MHz in sleep mode. According to Eta Compute, its implementation of self-timed continuous voltage and frequency scaling technology (CVFS) achieves a power profile of just 1 mW [5,6]. A characteristic of these near-threshold devices is that voltage scaling is applied to the core but it is not applied to the device's SRAM/flash due to the limited margining possible in memory cells.
Both Apollo3 and ECM3532 are based on the popular ARM architecture but, lately, the open-source instruction set architecture RISC-V has also received significant attention in this field. For example, GAP8 developed by GreenWaves Technologies features an 8-core compute cluster of RISC-V processors and an additional CNN accelerator [7]. The compute cluster is coupled with an additional ultra-low power MCU with 30 \(\upmu\)W state-retentive sleep power for control and communication functions. For CNN inference (90 MHz, 1.0 V), GAP8 delivers an energy efficiency of 600 GMAC/s/W and a worst-case power envelope of 75 mW [7].
Other examples of companies exploring the near-threshold regime include Minima, which has been involved in designs demonstrating achievable power savings [8]. Minima offers ultra-wide dynamic voltage and frequency scaling (DVFS) which is able to scale frequency and/or operating voltage based on the workload. This approach, combined with the dynamic margining approach from both Minima and ARM, is able to save energy by up to 15\(\times\) to 20\(\times\)[9]. The interest in adaptive voltage scaling hardware has resulted in a €100 m European project led by STMicroelectronics to develop the next generation of edge AI microcontrollers and software using low-power FD-SOI and phase change technology. This project aims to deliver the chipset and solutions for the automotive and industrial markets with a very high computing capacity of 10 TOPS per watt, which is significantly more powerful than existing microcontrollers [10].
### Algorithmic Techniques for Low-Power Edge AI
Over the years, different algorithmic approaches have appeared to optimize inference on edge devices with a focus on techniques such as quantization, pruning, heterogeneous models and early termination. The deep quantization of network weights and activations is a well-known approach to optimize network models for edge deployments [11; 12]. Examples include [13], which uses extremely low precision (e.g., 1-bit or 2-bit) weights and activations, achieving 51% top-1 accuracy and seven times the speedup in AlexNet [13]. The authors of [14] demonstrate a binarized neural network (BNN) where both weights and activations are binarized. During the forward pass, a BNN drastically reduces memory accesses and replaces most arithmetic operations with bit-wise operations. Ref. [14] has proven that, by using their binary matrix multiplication kernel, the results achieve 32 times the compression ratio and improve performance by seven times with the MNIST, CIFAR-10 and SVHN data sets. However, substantial accuracy loss (up to 28.7%) has been observed by [15]. The research in [15] has addressed this drawback by deploying a full-precision norm layer before each Conv layer in XNOR-Net. XNOR-Net applies binary values to both inputs and convolutional layer weights and is capable of reducing the computation workload by approximately 58 times, with 10% accuracy loss in ImageNet [15]. Overall, these networks can free edge devices from the heavy workload caused by computations using integer numbers, but the loss of accuracy needs to be properly managed. This reduction in accuracy loss has been improved in CoopNet [16]. Similar to the concept of multi-precision CNN in [17], CoopNet [16] applies two convolutional models: a binary net BNN with faster inference speed and an integer net INT8 with relatively high accuracy to balance the model's efficiency and accuracy. On low-power Cortex-M MCUs with limited RAM (\(\leq\) 1 MB), Ref. [16] achieved around three times the compression ratio and 60% of the speed-up while maintaining an adequate accuracy level on the CIFAR-10, GSC and FER13 datasets. In contrast to CoopNet, which applies the same network structures for primary and secondary networks, we apply a much simpler structure for secondary networks in which each of them is trained to identify one category in the HAR task. This optimization results in a configuration that can achieve around 80% speed-up and energy-saving with a similar accuracy level across all the evaluated MCU platforms. Based on XNOR-Net, Ref. [18] constructed a pruned-permuted-packed network that combines binarization with sparsity to push model size reduction to very low limits. On the Nucleo platforms
and Raspberry Pi, 3PXNet achieves a reduction in the model size by up to 38\(\times\) and an improvement in runtime and energy of 25\(\times\) compared to already compact conventional binarized implementations with a reduction in accuracy of less than 3%. TF-Net is an alternative method that chooses ternary weights and four-bit inputs for DNN models. Ref. [19] provides this configuration to achieve the optimal balance between model accuracy, computation performance, and energy efficiency on MCUs. They also address the issue that ternary weights and four-bit inputs cannot be directly accessed due to memory being byte-addressable by unpacking these values from the bitstreams before computation. On the STM32 Nucleo-F411RE MCU with an ARM Cortex-M4, Ref. [19] achieved improvements in computation performance and energy efficiency of 1.83\(\times\) and 2.28\(\times\), respectively. Thus, 3PXNet/TF-Net can be considered orthogonal to our 'big-little' research since they could be used as alternatives to the 8-bit integer models considered in this research. A related architecture to our approach called BranchyNet with early exiting was proposed in [20]. This architecture has multiple exits to reduce layer-by-layer weight computation and I/O costs, leading to fast inference speed and energy saving. However, due to the existence of multiple branches, it suffers from a huge number of parameters, which would significantly increase the memory requirements in edge devices.
The configuration of primary and secondary neural networks has been proposed for accelerating the inference process on edge devices in recent years. Ref. [17; 21] constructed 'big' and 'little' networks with the same input and output data structure. The 'big' network is triggered by their score metric generated from the 'little' network. A similar configuration has also been proposed by [22], but their 'big' and 'little' networks are trained independently. 'Big' and 'little' networks do not share the same input and output data structure. Ref. [22] proposed a heterogeneous setup deploying a 'big' network on state-of-the-art edge neural accelerators such as NCS2, with a 'little' network on near-threshold processors such as ECM3531 and Apollo3. Ref. [22] has successfully achieved 93% accuracy and low energy consumption of around 4 J on human activity classification tasks by switching this heterogeneous system between 'big' and 'little' networks. Ref. [22] considers heterogeneous hardware, whereas our approach uses the 'big-little' concept but focuses on deploying all the models on a single MCU device. In contrast to how [22] deployed 'big' and 'little' models on the NCS2 hardware accelerator and near-threshold processors separately, we deploy both neural network models on near-threshold MCU for activity classification tasks. A switching algorithm is set up to switch between 'big' and 'little' network models to achieve much lower energy costs but maintain a similar accuracy level. A related work [23] has performed activity recognition tasks with excellent accuracy and performance by using both convolutional and long short-term memory (LSTM) layers. Due to the flash memory size of MCU, we decided not to use the LSTM layers which have millions of parameters as shown in [23]. The proposed adaptive system is suitable for real-world tasks such as human activity classification in which activities do not change at very high speeds. A person keeps performing one action for a period of time, typically in the order of tens of seconds [24], which means that to maintain the system at full capacity (using the primary 'big' network to perform the inference) is unnecessary. Due to the additional inference time and computation consumed by the primary network, the fewer the number of times the primary network gets invoked, the faster the inference process will be and the lower the energy requirements [16; 17; 21; 22].
### Frameworks for Low-Power Edge AI
Over the last few years, a number of frameworks have appeared to ease the deployment of neural network models on edge devices with limited resources. In [25], a framework is provided called FANN-on-MCU specifically for the fast deployment of multi-layer perceptrons (MLPs) on low-power MCUs. This framework supports not only the very popular ARM Cortex-M series MCUs, but also the RISC-V parallel ultra-low power (PULP) processors. The results
in [25] show that the PULP-based 'Mr.Wolf' SoC can reach up to 7.1\(\times\) the speedup with respect to a single-core implementation and 13.5\(\times\) the speedup over the ARM Cortex-M4. Moreover, by using FANN-on-MCU, a relatively big neural network with 103,800 MAC operations can be executed within 17.6 ms with an energy consumption of 183 \(\upmu\)J on a Nordic nRF52832 MCU with one ARM Cortex-M4. The same neural network applied on 'Mr.Wolf' with eight RISC-V-based RI5CY cores takes less than 1 ms to consume around 50 \(\upmu\)J [25]. Similar to FANN-on-MCU, Ref. [26] delivers a fast deployment framework for MCUs called neural network on microcontroller (_NNoM_) which supports more complex model topologies such as ResNet and DenseNet from Keras. A user-friendly API and high-performance backend selections have been built for embedded developers to deploy Keras models on low-power MCU devices. There are also deployment frameworks developed by commercial companies targeting low-power edge devices. For example, Google focuses on low-power edge AI with the popular _TensorFlow Lite_ framework [27]. Coupled with the model training framework _TensorFlow_, Google can provide a single solution from neural network model training to model deployment on edge devices. _STM32Cube.AI_ from STMicroelectronics [28] is also an AI deployment framework, but it is only designed around the STM family devices such as the STM32 Nucleo-L4R5ZI and STM32 Nucleo-F411RE. Eta Compute has created the _TENSAIFlow_ deployment framework to provide performance and efficiency optimizations for Eta-series MCU products such as ECM3531 and ECM3532 [29]. In our methodology, the lack of support for certain devices in some frameworks means that we have combined tools from different vendors. We have applied frameworks from [26; 27; 29] for model deployments on MCUs such as ECM3532 and STM32L4 (see Section 5 for details).
## 3 Low-Power Microcontroller Evaluation
Four commercially available microcontroller devices designed for energy-efficient applications from STMicroelectronics, Ambiq and Eta Compute are considered in this comparison. Table 1 shows the technical details of these four MCUs. Three of them (STM32L4R5ZI, Apollo2 Blue and SparkFun Edge (Apollo3 Blue)) are based on the Cortex-M4 microarchitecture with floating-point units (FPU) [4; 30; 31], while the ECM3532 is based on the Cortex-M3 microarchitecture with a 'CoolFlux' 16-bit DSP [5]. The 32-bit ARM Cortex-M3 and M4 are comparable microarchitectures, both having a three-stage pipeline and implementing the Thumb-2 instruction set with some differences in the number of instructions available. For example, additional 16/32-bit MAC instructions and a single-precision FPU are only available on the Cortex-M4.
The STM32 Nucleo-144 development board with the STM32L4R5ZI MCU is used as a comparison point; the main difference between this STM device and the other three is the power optimization method. The core supply voltage of 1 V for the STM device is significantly higher than the core voltage of the near-threshold devices from Ambiq and Eta Compute at only around 0.5 V. Theoretically, the sub-threshold core supply voltage can be as low as 0.3 V, which should be more power-efficient. However, at 0.3 V, the transistor switching time will be longer, which leads to a higher leakage current. The leakage can exceed 50% of the total power consumption for a threshold voltage level of around 0.2 V [32]. Therefore, in practice, choosing near-threshold voltage points instead of sub-threshold voltage points has been shown to be a more energy-efficient solution [32]. In order to optimize the energy usage based on the task requirements, the STM32L4 uses standard dynamic voltage and frequency scaling (DVFS) with predefined pairs of voltage and frequency, while the devices from Ambiq and Eta Compute apply adaptive voltage scaling (AVS), which is able to determine the voltage at a given frequency to handle the tasks at run-time using a feedback loop [33].
Comparing the datasheets, the STM32L4 has the highest clock frequency which results in an advantage in processing speed. Ambiq and Eta Compute's near-threshold devices only require about half of the core supply voltage of STM32L4. All considered processors are
equipped with limited flash sizes from 0.5 MB to 1 MB and around 300 KB of SRAM. This means that the neural network model deployed must be small enough to fit within the limited memory size. Therefore, we use the _TensorFlow_ framework and the _TensorFlow Lite_ converter to create a simple pre-trained CNN model designed for human activity recognition (HAR) from UCI [34] (as shown in Figure 1) to perform the initial energy evaluation of the four MCU devices.
The energy board X-NUCLEO-LPM01A from STMicroelectronics is used to evaluate the performance and energy consumption by measuring the current used by the target board under a given supply voltage of 3.3 V (lower core voltages are regulated internally in the device). The power consumption of the four tested boards is shown in Figure 2. The STM32L4 operates at a much higher power level, which is around six times that of the near-threshold processors. The near-threshold processors Apollo2, Apollo3 and ECM offer significantly lower power, consuming less than 5 mW at the normal frequency of 48 MHz and around 10 mW in the burst mode of 96 MHz. The reason why the SparkFun Edge (Apollo3) consumes more power than Apollo2 is that the Apollo3 core is highly integrated into the SparkFun Edge board with peripheral sensors and ports which cannot be disabled during the power evaluation. Therefore, the peripheral devices on the SparkFun Edge (Apollo3) are responsible for a component of the power consumption, which leads to a higher power than Apollo2 at each frequency level. Apollo2 and ECM3532 share a similar level of power consumption at 24 and 48 MHz. Apollo2 does not support running at a frequency higher than 48 MHz; therefore, there is no value for Apollo2 at the 96 MHz frequency point.
Figure 3 shows the execution time of the four tested processors for one inference of the pre-trained CNN model in Figure 1. Apollo2 is the slowest one and finishes inference using the longest amount of time at above 100 ms at 24 MHz frequency and around 50 ms at 48 MHz. The SparkFun Edge board (Apollo3) reduces the execution time by approximately 40% compared to Apollo2. It can even drop below 20 ms when operating in burst mode (96 MHz). STM32L4 is the second fastest among all devices due to its higher core supply voltage in Table 1 which enables faster transistor switching and processing speed. ECM3532 has the lowest execution times which are 28 ms at 24 MHz, 15 ms at 48 MHz and 8 ms at 96 MHz. The _TENSAIFlow_ compiler is responsible for significant optimization in the ECM3532 device.
Figure 4 indicates the energy consumption values observed using the X-NUCLEO-LPM01A energy measurement board. Since the power consumption of the standard MCU STM32L4 in Figure 2 is six times higher compared to the near-threshold MCUs and there is no obvious advantage in processing speed at the same frequency, STM32L4 is the worst device in terms of energy consumption for all operating frequencies from 24 to 96 MHz. SparkFun Edge (Apollo3) is slightly higher than Apollo2 at 24 and 48MHz due to the energy consumed by the peripheral equipment on board. ECM3532 achieves the minimum energy consumption at normal frequency points (24 and 48 MHz) in the energy test because it has better results in both power and time evaluations. However, when operating in the 96 MHz burst mode, ECM3532 requires more power to obtain a higher processing speed, resulting in a slight increase in energy consumption, and the same situation can be seen for the SparkFun Edge board.
Overall, compared to the STM32L4 reference point all three near-threshold MCUs have a significant advantage in power and energy consumption which is around 80% to 85% lower. Although the near-threshold MCUs are comparable with the standard MCU STM32L4 in terms of inference time, their lower core voltage supplies (Table 1) result in lower power (Figure 2) at the same frequency level. Therefore, in our model inference evaluation, the near-threshold MCU devices can achieve better results in energy consumption compared to STM32L4 at 24, 48 and 96 MHz. Thanks to the additional model optimization obtained with the _TENSAIFlow_ compiler provided by Eta Compute, ECM3532 offers a good balance between performance and energy efficiency to reach a lower execution time, enabling the lowest energy consumption for model inference from 24 to 96 MHz. In contrast, Apollo2, with a relatively slow processing speed, needs more time for model inference, which leads to higher values in energy consumption at 24 and 48 MHz. Due to the energy consumed by the inaccessible peripheral equipment on SparkFun Edge (Apollo3), this device consumes higher energy than Apollo2 (Figure 4).
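Because power is nearly constant per device while a configuration runs, the energy figures above follow directly from E = P × t. The helper below reproduces this bookkeeping; the numeric values are rough readings from Figures 2 and 3 at 48 MHz, included for illustration only.

```
def energy_mj(power_mw, latency_ms):
    """Per-inference energy in millijoules: E = P * t."""
    return power_mw * latency_ms / 1000.0

# Rough values read off Figures 2 and 3 at 48 MHz (illustrative only):
print(energy_mj(30.0, 20.0))   # STM32L4-like: ~0.6 mJ
print(energy_mj(5.0, 15.0))    # ECM3532-like: ~0.075 mJ
```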
Figure 3: MCU initial evaluation in terms of time cost.
Figure 2: MCU initial evaluation in terms of power consumption.
## 4 Adaptive Neural Network Methodology
To create the adaptive neural network system, we employ Python version 3.6.8 and _TensorFlow_ 1.15 with their dependencies installed on a desktop PC with an Intel(R) Core(TM) i7-10850H CPU at 2.70 GHz, an NVIDIA GeForce MX250 GPU, and 16 GB RAM. There are several framework alternatives for training the neural networks, such as PyTorch and Caffe. For reasons of MCU compatibility and stability, our approach uses _TensorFlow_ 1.15 to train the primary and secondary network models. After that, we use the _TensorFlow Lite_ and _NNoM_ converters to convert the models from single-precision floating-point (FP32) to the unsigned 8-bit integer (UINT8) format, which can be deployed on the MCUs.
We consider human activity recognition using the UCI data set [34] as our raw data set. This application is a demonstrator which assumes that the activity will remain constant for a short period of time before being replaced by the next activity. To save energy via a reduction in execution time, we propose the adaptive neural network system which is able to disable the primary model and activate a secondary model when the activity remains unchanged. Therefore, we aim at achieving both latency and energy reductions without affecting prediction accuracy.
The UCI-HAR data set uses a body accelerometer, body gyroscope, and total accelerometer with three axes to provide body information for six actions (SITTING, STANDING, LAYING, WALKING, WALKING_UPSTAIRS, and WALKING_DOWNSTAIRS) performed by a group of 30 volunteers. All the data have been sampled in fixed-width sliding windows of 128 sampling points and they have been randomly partitioned into two sets, with 70% of data samples used for training and 30% used for testing. Therefore, we have a training data shape of (7352, 128, 3, 3), and a testing data shape of (2947, 128, 3, 3). We have evaluated the accuracy as shown in Figure 5 by applying the test data from these three sensors to the secondary network. The total accelerometer sensor shows the best overall accuracy. Thus, this sensor is selected for the secondary network inference. The training and testing data sets from UCI-HAR use
Figure 4: MCU initial evaluation in terms of energy consumption.
floating-point values with a wide range, so before training the models, all the data are rescaled to quantized integer values in the range [-128, 127].

### 'Big' + Six 'Little' Configuration

Taking the three sensor signals
from the UCI-HAR data set to classify six activities, the 'big' network has three inputs, resulting in around 9000 parameters in total. Convolutional 1D layers and max-pooling layers from Keras are stacked together to form the three 'big' branches in Figure 7. Then, the outputs from these branches are converged by a concatenate layer followed by a dense layer that has six neurons for six categories. The data shape of each sensor is (7352, 128, 3) which means we have 7352 data samples with a length of 128 for each axis. The data set is labelled from 0 to 5 to represent each activity for the training and testing processes in the 'big' network.
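A minimal sketch of assembling the (N, 128, 3, 3) input tensor from the raw UCI-HAR 'Inertial Signals' files is given below; the directory layout follows the public data set release, while the loader itself is illustrative code rather than part of the deployed application.

```
import numpy as np

SENSORS = ('body_acc', 'body_gyro', 'total_acc')

def load_sensor(prefix, split):
    """Stack the x/y/z axis files of one sensor into shape (N, 128, 3)."""
    axes = [np.loadtxt(f'UCI HAR Dataset/{split}/Inertial Signals/'
                       f'{prefix}_{axis}_{split}.txt') for axis in 'xyz']
    return np.stack(axes, axis=-1)

# Shape (7352, 128, 3, 3): samples x window x axes x sensors
x_train = np.stack([load_sensor(s, 'train') for s in SENSORS], axis=-1)
y_train = np.loadtxt('UCI HAR Dataset/train/y_train.txt', dtype=int) - 1
```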
Each 'little' network only classifies two categories by using several convolutional 1D layers and max-pooling layers with 184 parameters in total. Therefore, based on the results in Figure 5, only the total accelerometer sensor which achieves the best overall accuracy is selected as the input for the 'little' network. The output of the 'little' network is a dense layer with two neurons for two categories as seen in Table 2. Due to the limited size of the UCI-HAR data set [34], we have less than 2000 data elements for each activity category. Therefore, we use all of them and convert the data labels from six categories to two for training the 'little' model. Particularly, for each 'little' model, the labels of corresponding activity are set to number 1, while the others are set to number 0. Finally, we can generate the models in the Keras format.
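Converting the six-class labels into the binary targets used to train one 'little' detector can be done as below; `y_train` is the 0-5 label vector from the loading sketch above, and the mapping of label 0 to a particular activity is an illustrative assumption.

```
import numpy as np

def binarize_labels(y, target_activity):
    """1 where the sample belongs to the target activity, 0 elsewhere."""
    return (y == target_activity).astype(np.int32)

# e.g. targets for the 'little' detector of activity 0 (illustrative)
y_little = binarize_labels(y_train, target_activity=0)
```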
Figure 6: The processing steps (**left**) and the flow chart (**right**) for the ‘big’ + six ‘little’ configuration of the adaptive neural network system. In the left figure, dark blue and brown represent two ‘little’ network models corresponding to the input activities. In the right figure, the dotted line means only one ‘little’ network model of six is invoked at a time.
Figure 7: ‘Big’ (left) and ‘little’ (right) model structures in Keras.
### 'Big' + 'Dual' Configuration
The 'big' + 'dual' configuration is an alternative method of the adaptive neural network system. We replace the six 'little' models with one small neural network called 'dual'. Compared to the 'big' + six 'little' model, this one only consists of one primary and one secondary network model instead of one + six networks. In order to replace six 'little' networks designed for six categories with only one 'dual' network, the data sample for the previous activity is required to be stored in a register and compared with the current activity data sample as shown in Figure 8. Then, the 'dual' network can recognize these patterns to distinguish whether the current activity changes or not. For example, the first activity is classified as STANDING by the 'big' network, and the second activity of SITTING is compared with the one previously stored by the 'dual' network. If the 'dual' network detects these two activities are not the same, the 'big' network will be triggered for further inferences. Otherwise, the 'dual' network keeps active for time and energy saving as shown in Figure 8.
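The control flow of Figure 8 can be summarised in a few lines of Python, as in the sketch below; `big_infer` and `dual_detects_change` are hypothetical stand-ins for the quantized model invocations, since the deployed switching logic is written in C.

```
def big_infer(sample):                # placeholder for the quantized 'big' CNN
    return 0                          # would return one of the six labels

def dual_detects_change(prev, cur):   # placeholder for the 'dual' network
    return True                       # would return True on 'activity change'

prev_sample, prev_label = None, None

def adaptive_step(sample):
    """One step of the 'big' + 'dual' control flow of Figure 8 (sketch)."""
    global prev_sample, prev_label
    if prev_sample is None or dual_detects_change(prev_sample, sample):
        prev_label = big_infer(sample)    # wake the six-class 'big' model
    # otherwise the activity is unchanged: reuse the stored label
    prev_sample = sample
    return prev_label
```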
**Model: 'Big'**

| Layer (Type) | Output Shape | Param# |
|---|---|---|
| model_input1 | [(None, 128, 3)] | 0 |
| model_input2 | [(None, 128, 3)] | 0 |
| model_input3 | [(None, 128, 3)] | 0 |
| conv1d | (None, 128, 4) | 40 |
| conv1d_5 | (None, 128, 4) | 40 |
| conv1d_10 | (None, 128, 4) | 40 |
| conv1d_1 | (None, 64, 8) | 104 |
| conv1d_6 | (None, 64, 8) | 104 |
| conv1d_11 | (None, 64, 8) | 104 |
| conv1d_2 | (None, 32, 16) | 400 |
| conv1d_7 | (None, 32, 16) | 400 |
| conv1d_12 | (None, 32, 16) | 400 |
| conv1d_3 | (None, 16, 32) | 1568 |
| conv1d_8 | (None, 16, 32) | 1568 |
| conv1d_13 | (None, 16, 32) | 1568 |
| conv1d_4 | (None, 8, 8) | 776 |
| conv1d_9 | (None, 8, 8) | 776 |
| conv1d_14 | (None, 8, 8) | 776 |
| concatenate | (None, 96) | 0 |
| model_output | (None, 6) | 582 |

Total params: 9246

**Model: 'Little'**

| Layer (Type) | Output Shape | Param# |
|---|---|---|
| model_input | [(None, 128, 3)] | 0 |
| conv1d | (None, 128, 4) | 40 |
| conv1d_1 | (None, 64, 4) | 52 |
| conv1d_2 | (None, 32, 2) | 26 |
| model_output | (None, 2) | 66 |

Total params: 184

Table 2: 'Big' and 'little' model parameter details. The pooling layers are hidden. For more info, see Figure 7.
The 'big' network is the same as the one introduced in the previous configuration, while the secondary 'dual' network has been reconstructed as shown in Figure 9 and Table 3. In the same way as for the 'little' network, the single input data from the total accelerometer sensor are selected for the 'dual' network. Therefore, the input data shape of the 'dual' network becomes (1, 128, 3, 2), which contains two adjacent input data samples. As there is a significant increase in the input data shape, the number of parameters increases from 184 in the 'little' network to 300 in the 'dual' network.
Figure 8: The processing steps (**left**) and the flow chart (**right**) for the ‘big’ + ‘dual’ configuration of the adaptive neural network system. In the left figure, the two input data blocks represent a pair of adjacent data samples required by the ‘dual’ network. In the right figure, registers store the previous data and label for the current process in the ‘dual’ network.
Figure 9: ‘Dual’ model structures in Keras.
### 'Big' + Distance Configuration
Finally, we consider whether the wake-up module in the adaptive system can be replaced by a simpler algorithm instead of using neural networks such as 'little' and 'dual' networks. This configuration, which is similar to the second configuration, replaces the 'dual' network model with a distance calculator measuring the difference in the distance between two adjacent input samples. In order to pick up on an activity change, a distance calculator using Minkowski distance and Mahalanobis distance is applied to trigger the 'big' network when the difference in distance reaches a pre-set threshold value as shown in Figure 10.
\[D(x,y)=\left(\sum_{i=1}^{n}|x_{i}-y_{i}|^{p}\right)^{1/p} \tag{1}\]
The Euclidean distance is a typical metric that measures the real distance of two points in N-dimensions. As shown in Equation (1), Minkowski distance is a generalized format of Euclidean distance. When \(p=2\), it becomes equivalent to the Euclidean distance, while it becomes equivalent to the Manhattan distance when \(p=1\). Moreover, the Mahalanobis distance measures the distance of a target point P and a mean point of a distribution D. This distance increases if point P moves away along each principal component axis of D. The Mahalanobis distance becomes Euclidean distance when these axes are scaled to have a unit variance [35,36].
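For concreteness, a numpy sketch of the two distance measures discussed above (Minkowski for p = 1, 2, and Mahalanobis against a reference distribution) is given below; the input vectors are random placeholders.

```
import numpy as np

def minkowski(x, y, p):
    """Equation (1): p=1 gives Manhattan, p=2 gives Euclidean distance."""
    return np.sum(np.abs(x - y) ** p) ** (1.0 / p)

def mahalanobis(x, mean, cov_inv):
    """Distance of point x from a distribution with the given mean and
    inverse covariance matrix."""
    d = x - mean
    return float(np.sqrt(d @ cov_inv @ d))

a, b = np.random.rand(384), np.random.rand(384)
print(minkowski(a, b, 1), minkowski(a, b, 2))
```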
The input data shape which is (1, 128, 3, 2) for the 'dual' network should be stretched into (1, 384, 2) where the value two means that two adjacent data samples are required by the distance calculator. The calculator then measures the Minkowski distance between these two
Figure 10: The processing steps (**left**) and the flow chart (**right**) for the ‘big’ + distance configuration of the adaptive neural network system. In the left figure, the two input data blocks represent a pair of adjacent data samples required by the distance calculator. In the right figure, the registers store the previous data and label for the current process in the distance calculator.
**Model: 'Dual'**

| Layer (Type) | Output Shape | Param# |
|---|---|---|
| model_input | [(None, 384, 2)] | 0 |
| conv1d | (None, 384, 4) | 28 |
| conv1d_1 | (None, 192, 4) | 52 |
| conv1d_2 | (None, 96, 2) | 26 |
| model_output | (None, 2) | 194 |

Total params: 300
Table 3: ‘Dual’ model parameter details. The pooling layers are hidden. For more info, see Figure 9.
adjacent data samples following Equation (1) for both cases of \(p=1\) and \(p=2\). Mahalanobis distance requires the covariance matrix of the data set before the calculation. To wake up the 'big' model, multiple thresholds can be selected to achieve multiple sensitivities. The 'big' model is only triggered when the distance between the previous data sample and the current one is beyond the pre-set threshold. Therefore, a lower threshold value will reach a higher inference accuracy because the 'big' network will be invoked more frequently. Conversely, a higher threshold value means that the 'big' network is invoked fewer times, leading to a shorter inference time.
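The resulting wake-up rule is then a single comparison against the threshold. The sketch below reuses `minkowski` from the earlier distance sketch and the hypothetical `big_infer` stand-in; the threshold of 8000 is the Manhattan-distance value evaluated in Section 6.

```
THRESHOLD = 8000.0   # Manhattan-distance threshold evaluated in Section 6
prev_sample, prev_label = None, None

def distance_step(sample):            # sample: current 384-point vector
    global prev_sample, prev_label
    if prev_sample is None or minkowski(prev_sample, sample, 1) > THRESHOLD:
        prev_label = big_infer(sample)    # distance too large: wake 'big'
    prev_sample = sample
    return prev_label
```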
## 5 Neural Network Microcontroller Deployment
The neural network models in the Keras format are quantized to the UINT8 format to reduce the amount of memory needed before MCU deployment. According to Equation (2) in [37], as shown below, the real value is the input value of the training process in the range of [-128, 127], while the quantized value is the target value after the quantization, which is in the UINT8 range of [0, 255]. The mean and the standard deviation values can be calculated as 128 and 1, respectively. Finally, the model in a quantized format is obtained.
\[real\_value=(quantized\_value-mean\_value)/std\_dev\_value \tag{2}\]
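In code, Equation (2) and its inverse amount to the following pair of helpers; the mean of 128 and standard deviation of 1 are the values stated above.

```
import numpy as np

MEAN, STD = 128.0, 1.0   # quantization constants from Equation (2)

def quantize(real):
    """Map FP32 values in [-128, 127] to UINT8 values in [0, 255]."""
    return np.clip(np.round(real * STD + MEAN), 0, 255).astype(np.uint8)

def dequantize(q):
    """Invert Equation (2): real = (quantized - mean) / std."""
    return (q.astype(np.float32) - MEAN) / STD
```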
We use the available data samples from UCI-HAR [34] instead of real-time data to perform a fair comparison across the different platforms. Thus, when the MCU runs the application, the stored data and network models can be accessed correctly. Moreover, the model-switching algorithm for the adaptive system introduced in Section 4 is implemented at the C code level instead of the network model layer level. The 'big' and 'little' models are capable of being invoked independently, which means the adaptive system is more flexible and effective at finding the balance between performance and energy consumption. Finally, before flashing the target boards, the application must be compiled to an executable binary using cross-compilation tools for GCC [38], ARM Compiler [39] and the _TENSAIFlow_ compiler from Eta Compute [29]. The model deployment process is shown in Figures 11 and 12.
### STM32L4R5ZI
_STM32Cube.AI_ from STMicroelectronics [28] is a framework designed to optimize STM devices such as STM32L4. However, because it is a proprietary environment, the switching algorithm between primary and secondary networks cannot be deployed at the C code level. On the other hand, _NNoM_ [26] has been designed with a focus on general-purpose and flexible deployment on different MCU boards. The _NNoM_ converter is able to convert a pre-trained model in the Keras format to C code, and its neural network library can be used to deploy the model. Therefore, the _NNoM_ framework is selected for model deployment on STM32L4 instead of _STM32Cube.AI_ (see Figure 12).
The STM32Cube SDK version 1.17.0 from STMicroelectronics which contains utility tools and example projects, is required to drive the STM32L4R5ZI MCU board. Keil uVision IDE from ARM is chosen to set up a coding environment to support STM32L4. The driver pack for STM32L4 is required to be installed by the pack installer of Keil. The STM32L4 CN1 port is connected with a desktop PC by using a micro-USB cable. Then, the ST-Link debugger can be selected under the target debug page and the STM32L4 device can then be connected and detected by the PC. Alternatively, if the connection is unsuccessful, STM32 ST-LINK Utility from STMicroelectronics can erase the board to avoid software conflicts.
After the NN models are trained with Keras (_TensorFlow_ v1.15), they are required to be quantized by applying the _NNoM_ converter command shown in Listing 1. Then, the header file containing the model weights can be generated using the function below. Before building the project, the weight header file, the input data file and the files from the _NNoM_ library should be added via Keil Manage Project Items. Finally, the project can be built and flashed to the development board. To observe the output from the debug viewer, the core clock under the trace page of the target debug setting should match the operating clock of the device.
```
generate_model(model, x_test, name='weight.h')
```
Figure 11: The map of the steps of neural network model deployment on target MCU boards. Black frames represent trained models, brown frames represent the library source codes used, while red ones represent MCU boards.
Figure 12: Comparison of the software used in each deployment phase for different MCUs.
### Apollo2 Blue
AmbiqSuite SDK version 2.2.0 from Ambiq supports the model deployment on Apollo2 Blue. Keil uVision IDE from ARM is used to set up a coding environment. After installing the driver pack for Apollo2, the Apollo2 board is connected to the PC by using a micro-USB cable and selecting J-Link under the target debug page of Keil as shown in Figure 11. Similar to the case of STM32L4, Apollo2 is not supported by _TensorFlow Lite_ and _TENSAIFlow_. Thus, the pre-trained models in Keras format are converted into a quantized format using the _NNoM_ converter as shown in Listing 1. Then, the model weights and data header files and the _NNoM_ library should be added into the project by Keil Manage Project Items. After building and flashing the project to the target board, the Keil debug viewer can be used to observe the model outputs.
### SparkFun Edge (Apollo3 Blue)
_TensorFlow_ from Google is not only capable of training neural network models, but also includes _TensorFlow Lite_ to deploy network models on edge devices such as MCUs [27]. The trained network model saved in the Keras format can be converted into the quantized format using the _TensorFlow Lite_ converter in Listing 2 and 3. The library source code in C and board SDK files are provided to support the model deployment on MCUs (see Figure 11). We use _TensorFlow Lite_ to support model deployment for the MCU development board of SparkFun Edge (Apollo3).
AmbiqSuite SDK version 2.2.0 contains utility tools and drivers from Ambiq to support SparkFun Edge (Apollo3 Blue). _TensorFlow Lite_ version 1.15 is used to convert Keras models using floating-point parameters into TFLite models with UINT8 parameters. Using the corresponding command lines in Listings 2 and 3, the quantized model files are generated and ready to be deployed. The TFLite model is converted into a hexadecimal file, which can be read by the _TensorFlow Lite_ library, using the hex dump command 'xxd'. Finally, we connect Apollo3 to the PC with a micro-USB cable and flash the binary file to the target board using the flash utility provided by the AmbiqSuite SDK.
**Listing 2** TensorFlow Lite converter command lines for 'big' model quantization.

```
tflite_convert \
  --keras_model_file=./Output_Models/${MODELNAME}.h5 \
  --output_file=./Output_Models/${MODELNAME}.tflite \
  --inference_type=QUANTIZED_UINT8 \
  --input_shapes=1,128,3:1,128,3:1,128,3 \
  --input_arrays=model_input1,model_input2,model_input3 \
  --output_arrays=model_output/BiasAdd \
  --default_ranges_min=0 --default_ranges_max=255 \
  --mean_values=128,128,128 --std_dev_values=1,1,1 \
  --change_concat_input_ranges=false \
  --allow_nudging_weights_to_use_fast_gemm_kernel=true \
  --allow_custom_ops
```
**Listing 3** TensorFlow Lite converter command lines for 'little' model quantization.
```
tflite_convert \
  --keras_model_file=./Output_Models/${MODELNAME}.h5 \
  --output_file=./Output_Models/${MODELNAME}.tflite \
  --inference_type=QUANTIZED_UINT8 \
  --input_shapes=1,128,3 \
  --input_arrays=model_input \
  --output_arrays=model_output/BiasAdd \
  --default_ranges_min=0 --default_ranges_max=255 \
  --mean_values=128 --std_dev_values=1 \
  --change_concat_input_ranges=false \
  --allow_nudging_weights_to_use_fast_gemm_kernel=true \
  --allow_custom_ops
```
### ECM3532
_TENSAIFlow_ from Eta Compute is a framework designed to deploy pre-trained network models on Eta products such as ECM3531 and ECM3532 [29]. It is highly optimized for Eta Compute products to achieve the best balance between performance and efficiency. Unlike _TensorFlow_, this framework is not capable of training neural network models; it only provides model conversion and deployment after training. After the pre-trained model is converted into the quantized TFLite format by _TensorFlow Lite_, _TENSAIFlow_ converts the TFLite model to C code which can be invoked with the library source code. The ECM3532 development board is not supported by _NNoM_ or _TensorFlow Lite_; therefore, the _TENSAIFlow_ SDK version 2.0.2, which contains the _TENSAIFlow_ converter and neural network library from Eta Compute, is required to support model deployment on ECM3532. As shown in Figure 11, the pre-trained model from Keras (_TensorFlow_ v1.15) is first quantized to the UINT8 format using the _TensorFlow Lite_ converter (Listings 2 and 3) and then converted to a readable format for the _TENSAIFlow_ library using the _TENSAIFlow_ converter (Listing 4). Then, we build the project and flash it to the target ECM3532.
```
tensaiflow_compile \
  --tflite_file ./model_zoo/${MODELNAME}.tflite \
  --out_path ../../Applications/${PROJECTNAME}/src/ \
  --weights_path ../../Applications/${PROJECTNAME}/include/
```

**Listing 4** TENSAIFlow converter command lines for model conversion.
## 6 Results and Discussion
The accuracy of the different configurations and of the original 'big'-only method on the full HAR test data set is shown in Figure 13. We do not consider different random initialization seeds in this work, but we use the same trained network for the different MCUs to perform a fair comparison. We also choose the learning rate carefully, using a relatively slow rate and SGDR to prevent the model from settling into a poor local optimum instead of the global one. We use holdout cross-validation to divide the whole data set: 70% for the training data set, 15% for the validation data set and 15% for the testing data set.
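The holdout split described above can be reproduced with a few lines of numpy; the random seed is an arbitrary choice for illustration.

```
import numpy as np

def holdout_split(x, y, seed=0):
    """70% train / 15% validation / 15% test, as used in this section."""
    idx = np.random.RandomState(seed).permutation(len(x))
    n_tr, n_val = int(0.70 * len(x)), int(0.15 * len(x))
    tr, val, te = np.split(idx, [n_tr, n_tr + n_val])
    return (x[tr], y[tr]), (x[val], y[val]), (x[te], y[te])
```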
In Figure 13, the 'big'-only configuration has 91.3% accuracy, but the model has a large invocation count that results in significant latency. The 'big' + six 'little' configuration reaches a comparable level of accuracy while the number of times the 'big' model is invoked is reduced from 2947 to 406, cutting the inference time of the 'big' model by two-thirds. The 'big' + 'dual' configuration cannot reach a similar accuracy due to the low accuracy of the secondary 'dual' network. The 'big' + distance configuration achieves a relatively low testing accuracy and it invokes the 'big' network 669 times in 2947 data samples.
In order to establish the same testing environment and provide the same test samples for all MCU boards, we choose to apply data samples from the UCI-HAR test data set rather than a real-time signal from the board sensors. Due to memory limitations, only 60 data samples from the UCI-HAR test data set can be stored on the MCU boards together with the model and switching algorithm. Therefore, we select ten data samples for each activity and compose them into a fixed sequence of activities I to VI. This means that there are five activity changes in the test data sequence. We have verified that the classification results obtained on these 60 samples are equivalent to the ones obtained with the whole data set, although there are some negligible differences between devices due to the different toolchains. The following evaluations are performed at a working frequency of 48 MHz without debugging. Four configurations of the adaptive neural network are evaluated below:
### 'Big' Only
As shown in Figure 13, after removing the LSTM layers used in [23], we still maintain an accuracy level of around 90% for the 'big' network on the activity classification task compared to the results in [22,23]. The original 'big' model method performs 2947 inferences on all test data samples. Due to the large topology of the 'big' model and a large number of inferences, the 'big'-only configuration has the highest execution time. This can be seen in Figure 14: for all four MCUs working at the same operating frequency of 48 MHz, the latency of the 'big'-only model is the highest among all four configurations. The power consumption values for each configuration show negligible variations for each MCU in Figure 15. Therefore, the energy consumption for each configuration is only affected by the inference time. As shown in Figure 15, the 'big'-only configuration consumes the highest value of energy.
Figure 13: The accuracy and the ‘big’ inference counts for four configurations (quantized TFLite format) on the test data set (2947 samples) have been evaluated on a PC.
Figure 14: The time evaluation of four adaptive configurations on MCU boards with the ‘big’ inference counts. A total of 60 data samples extracted from the UCI-HAR test data set are tested to form the evaluation.
### 'Big' + Six 'Little'
The difference between the inference time of the 'big'-only and 'big' + six 'little' configurations is shown in Figure 14. The inference latency of the 'big' model is around 12 times longer than the latency of the 'little' model. Therefore, the lower the number of times the 'big' network gets invoked, the higher the efficiency of the system is. In this configuration, six 'little' models are applied to save time by restricting the 'big' inference count to around ten times. In all MCU evaluations in Figure 14, the time result of the 'big' + six 'little' configuration is the lowest and this reduces the execution time by around 80% compared to the original 'big'-only configuration and around 50% compared to the others. For all four configurations, the power is largely equivalent, as can be seen in Figure 15. Due to the significant advantage of the 'big' + six 'little' configuration in terms of execution time, this configuration achieves energy savings of around 80% compared to the original 'big' method on all MCUs.
### 'Big' + 'Dual'
In contrast to the 'big' + six 'little' configuration, the 'big' + 'dual' configuration is not restricted by the number of categories that need to be classified. The number of 'little' networks in the previous configuration is determined by the number of categories, which leads to difficulties in model deployment if the number of categories is large, such as in the CIFAR-100 data set. By applying a network focusing on detecting activity changes, the 'big' + 'dual' configuration can pick up activity changes by comparing the current activity and the previous activity. However, two deficiencies appear in this configuration. Firstly, in the 7352 training data samples, there are only 280 cases of activity switching. We extract 280 data samples with an 'activity change' label and 7072 samples with an 'activity continuation' label to train
Figure 15: The power and energy evaluation of four adaptive configurations on MCU boards. A total of 60 data samples extracted from the UCI-HAR test data set are tested to form the evaluation.
the 'dual' model, resulting in an unbalanced training data set. Secondly, there is an error propagation problem which occurs when the 'dual' classification is incorrect in the case of an 'activity change'. For example, in Figure 16, the 'dual' model has an error at the seventh data sample where the activity switches from I to III, skipping the 'big' inference and misleading the adaptive system to output activity I. After that, the 'dual' model has no errors for the rest of the data, detecting no activity changes. The adaptive system therefore continues to propagate the output error because the seventh output is set to activity I instead of III. In the 'big' + six 'little' configuration, the 'big' model is also skipped at the seventh data sample because the 'little' model does not pick up the change (an error). However, after the next data input, the 'little' model is able to recognize that the activity is no longer activity I. Then, the 'big' model is invoked to output the correct activity label and the system recovers to a correct state.
Although the 'dual' model is able to solve the large-category issue, it is not sufficiently trained due to the unbalanced training data set. Because of the poor accuracy of the 'dual' model, the error propagation illustrated in Figure 16 occurs and the system fails to switch on the 'big' model for further inference when an activity change happens. This results in a minimal 'big' inference count but a relatively poor performance in terms of accuracy. Therefore, the overall accuracy of this adaptive system (around 60% for all test data on a PC) is lower compared to the other configurations, as shown in Figure 13. Furthermore, because the complexity of the 'dual' model is relatively high and the 'dual' model is activated continuously, the inference process has a higher complexity. Additionally, the combination of previous and current data samples for the 'dual' input needs to be pre-processed. Therefore, despite having the fewest 'big' inference counts (Figure 13), the latency and energy consumption double compared to the best configuration of the 'big' + six 'little' models, as shown in Figures 14 and 15.
### 'Big' + Distance
As shown in Figure 17, the Manhattan and Euclidean distances in the 'big' + distance configuration perform poorly when distinguishing activities I to III, which are WALKING, WALKING_UPSTAIRS, and WALKING_DOWNSTAIRS. The distance between data samples of the same activity can exceed the distance between samples of different activities (see data 8 to 10 in Figure 17). Therefore, a clear threshold boundary cannot be set to separate the case of 'activity change' from unchanged activities due to these indistinguishable values.
Figure 16: The output comparison of two configurations when an error occurs at the moment of an ‘activity change’. Errors have been labelled in the color red. The results including the primary module, secondary module, and overall adaptive system have been shown below.
In the 'big' + distance configuration, a threshold point of 8000 for the Manhattan distance is selected for the evaluation in Figures 13 and 14. This threshold of 8000 triggers the 'big' model more frequently, so it can be considered sensitive. As with the 'big' + 'dual' model, the 'big' + distance model also suffers from the error propagation issue, which severely affects the overall accuracy. Compared to the 'big' + six 'little' configuration, this configuration achieves a relatively low accuracy level of around 76% with a higher number of 'big' invocations, as shown in Figure 13. Furthermore, this configuration has significant latency and energy costs, roughly double those of the 'big' + six 'little' configuration and similar to those of the 'big' + 'dual' configuration, as shown in Figures 14 and 15.
Overall and across all MCUs, our best adaptive network configuration, the 'big' + six 'little' configuration, achieves a high prediction accuracy level of around 90%, which is comparable to the original 'big'-only method. As discussed in the initial MCU evaluation of Section 3, ECM3532 achieves the highest processing speed, followed by STM32L4, SparkFun Edge (Apollo3) and Apollo2 (listed fastest to slowest). For a given configuration, the execution times in Figure 14 show that this ordering is consistent across all four MCU boards. For the 'big' + six 'little' configuration, the 'big' inference count is reduced by around 85% compared with the original method, achieving up to a 5\(\times\) speed-up on the MCUs. Since the MCU boards are in working mode when running the different configurations, the power consumption of the configurations is similar for each MCU, as shown in Figure 15. Due to the negligible differences between network configurations in terms of power, the distribution of the energy consumption of the configurations for each MCU follows the time cost distribution in Figure 14. As shown in Figure 15, across all devices, the 'big' + six 'little' configuration achieves energy savings of around 80% compared to the original 'big'-only method, and around 50% compared to the other two configurations. Furthermore, compared to a standard MCU running the 'big' network only, the best configuration, the 'big' + six 'little' model, coupled with the best state-of-the-art near-threshold hardware, can achieve a reduction in energy of up to 98%, which translates into a 62\(\times\) increase in the operating lifetime of a battery-powered activity-detection application.
## 7 Conclusions and Future Work

In this work, we have shown that combining adaptive 'big'/'little' neural network configurations with near-threshold MCUs results in significantly better energy and performance characteristics. The proposed algorithms can be successfully deployed on STM32L4R5ZI, Apollo2 Blue, SparkFun Edge (Apollo3 Blue) and ECM3532. The application, UCI-HAR, is representative of an activity recognition task that assumes that an activity will remain constant for some period of time before switching to a different activity. In order to save time and energy, we activate the secondary model with a faster inference speed to pause the primary model when the activity remains constant. The best adaptive network configuration, the 'big' + six 'little' configuration, has achieved a reduction in energy of 80% and a comparable level of prediction accuracy to the original method in the UCI-HAR test. The results prove that the proposed methods can deliver different levels of time-energy reduction at constant accuracy on all the devices we tested. Furthermore, coupled with near-threshold MCUs, the best configuration is able to increase battery life by up to 62\(\times\) on UCI-HAR compared to the original non-adaptive method using a standard MCU.
Future work involves extending the work to other application areas such as machine health monitoring and anomaly detection. In addition, we plan to investigate how the approach can be scaled to applications with a large number of possible output categories without an explosion in the memory requirements by using additional network hierarchies. Finally, a future research direction includes developing a framework that is able to automatically extract optimal 'little' configurations from a 'big' configuration in terms of overall accuracy and energy in order to replace manual analysis.
**Author Contributions:** Methodology, Z.S., N.H. and J.N.-Y.; software, Z.S. and J.N.-Y.; validation, Z.S.; resources, Z.S.; data curation, Z.S.; writing--original draft preparation, Z.S.; writing--review and editing, Z.S., N.H. and J.N.-Y.; visualization, Z.S.; supervision, J.N.-Y. All authors have read and agreed to the published version of the manuscript.

**Funding:** This work was partially funded by the Royal Society INF/R2/192044 Machine Intelligence at the Network Edge (MINET) fellowship.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Our work can be found at https://github.com/DarkSZChao/Big-Little_NN_Strategies (accessed on 9 March 2022).

**Conflicts of Interest:** The authors declare no conflicts of interest.
The following abbreviations are used in this manuscript:
| Abbreviation | Meaning |
|---|---|
| MCU | Microcontroller Unit |
| IoT | Internet of Things |
| CNN | Convolutional Neural Network |
| UCI-HAR | UCI Human Activity Recognition |
|
2307.05299 | Discovering Symbolic Laws Directly from Trajectories with Hamiltonian Graph Neural Networks | The time evolution of physical systems is described by differential equations, which depend on abstract quantities like energy and force. Traditionally, these quantities are derived as functionals based on observables such as positions and velocities. Discovering these governing symbolic laws is the key to comprehending the interactions in nature. Here, we present a Hamiltonian graph neural network (HGNN), a physics-enforced GNN that learns the dynamics of systems directly from their trajectory. We demonstrate the performance of HGNN on n-spring, n-pendulum, gravitational, and binary Lennard-Jones systems; HGNN learns the dynamics in excellent agreement with the ground truth from small amounts of data. We also evaluate the ability of HGNN to generalize to larger system sizes and to a hybrid spring-pendulum system that is a combination of two original systems (spring and pendulum) on which the models are trained independently. Finally, employing symbolic regression on the learned HGNN, we infer the underlying equations relating the energy functionals, even for complex systems such as the binary Lennard-Jones liquid. Our framework facilitates the interpretable discovery of interaction laws directly from physical system trajectories. Furthermore, this approach can be extended to other systems with topology-dependent dynamics, such as cells, polydisperse gels, or deformable bodies. | Suresh Bishnoi, Ravinder Bhattoo, Jayadeva, Sayan Ranu, N M Anoop Krishnan | 2023-07-11T14:43:25Z | http://arxiv.org/abs/2307.05299v1 | # Discovering Symbolic Laws Directly from Trajectories with Hamiltonian Graph Neural Networks
###### Abstract
The time evolution of physical systems is described by differential equations, which depend on abstract quantities like energy and force. Traditionally, these quantities are derived as functionals based on observables such as positions and velocities. Discovering these governing symbolic laws is the key to comprehending the interactions in nature. Here, we present a Hamiltonian graph neural network (Hgnn), a physics-enforced Gnn that learns the dynamics of systems directly from their trajectory. We demonstrate the performance of Hgnn on \(n\)-spring, \(n\)-pendulum, gravitational, and binary Lennard-Jones systems; Hgnn learns the dynamics in excellent agreement with the ground truth from small amounts of data. We also evaluate the ability of Hgnn to generalize to larger system sizes and to a hybrid spring-pendulum system that is a combination of two original systems (spring and pendulum) on which the models are trained independently. Finally, employing symbolic regression on the learned Hgnn, we infer the underlying equations relating the energy functionals, even for complex systems such as the binary Lennard-Jones liquid. Our framework facilitates the interpretable discovery of interaction laws directly from physical system trajectories. Furthermore, this approach can be extended to other systems with topology-dependent dynamics, such as cells, polydisperse gels, or deformable bodies.
Any system in the universe is always in a continuous state of motion. This motion, also known as the dynamics, is observed and noted in terms of the trajectory, which comprises the system's configuration (that is, positions and velocities) as a function of time. Any understanding humans have developed about the universe is through analyzing the dynamics of different systems. Traditionally, the dynamics governing a physical system are expressed as governing differential equations derived from fundamental laws such as energy or momentum conservation, which, when integrated,
provide the system's time evolution. However, these equations require the knowledge of functionals that relate abstract quantities such as energy, force, or stress with the configuration [1]. Thus, discovering these governing equations directly from the trajectory remains the key to understanding and comprehending the phenomena occurring in nature. Alternatively, several symbolic regression (SR) approaches have been used to discover free-form laws directly from observations [2, 3, 4]. However, the function space to explore in such cases is prohibitively large, and appropriate assumptions and constraints regarding the equations need to be provided to obtain a meaningful and straightforward equation [5, 6, 7].
Learning the dynamics of physical systems directly from their trajectory is a problem of interest in wide-ranging areas such as robotics, mechanics, biological systems such as proteins, and atomistic dynamics [8, 9, 10, 11, 12]. Recently, machine learning (ML) tools have been widely used to learn the dynamics of systems directly from their trajectories [13, 14, 15, 16, 17, 18, 19]. Specifically, there have been three broad approaches to this end, namely, data-driven, physics-informed, and physics-enforced approaches. Data-driven approaches develop models that learn the dynamics directly from ground-truth trajectories [13, 10, 12]. Physics-informed approaches rely on a loss function with two terms, a data loss and a physics loss, the latter derived from the governing differential equation [9]. In contrast, physics-enforced approaches infuse the inductive biases, in the form of the governing ordinary differential equations, directly into the formulation as a hard constraint. These approaches include Hamiltonian neural networks (Hnn) [20, 21, 22, 14], Lagrangian neural networks (Lnn) [15, 16, 17], and Graph Neural ODEs [23, 18, 24]. Adding the inductive bias in a physics-enforced fashion instead of as a soft constraint in the loss function can significantly enhance the learning efficiency while also leading to realistic trajectories in terms of conservation laws [14, 22, 25]. Additionally, combining these formulations with graph neural networks (Gnns) [26, 27, 28, 25] can lead to superior properties such as zero-shot generalizability to unseen system sizes and to hybrid systems unseen during training, and more efficient learning and inference. However, although efficient in learning the dynamics, these approaches remain black-box in nature with poor interpretability of the learned function, which raises questions about the robustness and correctness of the learned models [29].
Here, we present a framework combining Hamiltonian graph neural networks (Hgnn) and symbolic regression (SR), which enables the discovery of symbolic laws governing the energy functionals directly from the trajectory of systems. Specifically, we propose a Hgnn architecture that decouples kinetic and potential energies and, thereby, efficiently learns the Hamiltonian of a system directly from the trajectory. We evaluate our architecture on several complex systems such as \(n\)-pendulum, \(n\)-spring, \(n\)-particle gravitational, and binary LJ systems. Further, the modular nature of Hgnn enables the interpretability of the learned functions, which, when combined with SR, enables the discovery of the governing laws in a symbolic form, even for complex interactions such as binary LJ systems.
## Hamiltonian mechanics
Here, we briefly introduce the mathematical formulation of Hamiltonian mechanics that governs the dynamics of physical systems. Consider a system of \(n\) interacting particles whose positions at time \(t\) are represented in Cartesian coordinates as \(\mathbf{x}(t)=(\mathbf{x}_{1}(t),\mathbf{x}_{2}(t),...\mathbf{x}_{n}(t))\). The Hamiltonian \(H\) of the system is defined as \(H(\mathbf{p}_{\mathbf{x}},\mathbf{x})=T(\dot{\mathbf{x}})+V(\mathbf{x})\), where \(T(\dot{\mathbf{x}})\) represents the total kinetic energy and \(V(\mathbf{x})\) represents the potential energy of the system. The Hamiltonian equations of motion for this system in Cartesian coordinates are given by [30, 31, 32]
\[\dot{\mathbf{x}}=\nabla_{\mathbf{p}_{\mathbf{x}}}H,\qquad\dot{\mathbf{p}}_{ \mathbf{x}}=-\nabla_{\mathbf{x}}H \tag{1}\]
where \(\mathbf{p}_{\mathbf{x}}=\nabla_{\dot{x}}H=\mathbf{M}\dot{\mathbf{x}}\) represents the momentum of the system in Cartesian coordinates and \(\mathbf{M}\) represents the mass matrix. Assuming \(Z=[\mathbf{x};\mathbf{p}_{\mathbf{x}}]\) and \(J=[0,I;-I,0]\), the acceleration of a particle can be obtained from the Hamiltonian equations as
\[\dot{Z}=J(\nabla_{Z}H) \tag{2}\]
since \(\nabla_{Z}H+J\dot{Z}=0\) and \(J^{-1}=-J\). Sometimes, systems may be subjected to constraints that depend on positions (holonomic) or velocities (Pfaffian). For example, in the case of a pendulum, the length between the bobs remains constant, while in multi-fingered grasping, the velocities of two fingers should be such that the combined geometry is able to hold the object. In such cases, the constraint equation is represented as \(\Phi(\mathbf{x})\dot{\mathbf{x}}=0\), where \(\Phi(\mathbf{x})\in\mathbb{R}^{k\times D}\) corresponds to the \(k\) velocity constraints in a \(D\)-dimensional system. For instance, in the case of a pendulum, the constraint equation for two bobs located at \((0,0)\) and \((x_{1},x_{2})\) may be written as \(x_{1}\dot{x}_{1}+x_{2}\dot{x}_{2}=0\), which is the time derivative of the length constraint \(x_{1}^{2}+x_{2}^{2}=l^{2}\). Following this, the Hamiltonian equations of motion can be modified to feature the constraints explicitly as [16, 32]
\[\nabla_{Z}H+J\dot{Z}+(D_{Z}\Psi)^{T}\lambda=0 \tag{3}\]
where \(\Psi(Z)=(\Phi;\dot{\Phi})\), \(D_{Z}\Psi\) is the Jacobian of \(\Psi\) with respect to \(Z\), and \((D_{Z}\Psi)^{T}\lambda\) represents the effect of constraints on \(\dot{\mathbf{x}}\) and \(\dot{\mathbf{p}}_{\mathbf{x}}\)[16, 32]. Thus, \((D_{Z}\Psi)\dot{Z}=0\). Substituting for \(\dot{Z}\) from Eq. 3 and solving for \(\lambda\) yields [17, 25, 18, 30]
\[\lambda=-[(D_{Z}\Psi)J(D_{Z}\Psi)^{T}]^{-1}[(D_{Z}\Psi)J(\nabla_{Z}H)] \tag{4}\]
Substituting \(\lambda\) in the Eq. 3 and solving for \(\dot{Z}\) yields
\[\dot{Z}=J[\nabla_{Z}H-(D_{Z}\Psi)^{T}[(D_{Z}\Psi)J(D_{Z}\Psi)^{T}]^{-1}(D_{Z}\Psi )J\nabla_{Z}H] \tag{5}\]
Note that in the absence of constraints, Eq. 5 reduces to Eq. 2. In Hamiltonian mechanics, Eq. 5 is used to obtain the acceleration of the particles, which, when integrated, provides the updated configuration of the system. Thus, the only unknown in this equation is \(H\), which is represented as a function of \(\mathbf{p_{x}}\) and \(\mathbf{x}\).
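To make the role of Eq. 2 concrete, the toy numpy sketch below evolves a one-dimensional harmonic oscillator, \(H=p^{2}/(2m)+kx^{2}/2\), through \(\dot{Z}=J\nabla_{Z}H\); the explicit Euler step is used only for brevity, whereas the framework described next integrates with a symplectic scheme.

```
import numpy as np

m, k, dt = 1.0, 1.0, 0.01
J = np.array([[0.0, 1.0], [-1.0, 0.0]])

def grad_H(z):                       # z = (x, p) for H = p^2/(2m) + k x^2/2
    x, p = z
    return np.array([k * x, p / m])  # [dH/dx, dH/dp]

z = np.array([1.0, 0.0])             # initial position and momentum
for _ in range(1000):                # integrate to t = 10
    z = z + dt * J @ grad_H(z)       # Eq. 2: Zdot = J grad_Z H
print(z)                             # close to (cos 10, -sin 10)
```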
### Hamiltonian graph neural network
Now, we introduce the ML framework proposed to learn the Hamiltonian of a system directly from its trajectory, that is, only using the time evolution of the observable quantities \((\mathbf{x},\mathbf{p_{x}})\). To this extent, we develop the Hamiltonian graph neural network (Hgnn), which parametrizes the actual \(H\) as a Gnn to obtain the learned \(\hat{H}\). Henceforth, all terms with a hat, for example, \(\hat{\mathbf{x}}\), represent the approximation obtained from Hgnn. Further, the \(\hat{H}\) obtained from Hgnn is substituted into Eq. (5) to obtain the acceleration and velocity of the particles. These values are integrated using a symplectic integrator to compute the updated positions.
First, we describe the architecture of Hgnn (see Fig. 1(a)). The physical system is modeled as an undirected graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) with nodes as particles and edges as connections between them. For instance, in an \(n\)-ball-spring system, the balls are represented as nodes and springs as edges. The raw node features are \(t\) (type of particle) as a one-hot encoding, \(\mathbf{x}\), and \(\dot{\mathbf{x}}\), and the raw edge feature is the distance, \(d=||\mathbf{x}_{j}-\mathbf{x}_{i}||\), between two particles \(i\) and \(j\). A notable difference of the Hgnn architecture from previous graph architectures is the presence of global and local features: local features participate in message passing and contribute to quantities that depend on topology. In contrast, global features do not
Figure 1: **Hamiltonian graph architecture and systems studied.** (a) Hamiltonian graph neural network (Hgnn) architecture, (b) Visualization of the systems studied, namely, 3-pendulum, 5-spring, \(75\)-particles binary Lennard Jones system, \(4\)-particle gravitational system, and a hybrid spring-pendulum system. Note that the hybrid spring-pendulum system is used only to evaluate the generalizability of Hgnn.
take part in message passing. Here, we employ the position \(\mathbf{x}\), velocity \(\dot{\mathbf{x}}\) as global features for a node, while \(d\) and \(t\) are used as local features.
For the Gnn, we employ an \(L\)-layer message-passing Gnn, which takes an embedding of the node and edge features created by multi-layer perceptrons (MLPs) as input. Detailed hyper-parameters are provided in the Supplementary Material. The local features participate in message passing to create updated node and edge embeddings. The final representations of the nodes and edges, \(\mathbf{z}_{i}\) and \(\mathbf{z}_{ij}\), respectively, are passed through MLPs to obtain the Hamiltonian of the system. The Hamiltonian of the system is predicted as the sum of the kinetic energy \(T\) and the potential energy \(V\) in the Hgnn. Specifically, the potential energy is predicted as \(V=\sum_{i}\texttt{MLP}_{v}(\mathbf{z}_{i})+\sum_{ij}\texttt{MLP}_{e}(\mathbf{z}_{ij})\), where \(\texttt{MLP}_{v}\) and \(\texttt{MLP}_{e}\) represent the contributions from the nodes (particles themselves) and edges (interactions) toward the potential energy of the system, respectively. The kinetic energy is predicted as \(T=\sum_{i}\texttt{MLP}_{T}\left(\mathbf{h}_{i}^{0}\right)\), where \(\mathbf{h}_{i}^{0}\) is the embedding of particle \(i\).
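A minimal sketch of this energy readout is given below; single linear maps (`W_T`, `W_v`, `W_e`) are hypothetical stand-ins for the MLPs, and, following the fuller description in the Methods, the kinetic-energy head also receives the velocity.

```python
# Minimal sketch of the HGNN energy readout; W_T, W_v, W_e are hypothetical
# single-layer stand-ins for MLP_T, MLP_v, MLP_e.
import jax.numpy as jnp

def squareplus(x):
    return 0.5 * (x + jnp.sqrt(x ** 2 + 4.0))  # smooth, positive activation

def hamiltonian_readout(h0, xdot, z_nodes, z_edges, W_T, W_v, W_e):
    # T = sum_i MLP_T(h_i^0 || xdot_i)
    T = jnp.sum(squareplus(jnp.concatenate([h0, xdot], axis=-1) @ W_T))
    # V = sum_i MLP_v(z_i) + sum_ij MLP_e(z_ij)
    V = jnp.sum(squareplus(z_nodes @ W_v)) + jnp.sum(squareplus(z_edges @ W_e))
    return T + V  # learned Hamiltonian
```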
To train the Hgnn, we use only the time evolution of positions and momenta; this approach assumes no knowledge of the functional form of the Hamiltonian. The training approach, purely based on direct observables, can be used for any system (for example, trajectories from experiments) where the true Hamiltonian is unavailable. Thus, the loss function of Hgnn is computed using the predicted and actual positions at timestep \(t+1\) in a trajectory, based on the positions and velocities at \(t\), and is then back-propagated to train the MLPs. Specifically, we use the _mean squared error (MSE)_ on the true and predicted \(Z\), which is the concatenation of positions and velocities.
\[\mathcal{L}=\frac{1}{n}\left(\sum_{i=1}^{n}\left(Z_{i}^{t+1}-\hat{Z}_{i}^{t+1} \right)^{2}\right) \tag{6}\]
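A minimal sketch of this training loss on a toy unconstrained system is given below; `h_theta` is a hypothetical quadratic stand-in for the HGNN output, and a single forward-Euler step stands in for the symplectic integrator used in the paper.

```python
# Minimal sketch of the trajectory loss in Eq. (6).
import jax
import jax.numpy as jnp

def h_theta(params, z):
    return 0.5 * jnp.sum(params * z ** 2)   # hypothetical stand-in for H-hat

def loss_fn(params, z_t, z_t1, dt):
    d = z_t.shape[0] // 2
    J = jnp.block([[jnp.zeros((d, d)), jnp.eye(d)],
                   [-jnp.eye(d), jnp.zeros((d, d))]])
    # unconstrained Hamiltonian flow (Eq. (2)), one Euler step for brevity
    z_pred = z_t + dt * J @ jax.grad(h_theta, argnums=1)(params, z_t)
    return jnp.mean((z_t1 - z_pred) ** 2)   # MSE on Z = (x, p), Eq. (6)

params = jnp.ones(4)
z_t = jnp.array([1.0, 0.0, 0.0, 1.0])
grads = jax.grad(loss_fn)(params, z_t, z_t, 1e-3)  # gradients through the physics
```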
### Case studies
**Systems studied.** Now, we evaluate the ability of Hgnn to learn the dynamics directly from the trajectory. To this end, we selected four different types of systems, _viz._, a \(5\)-pendulum with explicit internal constraints subjected to an external gravitational field, a \(5\)-spring system with harmonic inter-particle interactions, a \(75\)-particle binary LJ system with two types of particles interacting through the Kob-Andersen LJ potential [33], and a \(4\)-particle gravitational system with a purely repulsive gravitational potential. Finally, to test the generalizability of Hgnn to a completely unseen system that combines two systems on which it is trained, a hybrid system containing both springs and pendulums is also considered. In this system, while the dynamics of the pendulum is governed by the external gravitational field, the dynamics of the spring depends on the internal forces generated by the expansion and compression of the spring. Thus, the systems selected here cover a broad range of cases, that is, dynamics (i) with internal constraints (pendulum), (ii) under the influence of an external field (gravitational), (iii) with harmonic interactions (springs), (iv) with complex breakable interactions (LJ potential), and (v) of a hybrid system with and without internal constraints.
The training of Hgnn is carried out for each system separately. A training dataset of \(100\) trajectories, each having \(100\) steps, was used for each system. For the spring and pendulum, a 5-particle system is considered with random initial conditions. In the pendulum system, the initial conditions are chosen such that the constraints are respected. In the spring system, each ball is connected only to two other balls, forming a loop structure. For the gravitational system, a 4-particle system is considered in which two particles rotate in the clockwise direction and the two remaining particles rotate in the anti-clockwise direction about their center of mass. For the LJ system, a binary Kob-Andersen system with 75 particles is considered. The initial structure is generated by randomly placing the particles in a box with periodic boundary conditions. Further, the system is simulated in a microcanonical ensemble (NVE) at temperatures corresponding to the liquid state to obtain equilibrium structures. The training data is collected only once the system is equilibrated. Hgnn models were trained on this dataset with a \(75:25\) split for training and validation. Further, to test the long-term stability and the energy and momentum conservation errors, the trained model was evaluated on a forward simulation of \(10^{5}\) timesteps on 100 random initial configurations. See Methods for the detailed interaction equations, datasets, and training parameters.
**Learning the dynamics.** Now, we evaluate the performance of the trained Hgnn models. To evaluate the long-term stability of the dynamics learned by Hgnn, we analyze the trajectory predicted by Hgnn for 100 random initial configurations. Specifically, we compare the predicted and actual phase space, trajectory, kinetic energy, potential energy, and forces on all the particles of the system during the trajectory. Note that the systems studied in this case are chaotic; hence, the exact trajectory followed by Hgnn will diverge with time. However, the phase space and the errors in energy and forces can be effective metrics to analyze whether the trajectory generated by Hgnn is statistically equivalent to that of the original system, that is, sampling the same regions of the energy landscape. Further, in contrast to purely data-driven [8] or physics-informed methods, the physics-enforced architecture of Hgnn strictly follows all the characteristics of the Hamiltonian equations of motion, such as the conservation laws of energy and momentum (see Supplementary Materials). This is due to the fact that the graph architecture only predicts the Hamiltonian of the
Figure 2: **Evaluation of Hgnn on the pendulum, spring, binary LJ, and gravitational systems.** (a) Predicted and (b) actual phase space (that is, \(x_{1}\)-position vs. \(x_{2}\)-velocity), predicted with respect to actual (c) kinetic energy, (d) potential energy, and (e) forces in 1 (blue square), and 2 (red triangle) directions of the 5-pendulum system. (f) Predicted and (g) actual phase space (that is, 1-position, \(x_{1}\) vs \(2\)-velocity, \(\dot{x}_{2}\)), predicted with respect to actual (h) kinetic energy, (i) potential energy, and (j) forces in 1 (blue square) and 2 (red triangle) directions of the \(5\)-spring system. (k) Predicted and (l) actual positions (that is, \(x_{1}\) and \(x_{2}\) positions), predicted with respect to actual (m) kinetic energy, (n) pair-wise potential energy, \(V_{\rm ij}\) for the (0-0), (0-1), and (1-1) interactions, and (o) forces in 1 (blue square), 2 (red triangle), and 3 (green circle) directions of the \(75\)-particle LJ system. (p) Predicted and (q) actual positions (that is, \(x_{1}-\) and \(x_{2}-\)positions), predicted with respect to actual (r) kinetic energy, (s) potential energy, and (t) forces in 1 (blue square), and 2 (red triangle) directions of the gravitational system.
system, which is then substituted in the Hamiltonian equations of motion to obtain the updated configuration. Due to this feature, the trajectory predicted by the Hgnn is more realistic and meaningful in terms of the system's underlying physics.
Fig. 2 shows the performance of Hgnn for the pendulum (Figs. 2(a)-(e), first row), spring (Figs. 2(f)-(j), second row), binary LJ (Figs. 2(k)-(o), third row), and gravitational systems (Figs. 2(p)-(t), fourth row). For the pendulum and spring systems, we observe that the phase space represented by the positions in the 1-direction (\(x_{1}\)) and the velocities in the orthogonal direction (\(\dot{x}_{2}\)) predicted by Hgnn (Figs. 2(a) and (f)) exhibits an excellent match with the ground truth trajectory. It is interesting to note that Hgnn trained only on a trajectory of a single step (\(t\) to \(t+1\)) is able to learn the dynamics accurately and simulate a long-term stable trajectory of \(10^{5}\) timesteps that matches the simulated trajectory. Similarly, for the binary LJ and gravitational systems, we observe that the predicted (Figs. 2(k) and (p)) and actual (Figs. 2(l) and (q)) positions in the trajectory of random unseen initial configurations explored by the systems exhibit an excellent match. Further, we observe that the predicted kinetic (Figs. 2(c), (h), (m), (r)) and potential (Figs. 2(d), (i), (n), and (s)) energies and forces (Figs. 2(e), (j), (o), and (t)) exhibit an excellent match with the ground truth values, with a mean squared error close to zero.
**Zero-shot generalizability.** Now, we evaluate the generalizability of Hgnn to unseen systems, for instance, systems larger than those on which Hgnn is trained or a completely new system that is a combination of two systems on which it is independently trained. While traditional neural-network-based approaches are restricted to the system sizes on which they are trained, Hgnn is inductive to systems larger (and smaller) than those on which it is trained. This is due to the modular nature of Hgnn, thanks to the graph-based approach, where the learning occurs at the node and edge level. Fig. 3 shows the generalizability of Hgnn to larger system sizes than those on which it is trained. Specifically, we evaluate Hgnn on \(10\)-pendulum (Fig. 3(a)-(e)), \(50\)-spring (Fig. 3(f)-(j)), and \(600\)-particle binary LJ systems (Fig. 3(k)-(o)). We observe that Hgnn generalizes to larger system sizes accurately without any additional training or fine-tuning, exhibiting an excellent match with the ground truth trajectory in terms of positions, energies, and forces. Additional results on \(50\)-pendulum and \(500\)-spring systems are included in the Supplementary Material.
We also evaluate the ability of Hgnn to simulate a hybrid spring-pendulum system (see Fig. 1(b), hybrid system). To this end, we model the Hamiltonian of the hybrid system as the superposition of the Hamiltonians of the spring and pendulum systems. Further, we construct two graphs based on the spring and pendulum elements and use the Hgnn trained on the spring and pendulum systems to obtain the Hamiltonian of the system. Fig. 3(p)-(t) shows the performance of Hgnn on the hybrid system. Hgnn provides dynamics in excellent agreement with the ground truth for this unseen hybrid system as well, in terms of positions, energies, and forces. Additional results on the force predicted on each particle by Hgnn in comparison to the ground truth for a trajectory of 100 steps are shown in the Supplementary Material. These results confirm that Hgnn is able to learn the dynamics of systems directly from their trajectory and simulate the long-term dynamics for new initial conditions and system sizes. This is a highly desirable feature, as Hgnn can be used to learn the Hamiltonian from sparse experimental data of physical systems or _ab-initio_ simulations of atomic systems. The learned model can then be used to simulate larger system sizes to investigate phenomena at larger length scales.
## Interpretability and discovering symbolic laws
Neural networks, while exhibiting an excellent capability to learn functions, are notorious for their black-box nature, offering little or no interpretability of the learned function. In contrast, we demonstrate the interpretability of the learned Hgnn. Thanks to the modular nature of Hgnn, we analyze the functions learned by the individual MLPs that represent the node- and edge-level potential energies (\(\texttt{MLP}_{v}\) and \(\texttt{MLP}_{e}\), respectively) and the kinetic energy (\(\texttt{MLP}_{T}\)) of the particles as a function of the learned embeddings. Fig. 4(a)-(f) show the learned functions with respect to input features such as positions, velocities, or inter-particle distances. We observe that the functions learned by Hgnn for the potential energies of (i) the pendulum bob (\(mgx_{2}\); Fig. 4(a)), (ii) the spring (\(0.5k(r_{ij}-1)^{2}\); Fig. 4(c)), and (iii) the binary LJ system (0-0, 0-1, 1-1; Figs. 4(d)-(f), respectively), and for the kinetic energy of the particles (\(0.5m|\dot{\textbf{x}}_{i}|^{2}\); Fig. 4(b)), exhibit a close match with the known governing equations. This shows the interpretability of Hgnn and its additional ability to provide insights into the nature of interactions between the particles directly from their trajectory. Thus, Hgnn can be used to discover interaction laws directly from trajectories, even when the laws are not accessible or available.
Figure 3: **Generalizability to unseen systems.** (a) Predicted and (b) actual phase space (that is, \(1-\)position vs. \(2-\)velocity) and predicted with respect to actual (c) kinetic energy, (d) potential energy, and (e) forces of the \(10-\)pendulum system, using Hgnn trained on the \(5-\)pendulum system. (f) Predicted and (g) actual phase space (that is, \(1-\)position, \(x_{1}\) vs. \(2-\)velocity, \(\dot{x}_{2}\)) and predicted with respect to actual (h) kinetic energy, (i) potential energy, and (j) forces of the \(50-\)spring system, using Hgnn trained on the \(5-\)spring system. (k) Predicted and (l) actual positions (that is, \(1-\) and \(2-\)positions; blue and red represent type 0 and type 1 particles), and predicted with respect to actual (m) kinetic energy, (n) pair-wise potential energy, \(V_{ij}\), and (o) forces, of the \(600-\)particle binary LJ system, using Hgnn trained on the \(75\)-particle binary LJ system. (p) Predicted and (q) actual positions (that is, \(1-\) and \(2-\)positions), and predicted with respect to actual (r) kinetic energy, (s) potential energy, and (t) forces of the 10-particle hybrid system, using Hgnn trained on the \(5\)-spring and \(5\)-pendulum systems.
While the interpretability of Hgnn can provide insights into the nature of the energy functionals, abstracting them further as symbolic expressions can enable the discovery of the underlying interaction laws and energy functions. Such functionals can then be used for simulating the system or understanding the dynamics independently of Hgnn. Thus, beyond learning the dynamics of systems, Hgnn can be used to discover the underlying energy functionals and interaction laws. To this end, we apply SR [2, 3, 4] to the functions learned by Hgnn. Specifically, we focus on the kinetic energy function, the harmonic function of the spring, the gravitational potential, and the binary LJ system. We employ simple operations such as addition, multiplication, and polynomials to identify the governing equations that minimize the error between the values predicted by the discovered equation and those predicted by Hgnn. The optimal equation is identified based on a score that balances the complexity and the loss of the equation (see Methods for details).
Table 1 shows the original equations and the equations discovered by SR from the learned Hgnn functionals. Note that for each system, the equation exhibiting the maximum score is chosen as the final equation (see Methods for details). All the equations discovered by SR, with their loss, complexity, polynomials used, and other hyper-parameters, are included in the Supplementary Material. We observe that the recovered equations exhibit a close match for the kinetic energy, harmonic spring, gravitational potential, and binary LJ. In the case of the binary LJ system, we observe that the equations reproduced for the (0-0) and (1-1) interactions are very close to the original equations, while for the (0-1) interaction, the equation is slightly different, although it exhibits a low loss. Interestingly, we observe that for the LJ (0-1) interaction, one of the equations provided by SR, given by \(V_{ij}=\left(\frac{0.203}{r_{ij}^{12}}-\frac{0.773}{r_{ij}^{2}}\right)\), is closer to the original equation in its functional form. However, this predicted equation has a score of \(2.22\) with a loss of \(0.000109\); thus, its loss is higher and its score lower than those of the best equation obtained in Table 1. This also suggests that for more complex interactions, an increased number of data points, especially around the inflection points, might be required to improve the probability of discovering the original equation.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Functions & Original Eq. & Discovered Eq. & Loss & Score \\ \hline Kinetic energy & \(T_{i}=0.5m|\dot{\mathbf{x}}_{i}|^{2}\) & \(T_{i}=0.500m|\dot{\mathbf{x}}_{i}|^{2}\) & \(7.96\times 10^{-10}\) & \(22.7\) \\ Harmonic spring & \(V_{ij}=0.5(r_{ij}-1)^{2}\) & \(V_{ij}=0.499\left(r_{ij}-1.00\right)^{2}\) & \(1.13\times 10^{-9}\) & \(3.15\) \\ Binary LJ (0-0) & \(V_{ij}=\left(\frac{2.0}{r_{ij}^{12}}-\frac{2.0}{r_{ij}^{6}}\right)\) & \(V_{ij}=\left(\frac{1.90}{r_{ij}^{12}}-\frac{1.95}{r_{ij}^{6}}\right)\) & \(0.00159\) & \(2.62\) \\ Binary LJ (0-1) & \(V_{ij}=\left(\frac{0.275}{r_{ij}^{12}}-\frac{0.786}{r_{ij}^{6}}\right)\) & \(V_{ij}=\left(\frac{2.33}{r_{ij}^{2}}-\frac{2.91}{r_{ij}^{2}}\right)\) & \(3.47\times 10^{-5}\) & \(5.98\) \\ Binary LJ (1-1) & \(V_{ij}=\left(\frac{0.216}{r_{ij}^{12}}-\frac{0.464}{r_{ij}^{6}}\right)\) & \(V_{ij}=\left(\frac{0.215}{r_{ij}^{12}}-\frac{0.464}{r_{ij}^{6}}\right)\) & \(1.16\times 10^{-5}\) & \(5.41\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: **Discovering governing laws with symbolic regression.** Original equation and the best equation discovered by symbolic regression based on the score for different functions. The loss represents the mean squared error between the data points from Hgnn and the predicted equations.
Figure 4: **Interpreting the learned functions in Hgnn.** (a) Potential energy of the pendulum system with respect to the \(2\)-position of the bobs. (b) Kinetic energy of the pendulum bobs with respect to their velocity. (c) Potential energy with respect to the pair-wise particle distance for the spring system. (d) The pair-wise potential energy of the binary LJ system for the 0-0, 0-1, and 1-1 types of particle pairs. The results from Hgnn are shown with markers, while the original functions are shown as dotted lines.
## Outlook
Altogether, in this work, we present Hgnn, a framework that allows the discovery of energy functionals directly from the trajectory of physical systems. Hgnn could be extended to address several challenging problems where the dynamics depends on the topology, such as the dynamics of polydisperse gels [34], granular materials [35], biological systems such as cells [36], or even rigid body dynamics. In such cases, a topology-to-graph mapping can be developed, which can then be used to learn the dynamics and to abstract them in terms of the governing interaction laws. At this juncture, it is worth mentioning some outstanding questions the present work raises. Although Hgnn presents a promising approach, it is applied here only to particle-based systems with at most two-body interactions. Extending Hgnn to more complex systems, such as complex atomic structures with multi-body interactions or deformable bodies in continuum mechanics, could be addressed as future challenges. Further, the graph architecture presented in Hgnn could be enhanced by adding additional inductive biases such as equivariance [37]. Finally, extending the framework to non-Hamiltonian systems, such as colloidal systems [38] exhibiting Brownian or Langevin dynamics, could be pursued to widen the scope of the Hgnn framework to capture realistic systems.
## Methods
### Experimental systems
To simulate the ground truth, physics-based equations derived using Hamiltonian mechanics are employed. The equations for the pendulum, spring, gravitational, and Lennard-Jones systems are given in detail below.
#### \(n\)-Pendulum
For an \(n\)-pendulum system, \(n\)-point masses, representing the bobs, are connected by rigid (non-deformable) bars. These bars, thus, impose a distance constraint between two point masses as
\[||\mathbf{x}_{i}-\mathbf{x}_{i-1}||^{2}=l_{i}^{2} \tag{7}\]
where \(l_{i}\) represents the length of the bar connecting the \((i-1)^{th}\) and \(i^{th}\) masses. This constraint can be differentiated and written in the form of a _Pfaffian_ constraint as
\[(\mathbf{x}_{i}-\mathbf{x}_{i-1})\cdot(\dot{\mathbf{x}}_{i}-\dot{\mathbf{x}}_{i-1})=0 \tag{8}\]
Note that such a constraint can be written for each of the \(n\) masses to assemble the constraint matrix.
The Hamiltonian of this system can be written as
\[H=\sum_{i=1}^{n}\sum_{j=1}^{2}\left(1/2\,m_{i}\dot{x}_{i,j}^{2}-m_{i}gx_{i,2}\right) \tag{9}\]
where \(j=1,2\) represents the dimensions of the system, \(m_{i}\) represents the mass of the \(i^{th}\) particle, \(g\) represents the acceleration due to gravity in the \(2-\)direction and \(x_{i,2}\) represents the position of the \(i^{th}\) particle in the \(2-\) direction. Here, we use \(l_{i}=1.0\) m, \(m_{i}=1.0\) kg, and \(g=10.0\,m/s^{2}\).
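A minimal JAX sketch of Eq. 9, written in terms of the conjugate momenta \(p=m\dot{x}\) so that it can be paired with the constrained-flow sketch given earlier (the sign convention of the gravity term follows Eq. 9):

```python
# Minimal sketch of the n-pendulum Hamiltonian, Eq. (9).
import jax.numpy as jnp

def pendulum_H(z, n, m=1.0, g=10.0):
    x = z[:2 * n].reshape(n, 2)     # positions (x_{i,1}, x_{i,2})
    p = z[2 * n:].reshape(n, 2)     # conjugate momenta p = m * xdot
    T = jnp.sum(p ** 2) / (2.0 * m)
    V = -m * g * jnp.sum(x[:, 1])   # sign convention as in Eq. (9)
    return T + V
```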
#### \(n\)-spring system
Here, \(n\)-point masses are connected by elastic springs that deform linearly (elastically) with extension or compression. Note that similar to the pendulum setup, each mass \(m_{i}\) is connected to two masses \(m_{i-1}\) and \(m_{i+1}\) through springs so that all the masses form a closed connection. The Hamiltonian of this system is given by
\[H=\sum_{i=1}^{n}\sum_{j=1}^{2}\left(1/2\,m_{i}\dot{x}_{i,j}^{2}\right)+\sum_{i=1}^{n}1/2\,k(||\mathbf{x}_{i-1}-\mathbf{x}_{i}||-r_{0})^{2} \tag{10}\]
where \(r_{0}\) and \(k\) represent the undeformed length and the stiffness, respectively, of the spring, and \(j=1,2\) represents the dimensions of the system. Here, we use \(r_{0}=1.0\) m, \(m_{i}=1.0\) kg and \(k=1.0\) N/m.
#### \(n\)-body gravitational system
Here, \(n\) point masses are in a gravitational field generated by the point masses themselves. The Hamiltonian of this system is given by
\[H=\sum_{i=1}^{n}\sum_{j=1}^{2}\left(1/2\,m_{i}\dot{x}_{i,j}^{2}\right)+\sum_{i=1}^{n}\sum_{k=1,k\neq i}^{n}Gm_{i}m_{k}/(||\mathbf{x}_{i}-\mathbf{x}_{k}||) \tag{11}\]
where \(G\) represents the gravitational constant and \(j=1,2\) represents the dimensions of the system. Here, we use \(G=1.0\ \mathrm{N\,m^{2}\,kg^{-2}}\) and \(m_{i}=1.0\) kg \(\forall\ i\).
#### Binary Lennard Jones system
Here, we consider a binary LJ system known as the Kob-Andersen mixture [33] composed of 80% particles of type 0 and 20% particles of type 1. The particles in this system interact based on a 12-6 LJ potential with the pair-wise potential energy \(V_{ij}\) given by
\[V_{ij}=\epsilon\left[\left(\frac{\sigma}{r_{ij}}\right)^{12}-\left(\frac{ \sigma}{r_{ij}}\right)^{6}\right] \tag{12}\]
where \(r_{ij}=||\mathbf{x}_{i}-\mathbf{x}_{j}||\) is the distance between particles \(i\) and \(j\), and \(\sigma\) and \(\epsilon\) are the LJ parameters, which take the values \(\epsilon_{0-0}=1.0\), \(\epsilon_{0-1}=1.5\), \(\epsilon_{1-1}=0.5\) and \(\sigma_{0-0}=1.00\), \(\sigma_{0-1}=0.80\), \(\sigma_{1-1}=0.88\). The pair-wise interaction energies between all the particles are summed to obtain the total energy of the system. For the LJ system, all the simulations are conducted at a temperature of 1.2 in the microcanonical (NVE) ensemble, ensuring the system is in a liquid state. The system is initialized by placing the atoms at random positions avoiding overlap in a cubic box with periodic boundary conditions, with box size \(3.968\) and cutoffs \(2.5\), \(2.0\), and \(2.2\) for the \(0-0\), \(0-1\), and \(1-1\) pairs, respectively. Further, the system is equilibrated in the NVE ensemble until the memory of the initial configuration is lost. The equations of motion are integrated with the velocity Verlet algorithm.
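A minimal sketch of the pair potential in Eq. 12 with the parameters listed above:

```python
# Minimal sketch of the 12-6 pair potential, Eq. (12),
# with the Kob-Andersen parameters given in the text.
import jax.numpy as jnp

EPS = {(0, 0): 1.0, (0, 1): 1.5, (1, 1): 0.5}
SIG = {(0, 0): 1.00, (0, 1): 0.80, (1, 1): 0.88}

def v_ij(r_ij, t_i, t_j):
    key = (min(t_i, t_j), max(t_i, t_j))
    s_over_r = SIG[key] / r_ij
    return EPS[key] * (s_over_r ** 12 - s_over_r ** 6)

print(v_ij(1.2, 0, 1))  # e.g. the 0-1 pair energy at distance 1.2
```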
### Gnn architecture
**Pre-Processing:** In the pre-processing layer, we generate compact vector representations for the particles and their interactions \(e_{ij}\) by employing multi-layer perceptrons (MLPs).
\[\mathbf{h}_{i}^{0} =\texttt{squareplus}(\texttt{MLP}_{em}(\texttt{one-hot}(t_{i}))) \tag{13}\] \[\mathbf{h}_{ij}^{0} =\texttt{squareplus}(\texttt{MLP}_{em}(e_{ij})) \tag{14}\]
Here, \(\texttt{squareplus}\) is an activation function. In our implementation, we use different \(\texttt{MLP}_{em}\)s for the node representations corresponding to kinetic energy, potential energy, and drag. For brevity, we do not write the \(\texttt{MLP}_{em}\)s separately in Eq. 13.
**Kinetic energy and drag prediction.** Since the graph employs Cartesian coordinates, the mass matrix can be represented as a diagonal matrix. Consequently, the kinetic energy \(\tau_{i}\) of a particle depends only on the velocity \(\dot{\mathbf{x}}_{i}\) and mass \(m_{i}\) of that particle. The mass for each particle type is parameterized and learned through the embedding \(\mathbf{h}_{i}^{0}\). The predicted \(\tau_{i}\) of a given particle is thus \(\tau_{i}=\texttt{squareplus}(\texttt{MLP}_{T}(\mathbf{h}_{i}^{0}\ ||\ \dot{\mathbf{x}}_{i}))\), where \(\|\) denotes the concatenation operator, \(\texttt{MLP}_{T}\) is a multilayer perceptron that learns the kinetic energy function, and \(\texttt{squareplus}\) is the activation function. The overall kinetic energy of the system, denoted by \(T\), is the sum of the individual kinetic energies: \(T=\sum_{i=1}^{n}\tau_{i}\).
**Potential energy prediction.** Typically, the potential energy of a system exhibits significant dependence on the topology of its underlying structure. In order to effectively capture this information, we utilize multiple layers of message passing among the interacting particles (nodes). During the \(l^{th}\) layer of message passing, the node embeddings are iteratively updated according to the following expression:
\[\mathbf{h}_{i}^{l+1}=\texttt{squareplus}\left(\texttt{MLP}\left(\mathbf{h}_{i }^{l}+\sum_{j\in\mathcal{N}_{i}}\mathbf{W}_{\mathcal{V}}^{l}\cdot\left(\mathbf{ h}_{j}^{l}||\mathbf{h}_{ij}^{l}\right)\right)\right) \tag{15}\]
where, \(\mathcal{N}_{i}=\{u_{j}\in\mathcal{V}\ |\ (u_{i},u_{j})\in\mathcal{E}\}\) is the set of neighbors of particle \(u_{i}\). \(\mathbf{W}_{\mathcal{V}}^{l}\) is a layer-specific learnable weight matrix. \(\mathbf{h}_{ij}^{l}\) represents the embedding of incoming edge \(e_{ij}\) on \(u_{i}\) in the \(l^{th}\) layer, which is computed as follows.
\[\mathbf{h}_{ij}^{l+1}=\texttt{squareplus}\left(\texttt{MLP}\left(\mathbf{h}_{ ij}^{l}+\mathbf{W}_{\mathcal{E}}^{l}\cdot\left(\mathbf{h}_{i}^{l}||\mathbf{h}_{j}^{l} \right)\right)\right) \tag{16}\]
Similar to \(\mathbf{W}_{\mathcal{V}}^{l}\), \(\mathbf{W}_{\mathcal{E}}^{l}\) is a layer-specific learnable weight matrix specific to the edge set. The message passing is performed over \(L\) layers, where \(L\) is a hyper-parameter. The final node and edge representations in the \(L^{th}\) layer are denoted as \(\mathbf{z}_{i}=\mathbf{h}_{i}^{L}\) and \(\mathbf{z}_{ij}=\mathbf{h}_{ij}^{L}\) respectively.
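A minimal dense-adjacency sketch of one such layer is given below; single weight matrices are hypothetical stand-ins for the per-layer MLPs in Eqs. 15 and 16, and the `adj` mask restricts the aggregation to the neighbor set \(\mathcal{N}_{i}\).

```python
# Minimal sketch of one message-passing layer, Eqs. (15)-(16).
import jax.numpy as jnp

def squareplus(x):
    return 0.5 * (x + jnp.sqrt(x ** 2 + 4.0))

def mp_layer(h, h_e, adj, W_V, W_n, W_E, W_m):
    # h: (n, f) node embeddings, h_e: (n, n, f) edge embeddings,
    # adj: (n, n) 0/1 adjacency; W_* are stand-ins for the MLPs.
    n, f = h.shape
    h_j = jnp.broadcast_to(h[None, :, :], (n, n, f))       # neighbor states
    msgs = jnp.concatenate([h_j, h_e], axis=-1) @ W_V      # W_V^l (h_j || h_ij)
    agg = jnp.einsum('ij,ijf->if', adj, msgs)              # sum over N_i
    h_new = squareplus((h + agg) @ W_n)                    # Eq. (15)
    h_i = jnp.broadcast_to(h[:, None, :], (n, n, f))
    pair = jnp.concatenate([h_i, h_j], axis=-1) @ W_E      # W_E^l (h_i || h_j)
    h_e_new = squareplus((h_e + pair) @ W_m)               # Eq. (16)
    return h_new, h_e_new
```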
The total potential energy of an \(n\)-body system is represented as \(V=\sum_{i}v_{i}+\sum_{ij}v_{ij}\). Here, \(v_{i}\) denotes the energy associated with the position of particle \(i\), while \(v_{ij}\) represents the energy arising from the interaction between particles \(i\) and \(j\). For instance, \(v_{i}\) corresponds to the potential energy of a bob in a double pendulum, considering its position within a gravitational field. On the other hand, \(v_{ij}\) signifies the energy associated with the expansion and contraction of a spring connecting two particles. In the proposed framework, the prediction for \(v_{i}\) is given by \(v_{i}=\texttt{squareplus}(\texttt{MLP}_{v_{i}}(\mathbf{h}_{i}^{0}\parallel \mathbf{x}_{i}))\). Similarly, the prediction for the pair-wise interaction energy \(v_{ij}\) is determined by \(v_{ij}=\texttt{squareplus}(\texttt{MLP}_{v_{ij}}(\mathbf{z}_{ij}))\).
The parameters of the model are trained end-to-end using the MSE loss discussed in Eq. 6.
#### Model architecture and training setup
For Hgnn, all the MLPs are two layers deep. A squareplus activation function is used for all the MLPs. We used 10000 data points from 100 trajectories, divided into a 75:25 (train: validation) split, to train all the models. The timestep used for the forward simulation of the pendulum system is \(10^{-5}s\), for the spring and gravitational systems \(10^{-3}s\), and for the LJ system 0.0001 LJ units. All the equations of motion are integrated with the velocity-Verlet integrator. Detailed training procedures and hyper-parameters are provided in the Supplementary Material. All models were trained until the decrease in loss saturated to less than 0.001 over 100 epochs. The model performance is evaluated on a forward trajectory, a task it was not explicitly trained for, of \(10s\) in the case of the pendulum and \(20s\) in the case of the spring. Note that this trajectory is 2-3 orders of magnitude longer than the training trajectories from which the data was sampled. The dynamics of \(n\)-body systems are known to be chaotic for \(n\geq 2\). Hence, all the results are averaged over trajectories generated from 100 different initial conditions.
#### Symbolic regression
SR refers to an approach that searches over the space of equations for ones that fit the data points, rather than a parametric approach where an equation is chosen a priori and fitted to the data. Here, we employ the PySR package to perform the SR [7]. PySR employs a tree-based approach for fitting the governing equation based on the operations and variables provided. Since the parametric space available for SR can become too large with every additional operation, it is important to carefully provide the minimum required input features and operations while imposing meaningful constraints on the search space.
In the present work, we choose the addition and multiplication operations. Further, we allow polynomial fits based on a set containing the (square, cube, pow(n)) operations, where pow(n) refers to powers from four to ten. The loss function for the SR is the mean squared error between the predicted equation and the data points obtained from Hgnn. Further, the equations are selected based on a score \(S\) that balances complexity \(C\) and loss \(L\). Specifically, the score is defined as \(S=-\frac{d\log L}{dC}\), that is, the negative gradient of the log-loss with respect to complexity. For each set of hyperparameters, we select the top 10 equations based on the scores. Further, the equation having the best score among these equations is chosen as the optimal equation. All the hyperparameters associated with the SR and the corresponding equations obtained are included in the Supplementary Material.
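A minimal PySR sketch of this step is given below; the data and hyper-parameters shown are illustrative rather than the exact values used in this work (see the Supplementary Material for those).

```python
# Minimal sketch: recovering a spring-like energy functional with PySR.
import numpy as np
from pysr import PySRRegressor

r = np.random.uniform(0.9, 2.5, size=(500, 1))   # pair distances (illustrative)
v = 0.5 * (r.ravel() - 1.0) ** 2                 # e.g. MLP_e outputs for the spring

model = PySRRegressor(
    niterations=40,
    binary_operators=["+", "*"],
    unary_operators=["square", "cube"],
    model_selection="best",   # trades off loss against complexity via the score
)
model.fit(r, v)
print(model.sympy())          # expected: ~0.5*(x0 - 1.0)**2
```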
#### Simulation environment
All the simulations and training were carried out in the JAX environment [39, 40]. The graph architecture was developed using the jraph package [41]. The experiments were conducted on a machine with an Apple M1 chip and 8 GB RAM running MacOS Monterey.

**Software packages:** numpy-1.22.1, jax-0.3.0, jax-md-0.1.20, jaxlib-0.3.0, jraph-0.0.2.dev

**Hardware:** Chip: Apple M1, Total Number of Cores: 8 (4 performance and 4 efficiency), Memory: 8 GB, System Firmware Version: 7459.101.3, OS Loader Version: 7459.101.3
|
2303.09154 | Bayesian Generalization Error in Linear Neural Networks with Concept
Bottleneck Structure and Multitask Formulation | Concept bottleneck model (CBM) is a ubiquitous method that can interpret
neural networks using concepts. In CBM, concepts are inserted between the
output layer and the last intermediate layer as observable values. This helps
in understanding the reason behind the outputs generated by the neural
networks: the weights corresponding to the concepts from the last hidden layer
to the output layer. However, it has not yet been possible to understand the
behavior of the generalization error in CBM since a neural network is a
singular statistical model in general. When the model is singular, a one to one
map from the parameters to probability distributions cannot be created. This
non-identifiability makes it difficult to analyze the generalization
performance. In this study, we mathematically clarify the Bayesian
generalization error and free energy of CBM when its architecture is
three-layered linear neural networks. We also consider a multitask problem
where the neural network outputs not only the original output but also the
concepts. The results show that CBM drastically changes the behavior of the
parameter region and the Bayesian generalization error in three-layered linear
neural networks as compared with the standard version, whereas the multitask
formulation does not. | Naoki Hayashi, Yoshihide Sawada | 2023-03-16T08:34:56Z | http://arxiv.org/abs/2303.09154v1 | Bayesian Generalization Error in Linear Neural Networks with Concept Bottleneck Structure and Multitask Formulation
###### Abstract
Concept bottleneck model (CBM) is a ubiquitous method that can interpret neural networks using concepts. In CBM, concepts are inserted between the output layer and the last intermediate layer as observable values. This helps in understanding the reason behind the outputs generated by the neural networks: the weights corresponding to the concepts from the last hidden layer to the output layer. However, it has not yet been possible to understand the behavior of the generalization error in CBM since a neural network is a singular statistical model in general. When the model is singular, a one to one map from the parameters to probability distributions cannot be created. This non-identifiability makes it difficult to analyze the generalization performance. In this study, we mathematically clarify the Bayesian generalization error and free energy of CBM when its architecture is three-layered linear neural networks. We also consider a multitask problem where the neural network outputs not only the original output but also the concepts. The results show that CBM drastically changes the behavior of the parameter region and the Bayesian generalization error in three-layered linear neural networks as compared with the standard version, whereas the multitask formulation does not.
## 1 Introduction
Artificial neural networks have been advancing and widely applied in many fields since multi-layer perceptrons first emerged [17, 12]. However, since most neural networks are black boxes, interpreting their outputs is necessary. Hence, various procedures have been proposed to improve output interpretability [40]. One of the network architectures used to explain the behaviors of neural networks is the concept bottleneck model (CBM) [31, 32, 28]. CBM has a novel structure, called a concept bottleneck structure, where the concepts are inserted as observed values between the last intermediate layer and the output layer, and the last connection from the concepts to the output is linear. Thus, humans are expected to be able to interpret the weights of the last connection as the effect of a specified concept on the output, similar to the coefficients of a linear regression. For instance, following [28], when we predict the knee arthritis grades of patients using x-ray images and a CBM, we set the concepts as clinical findings collected by medical doctors, and can thereby understand how clinical findings affect the predicted grades, based on the correlation, by observing the learned weights in the last connection. Concept-based interpretation is used in knowledge discovery for chess [39], video representation [44], medical imaging [25], clinical risk prediction [45], computer aided diagnosis [27], and other healthcare domain problems [10]. CBM is a significant foundation for these applications, and advanced methods [49, 44, 27] have been proposed based on CBM. Hence, it is important to clarify the theoretical behavior of CBM.
Multitask formulation (Multitask) [61] also needs to be considered to clarify the performance difference compared with CBM, because Multitask can output a vector concatenating the original output and the concepts instead of inserting the concepts into the intermediate layer; CBM and Multitask use similar types of data (inputs, concepts, and outputs). Their interpretations are similar as well: CBM obtains explanations based on the regression between concepts and outputs, and Multitask obtains them from their co-occurrence.
Although some limitations of CBM have been investigated [35, 36, 34], its generalization error has not yet been clarified, except for a simple analysis conducted using the least squares method for a three-layered linear and independent CBM in [28]. That of Multitask also remains unknown. This is because, in general, neural networks are non-identifiable, i.e. the map from the parameters to the probability distributions representing the model is not one-to-one. Such a model is called a singular statistical model [55, 58]: its likelihood and posterior distribution cannot be approximated by any normal distribution, and its Fisher information matrix is not positive definite. For singular statistical models, it has been proved that Bayesian inference is a better learning method than maximum likelihood or maximum a posteriori estimation in terms of generalization performance [55, 58]. In the following, we therefore mainly consider Bayesian inference.
A regular statistical model is defined as one whose map from parameters to probability density functions is injective; in this situation, the model is said to be regular. Otherwise, the model is singular. Let \(d\) be the parameter dimension and \(n\) be the sample size. In a regular statistical model, the expected generalization error is asymptotically equal to \(d/2n+o(1/n)\), where the generalization error is the Kullback-Leibler divergence from the data-generating distribution to the predictive distribution [1, 3, 2]. Moreover, its negative log marginal likelihood (a.k.a. free energy) admits the asymptotic expansion \(nS_{n}+(d/2)\log n+O_{p}(1)\), where \(S_{n}\) is the empirical entropy [50]. In the general case, i.e. when models can be singular, Watanabe proved that the asymptotic forms of the generalization error \(G_{n}\) and the free energy \(F_{n}\) are the following [53, 54, 55]:
\[\mathbb{E}_{n}[G_{n}] =\frac{\lambda}{n}-\frac{m-1}{n\log n}+o\left(\frac{1}{n\log n} \right), \tag{1}\] \[F_{n} =nS_{n}+\lambda\log n-(m-1)\log\log n+O_{p}(1), \tag{2}\]
where \(\lambda\) is a positive rational number, \(m\) is a positive integer, and \(\mathbb{E}_{n}[\cdot]\) is the expectation operator over the dataset. The constant \(\lambda\) is called a learning coefficient since it is dominant in the leading terms of (1) and (2), which represent the \(\mathbb{E}_{n}[G_{n}]\)-\(n\) and \(F_{n}\)-\(n\) learning curves. The above forms hold not only in the case where the model is regular but also in the case where the model is singular. The generalization loss is defined by the cross entropy between the data-generating distribution and the statistical model and is equal to \(S+G_{n}\), where \(S\) is the entropy of the data-generating distribution. Watanabe developed this theory and proposed two model-evaluation methods, WAIC [56] and WBIC [57], which can estimate \(S+G_{n}\) and \(F_{n}\) of regular and singular models from the model and data, respectively.
Let \(K:\mathcal{W}\rightarrow\mathbb{R}\), \(w\mapsto K(w)\) be the Kullback-Leibler (KL) divergence from the data-generating distribution to the statistical model, where \(\mathcal{W}\subset\mathbb{R}^{d}\) is the parameter set and \(w\in\mathcal{W}\). Assume that \(\mathcal{W}\) is a sufficiently large compact set, \(K(w)\) is analytic, and its range is non-negative. The constants \(\lambda\) and \(m\) are characterized by singularities of the zero-point set of the KL divergence, \(K^{-1}(0)\), which is an analytic set (a.k.a. algebraic variety). They can be calculated by resolution of singularities [24, 7]; in algebraic geometry, \(\lambda\) is called a real log canonical threshold (RLCT) and \(m\) is called a multiplicity. Note that they are birational invariants; \(\lambda\) and \(m\) do not depend on how the singularities are resolved. Here, suppose the prior is positive and bounded on \(K^{-1}(0)\). In the regular case, we can derive \(\lambda=d/2\) and \(m=1\); moreover, the predictive distribution can be not only the Bayesian posterior predictive distribution but also the model evaluated at the maximum likelihood or maximum a posteriori estimate. However, in the singular case, the RLCT \(\lambda\) and the multiplicity \(m\) depend on the model, and Bayesian inference is significantly different from maximum likelihood or maximum a posteriori estimation. These situations can occur in both CBM and Multitask.
Determining RLCTs is important for estimating the sufficient sample size, constructing learning procedures, and selecting models. In fact, the RLCTs of many singular models have been studied: mixture models [63, 65, 47, 37, 60], Boltzmann machines [67, 4, 5], non-negative matrix factorization [22, 21, 18], latent class analysis [13], latent Dirichlet allocation [23, 19], naive Bayesian networks [46], Bayesian networks [64],
Markov models [69], hidden Markov models [66], linear dynamical systems [42], Gaussian latent tree and forest models [14], and three-layered neural networks whose activation function is linear [6], analytic-odd (like tanh) [54], or Swish [52]. Additionally, a model selection method called sBIC, which uses the RLCTs of statistical models, was proposed by Drton and Plummer [15]. Furthermore, Drton and Imai empirically demonstrated that sBIC is more precise than WBIC in terms of selecting the correct model when the RLCTs are precisely clarified or tight bounds on them are available [15, 14, 26]. In addition, Imai proposed a method for estimating RLCTs from the data and the model and extended sBIC [26]. Another application of RLCTs is a design procedure for the exchange probability in the exchange Monte Carlo method, proposed by Nagata [41].
The RLCT of a neural network without the concept bottleneck structure, called Standard in [28], is exactly clarified in [6] in the case of a three-layered linear neural network. However, the RLCTs of CBM and Multitask remain unknown. In other words, even if the model structure is three-layered linear, the Bayesian generalization errors and marginal likelihoods of CBM and Multitask have not yet been clarified. Furthermore, since we treat models in the interpretable machine learning field, our result suggests that singular learning theory is useful for establishing a foundation of responsible artificial/computational intelligence. This is because interpretation methods can often be regarded as restrictions of the parameter space of the model. Parameter constraints may change whether the model is regular or singular [16]. There are perspectives focusing on parameter restriction to analyze singular statistical models [16, 20]. For example, in non-negative matrix factorization (NMF) [43, 33, 9], the parameters are the elements of the factorized matrices, and they are restricted to non-negative regions to improve the interpretability of the factorization result, such as purchase factors of customers and degrees of interest in each product obtained from item-user tables [29, 30]. If they could be negative, then owing to the cancellation of positive and negative elements, estimating the popularity of products and the potential demand of customers would become difficult. The Bayesian generalization error and the free energy of NMF behave differently from those of non-restricted matrix factorization [22, 21, 18, 6]. Therefore, restricting the parameter space on account of interpretation methods essentially affects the learning behavior of the model, and this can be analyzed by studying singular learning theory.
In this study, we mathematically derive the exact asymptotic forms of the Bayesian generalization error and the free energy by finding the RLCT of the neural network with CBM when the structure is three-layered linear. We also clarify the RLCT of Multitask in that case and compare the theoretical behavior of CBM with those of Multitask and Standard. The rest of this paper is organized as follows. In section 2, we describe the framework of Bayesian inference and how to validate the model when the data-generating distribution is not known. In section 3, we state the Main Theorems. In section 4, we extend the Main Theorems to categorical variables. In section 5, we discuss this theoretical result. In section 6, we conclude this paper. Besides, there are two appendices. In A, we explain the mathematical theory of Bayesian inference when the data-generating distribution is unknown. This theory is the foundation of our study and is called singular learning theory. In B, we prove the Main Theorems, their extended results, and a proposition for comparing the RLCTs of CBM and Multitask.
## 2 Framework of Bayesian Inference
Let \(X^{n}=(X_{1},\ldots,X_{n})\) be a collection of \(n\) random variables independently and identically distributed according to a data-generating distribution. Each \(X_{i}\) takes values in \(\mathcal{X}\), a subset of a finite-dimensional real Euclidean or discrete space. In this article, the collection \(X^{n}\) is called the dataset or the sample and its element \(X_{i}\) is called the (\(i\)-th) data point. Besides, let \(q:\mathcal{X}\to\mathbb{R}\), \(x\mapsto q(x)\), \(p(\cdot|w):\mathcal{X}\to\mathbb{R}\), \(x\mapsto p(x|w)\), and \(\varphi:\mathcal{W}\to\mathbb{R}\), \(w\mapsto\varphi(w)\) be the probability densities of the data-generating distribution, a statistical model, and a prior distribution, respectively. Note that the parameter \(w\) and its set \(\mathcal{W}\) are defined as in the above section.
We define a posterior distribution as the distribution whose density is the following function on \(\mathcal{W}\):
\[\varphi^{*}(w|X^{n})=\frac{1}{Z_{n}}\varphi(w)\prod_{i=1}^{n}p(X_{i}|w), \tag{3}\]
where \(Z_{n}\) is a normalizing constant used to satisfy the condition \(\int\varphi^{*}(w|X^{n})dw=1\):
\[Z_{n}=\int dw\varphi(w)\prod_{i=1}^{n}p(X_{i}|w). \tag{4}\]
This is called a marginal likelihood or a partition function. Its negative log value is called the free energy \(F_{n}=-\log Z_{n}\). Note that the marginal likelihood is a probability density function of the dataset. The free energy appears in the leading term of the difference between the data-generating distribution and the model in the sense of the dataset-generating process. Furthermore, a predictive distribution is defined by the following density function on \(\mathcal{X}\):
\[p^{*}(x|X^{n})=\int dw\varphi^{*}(w|X^{n})p(x|w). \tag{5}\]
This is the probability distribution of a new data point. It is also important in statistics and machine learning to evaluate the dissimilarity between the true distribution and the model in the sense of the new-data-generating process.
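As a toy illustration of Eqs. (3)-(5) (a hypothetical one-dimensional Gaussian model, not the models analyzed in this paper), the marginal likelihood, the free energy, and the posterior predictive density can be approximated by naive Monte Carlo over the prior:

```python
# Minimal NumPy sketch of Eqs. (3)-(5) on a toy model p(x|w) = N(w, 1).
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(0.5, 1.0, size=50)        # dataset from q(x) = N(0.5, 1)
w = rng.normal(0.0, 1.0, size=10_000)    # draws from the prior phi(w) = N(0, 1)

# log likelihood of each prior draw
ll = -0.5 * np.sum((x[None, :] - w[:, None]) ** 2 + np.log(2 * np.pi), axis=1)

log_Zn = np.log(np.mean(np.exp(ll - ll.max()))) + ll.max()   # Eq. (4), stabilized
F_n = -log_Zn                                                # free energy

weights = np.exp(ll - log_Zn) / len(w)   # self-normalized posterior weights
x_new = 0.0
p_pred = np.sum(weights * np.exp(-0.5 * (x_new - w) ** 2) / np.sqrt(2 * np.pi))  # Eq. (5)
```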
Here, we explain the evaluation criteria for Bayesian inference. The KL divergence from the data-generating distribution to the statistical model is denoted by
\[K(w)=\int dxq(x)\log\frac{q(x)}{p(x|w)}. \tag{6}\]
As technical assumptions, we suppose the parameter set \(\mathcal{W}\subset\mathbb{R}^{d}\) is sufficiently wide and compact and the prior is positive and bounded on
\[K^{-1}(0):=\{w\in\mathcal{W}\mid K(w)=0\}, \tag{7}\]
i.e. \(0<\varphi(w)<\infty\) for any \(w\in K^{-1}(0)\). In addition, we assume that \(\varphi(w)\) is a \(C^{\infty}\)-function on \(\mathcal{W}\) and \(K(w)\) is an analytic function on \(\mathcal{W}\). The entropy of \(q(x)\) and its empirical counterpart are denoted by
\[S =-\int dxq(x)\log q(x), \tag{8}\] \[S_{n} =-\frac{1}{n}\sum_{i=1}^{n}\log q(X_{i}). \tag{9}\]
By definition, \(X_{i}\sim q(x)\) and \(X^{n}\sim\prod_{i=1}^{n}q(x_{i})\) hold; thus, let \(\mathbb{E}_{n}[\cdot]\) be the expectation operator over the dataset, defined by
\[\mathbb{E}_{n}[\cdot]=\int dx^{n}\prod_{i=1}^{n}q(x_{i})[\cdot], \tag{10}\]
where \(dx^{n}=dx_{1}\ldots dx_{n}\). Then, we have the following KL divergence
\[\int dx^{n}\prod_{i=1}^{n}q(x_{i})\log\frac{\prod_{i=1}^{n}q(x_{i })}{Z_{n}} =-\mathbb{E}_{n}\left[nS_{n}\right]-\mathbb{E}_{n}[\log Z_{n}] \tag{11}\] \[=-nS+\mathbb{E}_{n}[F_{n}], \tag{12}\]
where we used \(\mathbb{E}_{n}[S_{n}]=S\). The expected free energy is the only term that depends on the model and the prior. For this reason, the free energy is used as a criterion to select the model. On the other hand, the Bayesian generalization error \(G_{n}\) is defined by the KL divergence between the data-generating distribution and the predictive one:
\[G_{n}=\int dxq(x)\log\frac{q(x)}{p^{*}(x|X^{n})}. \tag{13}\]
Here, Bayesian inference is defined as inferring that the data-generating distribution may be the predictive one. For an arbitrary finite \(n\), by the definitions of the marginal likelihood (4) and the predictive distribution
(5), we have
\[p^{*}(X_{n+1}|X^{n}) =\frac{1}{Z_{n}}\int dw\varphi(w)\prod_{i=1}^{n}p(X_{i}|w)p(X_{n+1}|w) \tag{14}\] \[=\frac{1}{Z_{n}}\int dw\varphi(w)\prod_{i=1}^{n+1}p(X_{i}|w)\] (15) \[=\frac{Z_{n+1}}{Z_{n}}. \tag{16}\]
Considering expected negative log values of both sides, according to [55], we get
\[\mathbb{E}_{n+1}[-\log p^{*}(X_{n+1}|X^{n})] =\mathbb{E}_{n+1}[-\log Z_{n+1}-(-\log Z_{n})] \tag{17}\] \[\mathbb{E}_{n}[G_{n}]+S =\mathbb{E}_{n+1}[F_{n+1}]-\mathbb{E}_{n}[F_{n}]. \tag{18}\]
Hence, \(G_{n}\) and \(F_{n}\) are important random variables in Bayesian inference when the data-generating process is unknown. The situation wherein \(q(x)\) is unknown is considered generic [38, 59]. Moreover, in general, the model can be singular when it has a hierarchical structure or latent variables [55, 58]. We therefore investigate how they asymptotically behave in the case of CBM and Multitask. To theoretically treat the case when the model is singular, resolution of singularities from algebraic geometry is needed. A brief introduction to this theory is given in A.
## 3 Main Theorems
In this section, we state the Main Theorems: the exact values of the RLCTs of CBM and Multitask. Let \(n\) be the sample size, \(N\) be the input dimension, \(K\) be the number of concepts, and \(M\) be the output dimension, respectively. For simplicity, we consider CBM for the regression case with real-valued concepts: the input data \(x\) is an \(N\)-dimensional vector, the concept \(c\) is a \(K\)-dimensional vector, and the output data \(y\) is an \(M\)-dimensional vector. The case in which \(c\) or \(y\) includes categorical variables is considered in section 4. Let \(A=(a_{ik})_{i=1,k=1}^{M,K}\) and \(B=(b_{kj})_{k=1,j=1}^{K,N}\) be \(M\times K\) and \(K\times N\) matrices, respectively. They are the connection weights of CBM: \(y=ABx\) and \(c=Bx\). Similar to CBM, we consider Multitask. Let \(H\) be the number of units in the intermediate layer of this model. Matrices \(U=(u_{ik})_{i=1,k=1}^{M+K,H}\) and \(V=(v_{kj})_{k=1,j=1}^{H,N}\) denote the connection weights of Multitask: \([y;c]=UVx\), where \([y;c]\) is the \((M+K)\)-dimensional vector constructed by concatenating \(y\) and \(c\) as a column vector, i.e. putting \(z=[y;c]\), we have
\[z=(z_{h})_{h=1}^{M+K},\ z_{h}=\begin{cases}y_{h}&1\leqq h\leqq M,\\ c_{h-M+1}&M+1\leqq h\leqq M+K.\end{cases} \tag{19}\]
For the notation, see also Table 1. Below, we apply the \([\cdot;\cdot]\) operator to matrices and vectors to vertically concatenate them in the same way as above.
We define the RLCT of CBM and that of Multitask below. We consider neural networks in the case where they are three-layered and linear. First, we state the model structures.
**Definition 3.1** (Cbm).: _Let \(q_{1}(y,c|x)\) and \(p_{1}(y,c|A,B,x)\) be conditional probability density functions of \((y,c)\in\mathbb{R}^{M}\times\mathbb{R}^{K}\) given \(x\in\mathbb{R}^{N}\) as the followings:_
\[q_{1}(y,c|x) =p_{1}(y,c|A_{0},B_{0},x), \tag{20}\] \[p_{1}(y,c|A,B,x) \propto\exp\left(-\frac{1}{2}\|y-ABx\|^{2}\right)\exp\left(-\frac{ \gamma}{2}\|c-Bx\|^{2}\right), \tag{21}\]
_where \(A_{0}=(a_{ik}^{0})_{i=1,k=1}^{M,K}\) and \(B_{0}=(b_{kj}^{0})_{k=1,j=1}^{K,N}\) are the true parameters and \(\gamma>0\) is a positive constant controlling the task-explanation tradeoff [28]. The prior density function is denoted by \(\varphi_{1}(A,B)\). The data-generating distribution of CBM and its statistical model are defined by \(q_{1}(y,c|x)\) and \(p_{1}(y,c|A,B,x)\), respectively._
These distributions are based on the loss function of Joint CBM, which provides the highest classification performance [28]. Other types of CBM are discussed in section 5. This loss is defined by a linear combination of the loss between \(y\) and \(ABx\) and that between \(c\) and \(Bx\). For regression, these losses are squared Euclidean distances, and their linear combination is equivalent to the negative log-likelihood \(\left(-\log\prod_{l=1}^{n}p_{1}(y^{l},c^{l}|A,B,x^{l})\right)\), where the data is \((x^{l},c^{l},y^{l})_{l=1}^{n}\in(\mathbb{R}^{N}\times\mathbb{R}^{K}\times \mathbb{R}^{M})^{n}\). Since CBM assumes that the concepts are observable, \(c\) is subject to the data-generating distribution, and this causes the number of columns of \(A_{0}\) and rows of \(B_{0}\) to equal \(K\): the number of concepts.
Set a density \(p_{12}(y|A,B,x)\propto\exp\left(-\frac{1}{2}\|y-ABx\|^{2}\right)\). Then, the statistical model of Standard is \(p_{12}(y|A,B,x)\) and the data-generating distribution of Standard can be represented as \(q_{12}(y|x):=p_{12}(y|A_{0},B_{0},x)\); however, when considering only Standard, the rank of \(A_{0}B_{0}\) might be smaller than \(K\), i.e. there exists a pair of matrices \((A_{0}^{\prime},B_{0}^{\prime})\) such that \(A_{0}B_{0}=A_{0}^{\prime}B_{0}^{\prime}\) and \(\text{rank}A_{0}^{\prime}B_{0}^{\prime}<K\). In other words, if we cannot observe the concepts, then a model selection problem arises: how should we design the number of middle-layer units in order to find the data-generating distribution or a predictive distribution that realizes high generalization performance? This problem also appears in Multitask when the number of middle-layer units exceeds the true rank (i.e. the true number of those units): \(H>H_{0}\). The distributions of Multitask are defined below.
**Definition 3.2** (Multitask).: _Put \(z=[y;c]\). Let \(q_{2}(z|x)\) and \(p_{2}(z|U,V,x)\) be conditional probability density functions of \(z\in\mathbb{R}^{M+K}\) given \(x\in\mathbb{R}^{N}\) as below:_
\[q_{2}(z|x) =p_{2}(z|U_{0},V_{0},x), \tag{22}\] \[p_{2}(z|U,V,x) \propto\exp\left(-\frac{1}{2}\|z-UVx\|^{2}\right), \tag{23}\]
_where \(U_{0}=(u_{ik}^{0})_{i=1,k=1}^{M+K,H_{0}}\) and \(V_{0}=(v_{kj}^{0})_{k=1,j=1}^{H_{0},N}\) are the true parameters and \(H_{0}\) is the rank of \(U_{0}V_{0}\). The prior density function is denoted by \(\varphi_{2}(U,V)\). The data-generating distribution of Multitask and its statistical model are defined by \(q_{2}(z|x)\) and \(p_{2}(z|U,V,x)\), respectively._
The data-generating distribution and the statistical model of Standard are defined by \(q_{2}(y|x)\) and \(p_{2}(y|U,V,x)\) of Multitask when \(K=0\).
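A minimal sketch of the data-generating processes in Definitions 3.1 and 3.2 is given below; the dimensions and true weights are illustrative only.

```python
# Minimal sketch of synthetic data from the CBM and Multitask models.
import numpy as np

rng = np.random.default_rng(1)
N, K, M, n, gamma = 4, 3, 2, 100, 1.0
A0 = rng.normal(size=(M, K))     # true weights from concepts c to output y
B0 = rng.normal(size=(K, N))     # true weights from input x to concepts c

x = rng.normal(size=(n, N))
y = x @ (A0 @ B0).T + rng.normal(size=(n, M))                        # Eq. (21)
c = x @ B0.T + rng.normal(scale=1.0 / np.sqrt(gamma), size=(n, K))   # Eq. (21)
z = np.concatenate([y, c], axis=1)   # Multitask target z = [y; c], Eq. (19)
```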
We visualize CBM and Multitask as graphical models in Figures 1(a) and 1(b). In CBM (especially Joint CBM), the concepts are inserted as observations between the last intermediate layer and the output layer, but the connection weights \(A\) from \(c\) to \(y\) are learned based on the relationship \(y=ABx\). Concept insertion is represented by the other part of the model: \(c=Bx\). In Multitask, however, the concepts are concatenated to the output and the connection weights \((U,V)\) are trained as \([y;c]=UVx\).
\begin{table}
\begin{tabular}{|c|c|c|} \hline Variable & Description & Index \\ \hline \hline \(b_{j}=(b_{kj})\in\mathbb{R}^{K}\) & connection weights from \(x\) to \(c\) & for \(k=1,\ldots,K\) \\ \(a_{k}=(a_{ik})\in\mathbb{R}^{M}\) & connection weights from \(c\) to \(y\) & for \(i=1,\ldots,M\) \\ \hline \(v_{j}=(v_{kj})\in\mathbb{R}^{H}\) & connection weights from \(x\) to the middle layer & for \(k=1,\ldots,H\) \\ \(u_{k}=(u_{ik})\in\mathbb{R}^{M+K}\) & connection weights from the middle layer to \(z\) & for \(i=1,\ldots,M+K\) \\ \hline \(x=(x_{j})\in\mathbb{R}^{N}\) & \(j\)-th input is \(x_{j}\) & for \(j=1,\ldots,N\) \\ \(c=(c_{k})\in\mathbb{R}^{K}\) & \(k\)-th concept is \(c_{k}\) & for \(k=1,\ldots,K\) \\ \(y=(y_{i})\in\mathbb{R}^{M}\) & \(i\)-th output is \(y_{i}\) & for \(i=1,\ldots,M\) \\ \(z=(z_{h})\in\mathbb{R}^{M+K}\) & \(h\)-th output of Multitask is \(z_{h}\) & for \(h=1,\ldots,M+K\) \\ \hline \(*_{0}\) and \(*^{0}\) & optimal or true variable corresponding to \(*\) & - \\ \hline \end{tabular}
\end{table}
Table 1: Description of the Variables
We define the RLCTs of CBM and Multitask as follows.
**Definition 3.3** (RLCT of CBM).: _Let \(K_{1}(A,B)\) be the KL divergence between \(q_{1}\) and \(p_{1}\):_
\[K_{1}(A,B)=\iiint dydcdxq^{\prime}(x)q_{1}(y,c|x)\log\frac{q_{1}(y,c|x)}{p_{1}(y,c|A,B,x)}, \tag{24}\]
_where \(q^{\prime}(x)\) is the data-generating distribution of the input. \(q^{\prime}(x)\) is not observed and is assumed to be positive and bounded. Assume that \(\varphi_{1}(A,B)>0\) is positive and bounded on \(K_{1}^{-1}(0)\ni(A_{0},B_{0})\). Then, the zeta function of learning theory in CBM is the holomorphic function of a univariate complex variable \(z\)\((\mathrm{Re}(z)>0)\)_
\[\zeta_{1}(z)=\iint K_{1}(A,B)^{z}dAdB \tag{25}\]
_and it can be analytically continued to a unique meromorphic function on the entire complex plane \(\mathbb{C}\) and all of its poles are negative rational numbers. The RLCT of CBM is defined by \(\lambda_{1}\), where the largest pole of \(\zeta_{1}(z)\) is \((-\lambda_{1})\). Its multiplicity \(m_{1}\) is defined as the order of the maximum pole._
**Definition 3.4** (RLCT of Multitask).: _Let \(K_{2}(U,V)\) be the KL divergence between \(q_{2}\) and \(p_{2}\):_
\[K_{2}(U,V)=\iiint dydcdxq^{\prime}(x)q_{2}(y,c|x)\log\frac{q_{2}(y,c|x)}{p_{2}( y,c|U,V,x)}, \tag{26}\]
_where \(q^{\prime}(x)\) is the same as in Definition 3.3. Assume that \(\varphi_{2}(U,V)>0\) is positive and bounded on \(K_{2}^{-1}(0)\ni(U_{0},V_{0})\). As in the case of CBM, the zeta function of learning theory in Multitask is the following holomorphic function of \(z\)\((\mathrm{Re}(z)>0)\)_
\[\zeta_{2}(z)=\iint K_{2}(U,V)^{z}dUdV \tag{27}\]

_and the RLCT of Multitask is defined by \(\lambda_{2}\), where the largest pole of \(\zeta_{2}(z)\) is \((-\lambda_{2})\) and its multiplicity \(m_{2}\) is defined as the order of the maximum pole._

Figure 1: (a) This figure shows the graphical model of CBM (in particular, Joint CBM) when the neural network is three-layered linear. Squares \((x,c,y)\) are observed data and circles \((A,B)\) are learnable parameters (connection weights of the linear neural network). Arrows mean that we set the conditional probability model from the left variable to the right one: \(p_{1}(y,c|A,B,x)=p_{12}(y|A,B,x)p_{11}(c|x,B)\), where \(p_{12}(y|A,B,x)\) is the statistical model of Standard and \(p_{11}(c|x,B)\) is a density function which satisfies \(p_{11}(c|B,x)\propto\exp\left(-\frac{\gamma}{2}\|c-Bx\|^{2}\right)\).

(b) This figure shows the graphical model of Multitask in the case the neural network is three-layered linear. As above, squares \((x,c,y)\) are observations, circles \((U,V)\) are learnable weights, and arrows correspond to conditional probability models. For ease of comparison with CBM, we draw \(c\) and \(y\) as separate squares; in fact, however, they are treated as a single output since they are concatenated into one vector: \(p_{2}(z|U,V,x)\), where \(z=[y;c]\).
Put an \(N\times N\) matrix \(\mathscr{X}=\left(\int x_{i}x_{j}q^{\prime}(x)dx\right)_{i=1,j=1}^{N,N}.\) Then, our main results are the following theorems.
**Theorem 3.1** (Main Theorem 1).: _Suppose \(\mathscr{X}\) is positive definite and \((A,B)\) is in a compact set. CBM is a regular statistical model in the case the network architecture is three-layered linear. Therefore, by using the input dimension \(N\), the number of concepts \(K\), and the output dimension \(M\), the RLCT of CBM \(\lambda_{1}\) and its multiplicity \(m_{1}\) are as follows:_
\[\lambda_{1}=\frac{1}{2}(M+N)K,\ m_{1}=1. \tag{28}\]
**Theorem 3.2** (Main Theorem 2).: _Suppose \(\mathscr{X}\) is positive definite and \((U,V)\) is in a compact set. Let \(\lambda_{2}\) be the RLCT of Multitask formulation of a three-layered linear neural network and \(m_{2}\) be its multiplicity. By using the input dimension \(N\), the number of concepts \(K\) and intermediate units \(H\), the output dimension \(M\), and the true rank \(H_{0}\), \(\lambda_{2}\) and \(m_{2}\) can be represented as below:_
1. _In the case of_ \(M+K+H_{0}\leqq N+H\) _and_ \(N+H_{0}\leqq M+K+H\) _and_ \(H+H_{0}\leqq N+M+K\)_,_
    1. _and if_ \(N+M+K+H+H_{0}\) _is even, then_ \[\lambda_{2}=\frac{1}{8}\{2(H+H_{0})(N+M+K)-(N-M-K)^{2}-(H+H_{0})^{2}\},\ m_{2}=1.\]
    2. _and if_ \(N+M+K+H+H_{0}\) _is odd, then_ \[\lambda_{2}=\frac{1}{8}\{2(H+H_{0})(N+M+K)-(N-M-K)^{2}-(H+H_{0})^{2}+1\},\ m_{2}=2.\]
2. _In the case of_ \(N+H<M+K+H_{0}\)_, then_ \[\lambda_{2}=\frac{1}{2}\{HN+H_{0}(M+K-H)\},\ m_{2}=1.\]
3. _In the case of_ \(M+K+H<N+H_{0}\)_, then_ \[\lambda_{2}=\frac{1}{2}\{H(M+K)+H_{0}(N-H)\},\ m_{2}=1.\]
4. _Otherwise (i.e._ \(N+M+K<H+H_{0}\)_), then_ \[\lambda_{2}=\frac{1}{2}N(M+K),\ m_{2}=1.\]
These theorems yield the exact asymptotic forms of the expected Bayesian generalization error and the free energy following Eqs. (1) and (2). Their proofs are given in Appendix B.
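To make the case analysis concrete, the following Python sketch (our own illustration, not code from this paper) evaluates \(\lambda_{1}\) and \(\lambda_{2}\) for given sizes; by Eq. (1), dividing by \(n\) gives the leading term of the expected Bayesian generalization error.

```python
def rlct_cbm(N, K, M):
    """Theorem 3.1: CBM is regular, so lambda_1 = (M + N) * K / 2 and m_1 = 1."""
    return 0.5 * (M + N) * K

def rlct_multitask(N, H, M, K, H0):
    """Theorem 3.2: lambda_2 is Aoyagi's RLCT with output dimension M + K.
    The multiplicity is 1 except in the odd branch of case 1, where it is 2."""
    Mp = M + K  # effective output dimension of Multitask
    if N + H < Mp + H0:                       # case 2
        return 0.5 * (H * N + H0 * (Mp - H))
    if Mp + H < N + H0:                       # case 3
        return 0.5 * (H * Mp + H0 * (N - H))
    if N + Mp < H + H0:                       # case 4
        return 0.5 * N * Mp
    # case 1: all three inequalities of the first branch hold
    s = 2 * (H + H0) * (N + Mp) - (N - Mp) ** 2 - (H + H0) ** 2
    if (N + Mp + H + H0) % 2 == 1:            # odd case adds 1 (then m_2 = 2)
        s += 1
    return s / 8.0

# Hypothetical sizes: lambda_1 = 27.5 > lambda_2 = 14.5, so Multitask has the
# smaller leading generalization-error term in this configuration.
print(rlct_cbm(N=1, K=5, M=10), rlct_multitask(N=1, H=8, M=10, K=5, H0=3))
```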
Theorem 3.1 shows that the concept bottleneck structure makes the neural network regular if the network architecture is three-layered linear. We present a sketch of the proof below. Theorem 3.2 can be proved immediately using the RLCT of a three-layered neural network [6], since in the model \(p_{2}(z|U,V,x)\), the input dimension, number of middle layer units, output dimension, and rank of the product of true parameter matrices are \(N\), \(H\), \(M+K\), and \(H_{0}\), respectively.
Sketch of Proof of Theorem 3.1.: Let \(\sim\) be the binary relation indicating that the RLCTs and multiplicities of both sides are equal. The KL divergence from \(q_{1}\) to \(p_{1}\) can be developed as
\[K_{1}(A,B) \propto\iiint dydcdxq^{\prime}(x)q_{1}(y,c|x)\left(-\|y-A_{0}B_{0}x\|^ {2}-\gamma\|c-B_{0}x\|^{2}\right. \tag{30}\] \[\left.+\|y-ABx\|^{2}+\gamma\|c-Bx\|^{2}\right)\] \[\sim\|AB-A_{0}B_{0}\|^{2}+\|B-B_{0}\|^{2}. \tag{31}\]
To calculate \(\lambda_{1}\) and \(m_{1}\), we find \(K_{1}^{-1}(0)\). Setting \(\|AB-A_{0}B_{0}\|^{2}+\|B-B_{0}\|^{2}=0\), we have \((A,B)=(A_{0},B_{0})\). This means that \(K_{1}^{-1}(0)\) is a one-point set; thus, CBM in the three-layered linear case is regular.
## 4 Expansion of Main Theorems
We defined the RLCT of CBM under the assumption that the observation noise is Gaussian (cf. Definitions 3.1 and 3.3). This corresponds to a regression from X-ray images to arthritis grades in the original CBM study [28]. However, we can more generally treat CBM as a classifier and concepts as categorical variables. For example, in [28], Koh et al. demonstrated a bird species classification task with bird-attribute concepts. Summarizing, we have the following four cases:
1. Both \(p_{12}(y|A,B,x)\) and \(p_{11}(c|B,x)\) are Gaussian (regression task with real number concepts).
2. \(p_{12}(y|A,B,x)\) is Gaussian and \(p_{11}(c|B,x)\) is Bernoulli (regression task with categorical concepts).
3. \(p_{12}(y|A,B,x)\) is categorical and \(p_{11}(c|B,x)\) is Gaussian (classification task with real number concepts).
4. \(p_{12}(y|A,B,x)\) is categorical and \(p_{11}(c|B,x)\) is Bernoulli (classification task with categorical concepts).
Note that concepts are not exclusive; thus, the distribution \(p_{11}(c|B,x)\) must be Bernoulli (not categorical). We prove that a result similar to that of Theorem 3.1 holds in the above cases. Before expanding our Main Theorems, we first define the sigmoid and softmax functions. Let \(\sigma_{K^{\prime}}:\mathbb{R}^{K^{\prime}}\rightarrow[0,1]^{K^{\prime}}\) be a \(K^{\prime}\)-dimensional multivariate sigmoid function and \(s_{M^{\prime}}:\mathbb{R}^{M^{\prime}}\rightarrow\Delta_{M^{\prime}}\) be an \(M^{\prime}\)-dimensional softmax function, respectively:
\[\sigma_{K^{\prime}}(u) =\left(\frac{1}{1+\exp(-u_{j})}\right)_{j=1}^{K^{\prime}},\ u\in \mathbb{R}^{K^{\prime}}, \tag{32}\] \[s_{M^{\prime}}(w) =\left(\frac{\exp(w_{j})}{\sum_{j=1}^{M^{\prime}}\exp(w_{j})} \right)_{j=1}^{M^{\prime}},\ w\in\mathbb{R}^{M^{\prime}}, \tag{33}\]
where \(\Delta_{M^{\prime}}\) is an \(M^{\prime}\)-dimensional simplex. Then, we can define each distribution as follows:
\[p_{12}^{1}(y|A,B,x) \propto\exp\left(-\frac{1}{2}\|y-ABx\|^{2}\right), \tag{34}\] \[p_{12}^{2}(y|A,B,x) =\prod_{j=1}^{M}(s_{M}(ABx))_{j}^{y_{j}},\] (35) \[p_{11}^{1}(c|B,x) \propto\exp\left(-\frac{\gamma}{2}\|c-Bx\|^{2}\right),\] (36) \[p_{11}^{2}(c|B,x) \propto\left(\prod_{k=1}^{K}(\sigma_{K}(Bx))_{k}^{c_{k}}(1-( \sigma_{K}(Bx))_{k})^{1-c_{k}}\right)^{\gamma}, \tag{37}\]
where \((s_{M}(ABx))_{j}\) and \((\sigma_{K}(Bx))_{k}\) are the \(j\)-th and \(k\)-th elements of \(s_{M}(ABx)\) and \(\sigma_{K}(Bx)\), respectively. The data-generating distributions are denoted by \(q_{12}^{1}(y|x)=p_{12}^{1}(y|A_{0},B_{0},x)\), \(q_{12}^{2}(y|x)=p_{12}^{2}(y|A_{0},B_{0},x)\), \(q_{11}^{1}(c|x)=p_{11}^{1}(c|B_{0},x)\), and \(q_{11}^{2}(c|x)=p_{11}^{2}(c|B_{0},x)\), respectively. With these in place, the semantics of the indices in the density functions \(p_{kl}^{h}\) are as follows. The superscript \(h\in\{1,2\}\) denotes the type of the response variable: real or categorical. For a double subscript \(kl\), \(k\in\{1,2\}\) denotes the model (CBM or Multitask) and \(l\in\{1,2\}\) the response variable (\(c\) or \(y\)). For a double superscript \(ij\), as used in Theorems 4.1 and 4.2, \(i\) and \(j\) refer to the response variables \(y\) and \(c\), respectively. Then, Theorem 3.1 can be expanded as follows:
**Theorem 4.1** (Expansion of Theorem 3.1).: _Let_
\[p_{1}^{ij}(y,c|A,B,x) =p_{12}^{i}(y|A,B,x)p_{11}^{j}(c|B,x),\ i=1,2,\ j=1,2, \tag{38}\] \[q_{1}^{ij}(y,c|x) =p_{1}^{ij}(y,c|A_{0},B_{0},x),\ i=1,2,\ j=1,2. \tag{39}\]
_If we write the KL divergences as_
\[K_{1}^{ij}(A,B)=\iiint dydcdxq^{\prime}(x)q_{1}^{ij}(y,c|x)\log \frac{q_{1}^{ij}(y,c|x)}{p_{1}^{ij}(y,c|A,B,x)},\ i=1,2,\ j=1,2, \tag{40}\]
_where \(q^{\prime}(x)\) is the input-generating distribution (the same as in Definition 3.3). Assume that \(\mathscr{X}\) is positive definite and \((A,B)\) is in a compact set. Then, the maximum pole \((-\lambda_{1}^{ij})\) and its order \(m_{1}^{ij}\) of the zeta function_
\[\zeta_{1}^{ij}(z)=\iint K_{1}^{ij}(A,B)^{z}dAdB \tag{41}\]
_are as follows: for \(i=1,2\) and \(j=1,2\),_
\[\lambda_{1}^{1j} =\frac{1}{2}(M+N)K, \tag{42}\] \[\lambda_{1}^{2j} =\frac{1}{2}(M+N-1)K,\] (43) \[m_{1}^{ij} =1. \tag{44}\]
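For concreteness, here is a minimal numpy sketch of the non-Gaussian densities (35) and (37) entering Theorem 4.1 (the Gaussian cases (34) and (36) are ordinary least-squares terms; function names are ours):

```python
import numpy as np

def log_p12_categorical(y, A, B, x):
    """Log of Eq. (35): categorical output likelihood; y is a one-hot vector."""
    w = A @ (B @ x)
    log_softmax = w - (w.max() + np.log(np.exp(w - w.max()).sum()))
    return float(y @ log_softmax)

def log_p11_bernoulli(c, B, x, gamma):
    """Log of Eq. (37): Bernoulli concept likelihood tempered by gamma; c in {0,1}^K."""
    u = B @ x
    # log sigma(u) = -log(1 + exp(-u)); log(1 - sigma(u)) = -log(1 + exp(u))
    ll = -(c @ np.log1p(np.exp(-u))) - ((1 - c) @ np.log1p(np.exp(u)))
    return gamma * float(ll)
```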
Moreover, we expand our Main Theorem 3.2 for Multitask. In general, Multitask also has the same two task and concept types as above. The dimension of \(UVx\) is \(M+K\); hence, we can decompose it into the former \(M\)-dimensional part and the remaining \(K\)-dimensional part, in the same way as \(z=[y;c]\). We define this decomposition as \(UVx=[(UVx)^{\mathrm{y}};(UVx)^{\mathrm{c}}]\), where \((UVx)^{\mathrm{y}}=((UVx)_{h})_{h=1}^{M}\) and \((UVx)^{\mathrm{c}}=((UVx)_{h})_{h=M+1}^{M+K}\). Since one can easily show \(\|z-UVx\|^{2}=\|y-(UVx)^{\mathrm{y}}\|^{2}+\|c-(UVx)^{\mathrm{c}}\|^{2}\), the Multitask model \(p_{2}(z|U,V,x)\) can be decomposed as
\[p_{2}(z|U,V,x)=p_{22}(y|U,V,x)p_{21}(c|U,V,x), \tag{45}\]
where
\[p_{22}(y|U,V,x) \propto\exp\left(-\frac{1}{2}\|y-(UVx)^{\mathrm{y}}\|^{2}\right), \tag{46}\] \[p_{21}(c|U,V,x) \propto\exp\left(-\frac{1}{2}\|c-(UVx)^{\mathrm{c}}\|^{2}\right). \tag{47}\]
Similar to the case of CBM, we define each distribution as follows:
\[p_{22}^{1}(y|U,V,x) \propto\exp\left(-\frac{1}{2}\|y-(UVx)^{\mathrm{y}}\|^{2}\right), \tag{48}\] \[p_{22}^{2}(y|U,V,x) =\prod_{j=1}^{M}(s_{M}((UVx)^{\mathrm{y}}))_{j}^{y_{j}},\] (49) \[p_{21}^{1}(c|U,V,x) \propto\exp\left(-\frac{1}{2}\|c-(UVx)^{\mathrm{c}}\|^{2}\right),\] (50) \[p_{21}^{2}(c|U,V,x) =\prod_{k=1}^{K}(\sigma_{K}((UVx)^{\mathrm{c}}))_{k}^{c_{k}}(1-(\sigma_{K}((UVx)^{\mathrm{c}}))_{k})^{1-c_{k}}. \tag{51}\]
The data-generating distributions are denoted by \(q_{22}^{1}(y|x)=p_{22}^{1}(y|U_{0},V_{0},x)\), \(q_{22}^{2}(y|x)=p_{22}^{2}(y|U_{0},V_{0},x)\), \(q_{21}^{1}(c|x)=p_{21}^{1}(c|U_{0},V_{0},x)\), and \(q_{21}^{2}(c|x)=p_{21}^{2}(c|U_{0},V_{0},x)\), respectively. Then Theorem 3.2 can be expanded as follows:
**Theorem 4.2** (Expansion of Theorem 3.2).: _As in Theorem 4.1, the models and data-generating distributions can be expressed as_
\[p_{2}^{ij}(z|U,V,x) =p_{22}^{i}(y|U,V,x)p_{21}^{j}(c|U,V,x),\ i=1,2,\ j=1,2, \tag{52}\] \[q_{2}^{ij}(z|x) =p_{2}^{ij}(y,c|U_{0},V_{0},x),\ i=1,2,\ j=1,2. \tag{53}\]
_Further, the KL divergences can be expressed as_
\[K_{2}^{ij}(U,V)=\iiint dydcdxq^{\prime}(x)q_{2}^{ij}(y,c|x)\log\frac{q_{2}^{ij} (y,c|x)}{p_{2}^{ij}(y,c|U,V,x)},\ i=1,2,\ j=1,2, \tag{54}\]
_where \(q^{\prime}(x)\) is the input-generating distribution (the same as in Definition 3.4). Assume that \(\mathscr{X}\) is positive definite and \((U,V)\) is in a compact set. \(\lambda_{2}\) and \(m_{2}\) denote the functions of \((N,H,M,K,H_{0})\) given in Theorem 3.2. Then, the maximum pole \((-\lambda_{2}^{ij})\) and its order \(m_{2}^{ij}\) of the zeta function_
\[\zeta_{2}^{ij}(z)=\iint K_{2}^{ij}(U,V)^{z}dUdV \tag{55}\]
_are as follows: for \(i=1,2\) and \(j=1,2\),_
\[\lambda_{2}^{1j} =\lambda_{2}(N,H,M,K,H_{0}), \tag{56}\] \[\lambda_{2}^{2j} =\lambda_{2}(N,H,M-1,K,H_{0}),\] (57) \[m_{2}^{1j} =m_{2}(N,H,M,K,H_{0}),\] (58) \[m_{2}^{2j} =m_{2}(N,H,M-1,K,H_{0}). \tag{59}\]
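In code, and reusing the helpers from the sketch after Theorem 3.2, the whole adjustment amounts to replacing \(M\) by \(M-1\) when the output is categorical; the concept type \(j\) leaves the RLCT unchanged (a sketch, names ours):

```python
def rlct_cbm_task(N, K, M, categorical_output=False):
    """Theorem 4.1: lambda_1^{1j} = (M+N)K/2 and lambda_1^{2j} = (M+N-1)K/2."""
    return rlct_cbm(N, K, M - 1 if categorical_output else M)

def rlct_multitask_task(N, H, M, K, H0, categorical_output=False):
    """Theorem 4.2: the same substitution M -> M-1 inside Theorem 3.2's formula."""
    return rlct_multitask(N, H, M - 1 if categorical_output else M, K, H0)
```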
We prove Theorems 4.1 and 4.2 in Appendix B. In addition, the above expanded theorems lead to the following corollaries, which consider the composed case in which the outputs or concepts consist of both real numbers and categorical variables. Let \(y^{\mathrm{r}}\) and \(y^{\mathrm{c}}\) be the \(M^{\mathrm{r}}\)-dimensional real vector and the \(M^{\mathrm{c}}\)-dimensional categorical variable, respectively. These are the observed output variables. Let \(c^{\mathrm{r}}\) and \(c^{\mathrm{c}}\) be the \(K^{\mathrm{r}}\)-dimensional real vector and the \(K^{\mathrm{c}}\)-dimensional categorical variable, respectively. They serve as concepts that describe the outputs from the \(N\)-dimensional inputs. In the same way as the definition of \(z=[y;c]\), put \(y=[y^{\mathrm{r}};y^{\mathrm{c}}]\) and \(c=[c^{\mathrm{r}};c^{\mathrm{c}}]\). Also, set \(M=M^{\mathrm{r}}+M^{\mathrm{c}}\) and \(K=K^{\mathrm{r}}+K^{\mathrm{c}}\), where \(M^{\mathrm{r}},M^{\mathrm{c}},K^{\mathrm{r}},K^{\mathrm{c}}\geqq 1\). Similarly, we have \(ABx=[(ABx)^{\mathrm{r}};(ABx)^{\mathrm{c}}]\) and \(Bx=[(Bx)^{\mathrm{r}};(Bx)^{\mathrm{c}}]\), where
\[(ABx)^{\mathrm{r}} =((ABx)_{h})_{h=1}^{M^{\mathrm{r}}},\ (ABx)^{\mathrm{c}}=((ABx)_{h})_{h=M^{ \mathrm{r}}+1}^{M}, \tag{60}\] \[(Bx)^{\mathrm{r}} =((Bx)_{h})_{h=1}^{K^{\mathrm{r}}},\ (Bx)^{\mathrm{c}}=((Bx)_{h})_{h=K^{ \mathrm{r}}+1}^{K}, \tag{61}\]
and \((ABx)_{h}\) and \((Bx)_{h}\) are the \(h\)-th entries of the corresponding vectors. Even if the outputs and concepts are composed of both real numbers and categorical variables, using Theorems 4.1 and 4.2, we can immediately derive the RLCT \(\lambda_{1}^{\mathrm{com}}\) and its multiplicity \(m_{1}^{\mathrm{com}}\) as follows:
**Corollary 4.1** (RLCT of CBM in Composed Case).: _Let \(p_{1}^{\mathrm{com}}(y,c|A,B,x)\) be the statistical model of CBM in the composed case and \(p_{12}^{\mathrm{com}}(y|A,B,x)\) and \(p_{11}^{\mathrm{com}}(c|B,x)\) be the following probability distributions:_
\[p_{12}^{\mathrm{com}}(y|A,B,x) \propto\exp\left(-\frac{1}{2}\|y^{\mathrm{r}}-(ABx)^{\mathrm{r}} \|^{2}\right)\times\prod_{j=1}^{M^{\mathrm{c}}}(s_{M^{\mathrm{c}}}((ABx)^{ \mathrm{c}}))_{j}^{y_{j}}, \tag{62}\] \[p_{11}^{\mathrm{com}}(c|B,x) \propto\exp\left(-\frac{\gamma}{2}\|c^{\mathrm{r}}-(Bx)^{ \mathrm{r}}\|^{2}\right)\times\left(\prod_{k=1}^{K^{\mathrm{c}}}(\sigma_{K^{ \mathrm{c}}}((Bx)^{\mathrm{c}}))_{k}^{c_{k}}(1-(\sigma_{K^{\mathrm{c }}}((Bx)^{\mathrm{c}}))_{k})^{1-c_{k}}\right)^{\gamma}. \tag{63}\]
_The data-generating distribution is denoted by_
\[q_{1}^{\mathrm{com}}(y,c|x)=p_{12}^{\mathrm{com}}(y|A_{0},B_{0},x)p_{11}^{ \mathrm{com}}(c|B_{0},x). \tag{64}\]
_The KL divergence can be expressed as_
\[K_{1}^{\rm com}(A,B)=\iiint dydcdxq^{\prime}(x)q_{1}^{\rm com}(y,c|x) \log\frac{q_{1}^{\rm com}(y,c|x)}{p_{1}^{\rm com}(y,c|A,B,x)}, \tag{65}\]
_where \(q^{\prime}(x)\) is the input-generating distribution (the same as in Definition 3.3). Assume \(\mathscr{X}\) is positive definite and \((A,B)\) is in a compact set. Then, the RLCT \(\lambda_{1}^{\rm com}\) and its multiplicity \(m_{1}^{\rm com}\) of \(K_{1}^{\rm com}\) can be expressed as follows:_
\[\lambda_{1}^{\rm com} =\frac{1}{2}(M^{\rm r}+M^{\rm c}+N-1)(K^{\rm r}+K^{\rm c}), \tag{66}\] \[m_{1}^{\rm com} =1. \tag{67}\]
This is because the concepts are decomposed into the real-number part and the categorical part in the same way as the correspondence between \(z=[y;c]\) and \(UVx=[(UVx)^{\rm r};(UVx)^{\rm c}]\). The composed case for Multitask is also easily determined as follows. Note that \((UVx)^{\rm r}\) and \((UVx)^{\rm c}\) are defined in the same way as for \(ABx\).
**Corollary 4.2** (RLCT of Multitask in Composed Case).: _Let \(p_{2}^{\rm com}(y,c|U,V,x)\) be the statistical model of Multitask in the composed case and \(p_{22}^{\rm com}(y|U,V,x)\) and \(p_{21}^{\rm com}(c|U,V,x)\) be the following probability distributions:_
\[p_{22}^{\rm com}(y|U,V,x) \propto\exp\left(-\frac{1}{2}\|y^{\rm r}-(UVx)^{\rm r}\|^{2} \right)\times\prod_{j=1}^{M^{\rm c}}(s_{M^{\rm c}}((UVx)^{\rm c}))_{j}^{y_{j}}, \tag{68}\] \[p_{21}^{\rm com}(c|U,V,x) \propto\exp\left(-\frac{1}{2}\|c^{\rm r}-(UVx)^{\rm r}\|^{2 }\right)\times\prod_{k=1}^{K^{\rm c}}(\sigma_{K^{\rm c}}((UVx)^{\rm c}))_{k}^{c_{k}}(1-(\sigma_{K^{\rm c}}((UVx)^{\rm c}))_{k})^{1-c_{k}}. \tag{69}\]
_The data-generating distribution is denoted by_
\[q_{2}^{\rm com}(y,c|x)=p_{22}^{\rm com}(y|U_{0},V_{0},x)p_{21}^{ \rm com}(c|U_{0},V_{0},x). \tag{70}\]
_Put the KL divergence as_
\[K_{2}^{\rm com}(U,V)=\iiint dydcdxq^{\prime}(x)q_{2}^{\rm com}(y,c|x)\log\frac{q_{2}^{\rm com}(y,c|x)}{p_{2}^{\rm com}(y,c|U,V,x)}, \tag{71}\]
_where \(q^{\prime}(x)\) is the input-generating distribution (the same as in Definition 3.4). Assume \(\mathscr{X}\) is positive definite and \((U,V)\) is in a compact set. Then, the RLCT \(\lambda_{2}^{\rm com}\) and its multiplicity \(m_{2}^{\rm com}\) of \(K_{2}^{\rm com}\) are as follows:_
\[\lambda_{2}^{\rm com} =\lambda_{2}(N,H,M^{\rm r}+M^{\rm c}-1,K^{\rm r}+K^{\rm c},H_{0}), \tag{72}\] \[m_{2}^{\rm com} =m_{2}(N,H,M^{\rm r}+M^{\rm c}-1,K^{\rm r}+K^{\rm c},H_{0}). \tag{73}\]
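Again as a sketch built on the earlier helpers (names ours), the composed-case RLCTs reduce to single calls:

```python
def rlct_cbm_composed(N, Mr, Mc, Kr, Kc):
    """Corollary 4.1: lambda = (Mr + Mc + N - 1)(Kr + Kc) / 2, multiplicity 1."""
    return 0.5 * (Mr + Mc + N - 1) * (Kr + Kc)

def rlct_multitask_composed(N, H, Mr, Mc, Kr, Kc, H0):
    """Corollary 4.2: Theorem 3.2's lambda_2 evaluated at (N, H, Mr+Mc-1, Kr+Kc, H0)."""
    return rlct_multitask(N, H, Mr + Mc - 1, Kr + Kc, H0)
```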
## 5 Discussion
In this paper, we described how the RLCTs of CBM and Multitask can be determined in the case of a three-layered linear neural network. Using these RLCTs and Eqs. (1) and (2), we also clarified the exact asymptotic forms of the Bayesian generalization error and the marginal likelihood in these models.
There are two limitations to this study. The first is that this article treats three-layered neural networks. If the input is an intermediate layer of a high-accuracy neural network, our model corresponds to freezing that network and learning only the last fully-connected linear layer. Thus, our result is valuable as a foundation not only for learning three-layered neural networks but also for transfer learning. In fact, from the perspective of feature extraction, an intermediate layer of a state-of-the-art neural network can be used, instead of the original input, as an input to another model [11, 51, 68].
The second limitation is that our formulation of CBM for Bayesian inference is based on Joint CBM. There are two other types of CBM: Independent CBM and Sequential CBM [28]. In Independent CBM, the functions \(x\mapsto c\) and \(c\mapsto y\) are learned independently. When the neural network is three-layered and linear, learning Independent CBM is equivalent to estimating two independent linear transformations \(c=Bx\) and \(y=Ac\). The graphical model of Independent CBM is \(x\to B\to c\to A\to y\). Clearly, \((A,B)\) is identifiable and the model is regular. In contrast, Sequential CBM performs a two-step estimation. First, \(B\) is estimated as \(c=Bx\). Then, \(A\) is learned as \(y=A\hat{c}\), where \(\hat{B}\) is the estimator of \(B\) and \(\hat{c}=\hat{B}x\). Since \(\hat{c}\) is subject to a predictive distribution of \(c\) conditioned on \(x\), its graphical model is the same as that of Joint CBM (Figure 1a). To capture the two-step estimation in Bayesian inference of \(A\), we set the prior of \(B\) to the posterior of \(B\) inferred from \(c=Bx\), i.e. the prior distribution depends on the data. If we ignored the two-step estimation, Bayesian inference of Sequential CBM would reduce to that of Joint CBM. Singular learning theory with a data-dependent prior distribution is challenging because the theory uses the prior as the measure of the integral that characterizes the RLCT and its multiplicity (see Proposition A.1). To resolve this issue, a new analysis method for the Bayesian generalization error and the free energy must be established.
Despite the above-mentioned limitations, this study provides a new perspective: CBM is a parameter-restricted model. According to the proof of Theorem 3.1, the concept bottleneck structure \(p_{11}(c|B,x)\) makes the neural network regular, whereas Standard is singular. In other words, the concept bottleneck structure imposes the constraint \(B=B_{0}\) on the analytic set \(K_{1}^{-1}(0)\) that we must consider for finding the RLCT. This structure is added to Standard for interpretability. Hence, within the singular learning theory of interpretable models, Theorem 3.1 presents a nontrivial result: the parameter constraint introduced for explanation affects the behavior of generalization; this is a case in which the restriction for interpretability changes the model from singular to regular.
Finally, we discuss the model selection process for CBM and Multitask. Both models use a similar dataset composed of inputs, concepts, and outputs. Additionally, they interpret the reason behind the predicted result using the observed concepts. In both approaches, supervised learning is carried out from the inputs to the concepts and outputs. However, their model structures differ, since CBM uses concepts for the middle layer units while Multitask appends them to the outputs. How does this difference affect generalization performance and the accuracy of knowledge discovery? We address this issue in the sense of the Bayesian generalization error and the free energy (negative log marginal likelihood). Figures 2a-2f show the behaviors of the RLCTs in CBM and Multitask when the number of concepts, i.e. \(K\), increases. In addition, Figures 3a-3f illustrate the behavior when the number of intermediate layer units \(H\) increases. In both figures, the RLCT of CBM is a straight line and that of Multitask is a piecewise-linear curve. As mentioned in Definition 3.1, CBM is characterized by \((M,N,K)\), and the number of intermediate layer units equals the number of concepts in the network architecture. As an inevitable consequence, the RLCT of CBM does not depend on \(H\) even if it uses the same triple \((y,c,x)\) as Multitask. For Multitask, according to [6], the RLCT of Standard is a similar piecewise-linear curve as a function of \(H\); this similarity is immediately derived from Theorem 3.2 and [6] (see also the proof of Theorem 3.2). Furthermore, we can determine the cross point between the RLCT curves of CBM and Multitask. The RLCT dominates the asymptotic forms of the Bayesian generalization error and the free energy: the greater the RLCT, the larger both become. Thus, if their theoretical behaviors are clarified, the issue of selecting the data analysis method can also be clarified. Hence, comparing CBM and Multitask is important for researchers and practitioners for whom accuracy is paramount.
Figure 2: Behaviors of the RLCTs of CBM and Multitask as controlled by \(K\). The vertical axis represents the value of the RLCT and the horizontal axis the number of concepts \(K\). The RLCT behaviors are visualized as graphs of functions of \(K\), where \(M=10\) and \(N=1\) are fixed and \(H\) and \(H_{0}\) are set as in the subcaptions. The RLCT of CBM is drawn as dashed lines and that of Multitask as solid lines. They are significantly different since one is linear and the other is non-linear (piecewise linear). This is because the RLCT of Multitask depends on \(H\) and \(H_{0}\) whereas that of CBM does not.

Figure 3: Behaviors of the RLCTs of CBM and Multitask as controlled by \(H\). The vertical axis represents the value of the RLCT and the horizontal axis the number of intermediate layer units (hidden units) \(H\). The behaviors of the RLCT are visualized as graphs of functions of \(H\), where \(M=10\) and \(N=1\) are fixed and \(K\) and \(H_{0}\) are set as in the subcaptions. The RLCT of CBM is drawn as dashed lines and that of Multitask as solid lines. The RLCT of CBM does not depend on \(H\); thus, it is a constant. The RLCT of Multitask depends on \(H\), as does that of Standard, as clarified in [6].
Proposition 5.1 gives the change points for this question: which model performs better for given \((M,H,N,K)\) and an assumed \(H_{0}\). If \(\lambda_{1}>\lambda_{2}\), Multitask performs better than CBM in the sense of the Bayesian generalization error and the free energy. If \(\lambda_{1}\leqq\lambda_{2}\), the opposite holds. The proof of Proposition 5.1 is given in Appendix B.
**Proposition 5.1** (Comparison the RLCTs of CBM and Multitask).: _Along with the conditional branch in Theorem 3.2, the magnitude of the RLCT of CBM \(\lambda_{1}\) and that of Multitask \(\lambda_{2}\) changes as the following._
1. _In the case_ \(M+K+H_{0}\leqq N+H\) _and_ \(N+H_{0}\leqq M+K+H\) _and_ \(H+H_{0}\leqq N+M+K\)_,_
    1. _and if_ \(N+M+K+H+H_{0}\) _is even, then_ \[\begin{cases}\lambda_{1}>\lambda_{2}&(K>H+H_{0}-(\sqrt{M}-\sqrt{N})^{2}),\\ \lambda_{1}\leqq\lambda_{2}&(\text{otherwise}).\end{cases}\]
    2. _and if_ \(N+M+K+H+H_{0}\) _is odd, then_ \[\begin{cases}\lambda_{1}>\lambda_{2}&(K>H+H_{0}-M-N+\sqrt{4MN+1}),\\ \lambda_{1}\leqq\lambda_{2}&(\text{otherwise}).\end{cases}\]
2. _In the case_ \(N+H<M+K+H_{0}\)_, then_ \[\begin{cases}\lambda_{1}>\lambda_{2}&((M+N-H_{0})K>(N-H_{0})H+MH_{0}),\\ \lambda_{1}\leqq\lambda_{2}&(\text{otherwise}).\end{cases}\]
3. _In the case_ \(M+K+H<N+H_{0}\)_, then_ \[\begin{cases}\lambda_{1}>\lambda_{2}&((M+N-H_{0})K>(N-H)H_{0}+MH),\\ \lambda_{1}\leqq\lambda_{2}&(\text{otherwise}).\end{cases}\]
4. _Otherwise (i.e._ \(N+M+K<H+H_{0}\)_), then_ \[\begin{cases}\lambda_{1}>\lambda_{2}&(K>N),\\ \lambda_{1}\leqq\lambda_{2}&(\text{otherwise}).\end{cases}\]
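Numerically, one can bypass the closed-form branch conditions and compare the two RLCTs directly, using the helper functions from the sketch after Theorem 3.2 (an illustrative sketch; the sizes are hypothetical):

```python
def cbm_preferred(N, H, M, K, H0):
    """True iff lambda_1 <= lambda_2, i.e. CBM has the smaller (or equal) leading
    term of the Bayesian generalization error and the free energy."""
    return rlct_cbm(N, K, M) <= rlct_multitask(N, H, M, K, H0)

# Sweep K in the spirit of Figure 2 (N = 1, M = 10 fixed; H and H0 illustrative).
N, M, H, H0 = 1, 10, 8, 3
for K in (1, 2, 5, 10, 20):
    print(K, rlct_cbm(N, K, M), rlct_multitask(N, H, M, K, H0),
          "CBM" if cbm_preferred(N, H, M, K, H0) else "Multitask")
```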
## 6 Conclusion
We obtained the exact asymptotic behaviors of the Bayesian generalization error and the free energy in neural networks with a concept bottleneck model and a multitask formulation when the networks are three-layered linear. The behaviors are derived by finding the real log canonical thresholds of these models. The results show that the concept bottleneck structure makes the neural network regular (identifiable) in the three-layered linear case. On the other hand, the multitask formulation for a three-layered linear network only appends the concepts to the output; hence, the behaviors of the Bayesian generalization error and free energy are similar to those of the standard model. Future work includes the theoretical analysis of multilayer networks and non-linear activations. Another direction is formulating Sequential CBM based on singular learning theory. Clarifying the numerical behaviors of the Main Theorems is yet another future research direction.
## Appendix A Singular Learning Theory
We briefly explain the relationship between Bayesian inference and algebraic geometry: in other words, the reason behind the need for resolution of singularity. This theory is referred to as the singular learning theory [55].
The following analytic form [7] of the resolution-of-singularities theorem [24] is useful for treating \(K(w)\) in Eq. (6) and its zero points \(K^{-1}(0)\) in Eq. (7).
**Theorem A.1** (Hironaka, Atiyah).: _Let \(K\) be a non-negative analytic function on \(\mathcal{W}\subset\mathbb{R}^{d}\). Assume that \(K^{-1}(0)\) is not an empty set. Then, there are an open set \(\mathcal{W}^{\prime}\), a \(d\)-dimensional smooth manifold \(\mathcal{M}\), and an analytic map \(g:\mathcal{M}\to\mathcal{W}^{\prime}\) such that \(g:\mathcal{M}\setminus g^{-1}(K^{-1}(0))\to\mathcal{W}^{\prime}\setminus K^{-1 }(0)\) is isomorphic and_
\[K(g(u)) =u_{1}^{2k_{1}}\ldots u_{d}^{2k_{d}}, \tag{74}\] \[|\det g^{\prime}(u)| =b(u)|u_{1}^{h_{1}}\ldots u_{d}^{h_{d}}| \tag{75}\]
_hold for each local chart \(U\ni u\) of \(\mathcal{M}\), where \(k_{j}\) and \(h_{j}\) are non-negative integers for \(j=1,\ldots,d\), \(\det g^{\prime}(u)\) is the Jacobian of \(g\), and \(b:\mathcal{M}\to\mathbb{R}\) is strictly positive analytic: \(b(u)>0\)._
Atiyah derived this form to analyze the relationship between the division of distributions (a.k.a. hyperfunctions) and local zeta functions [7]. By using Theorem A.1, the following is proved [7, 8, 48].
**Theorem A.2** (Atiyah, Bernstein, Sato and Shintani).: _Let \(K:\mathbb{R}^{d}\to\mathbb{R}\) be an analytic function of a variable \(w\in\mathcal{W}\). \(a:\mathcal{W}\to\mathbb{R}\) is denoted by a \(C^{\infty}\)-function with compact support \(\mathcal{W}\). The following univariate complex function_
\[\zeta(z)=\int_{\mathcal{W}}|K(w)|^{z}a(w)dw \tag{76}\]
_is a holomorphic function in \(\mathrm{Re}(z)>0\). Moreover, \(\zeta(z)\) can be analytically continued to a unique meromorphic function on the entire complex plane \(\mathbb{C}\). All of its poles are negative rational numbers._
Suppose the prior density \(\varphi(w)\) has the compact support \(\mathcal{W}\) and the open set \(\mathcal{W}^{\prime}\) satisfies \(\mathcal{W}\subset\mathcal{W}^{\prime}\). By using Theorem A.2, we can define a zeta function of learning theory.
**Definition A.1** (Zeta Function of Learning Theory).: _Let \(K(w)\geqq 0\) be the KL divergence mentioned in Eq. (6) and \(\varphi(w)\geqq 0\) be a prior density function which satisfies the above assumption. A zeta function of learning theory is defined by the following univariate complex function_
\[\zeta(z)=\int_{\mathcal{W}}K(w)^{z}\varphi(w)dw.\]
**Definition A.2** (Real Log Canonical Threshold).: _Let \(\zeta(z)\) be a zeta function of learning theory represented in Definition A.1. Consider an analytic continuation of \(\zeta(z)\) from Theorem A.2. A real log canonical threshold (RLCT) \(\lambda\) is defined by the negative maximum pole of \(\zeta(z)\) and its multiplicity \(m\) is defined by the order of the maximum pole:_
\[\zeta(z) =\frac{C(z)}{(z+\lambda)^{m}}\frac{C_{1}(z)}{(z+\lambda_{1})^{m_{ 1}}}\ldots\frac{C_{D}(z)}{(z+\lambda_{D})^{m_{D}}}\ldots, \tag{77}\] \[\qquad\lambda<\lambda_{k}\ (k=1,\ldots,D,\ldots), \tag{78}\]
_where \(C(z)\) and \(C_{k}(z)\ (k=1,\ldots,D,\ldots)\) are non-zero-valued complex functions._
Watanabe constructed the singular learning theory; he proved that the RLCT \(\lambda\) and the multiplicity \(m\) determine the asymptotic Bayesian generalization error and free energy [53, 54, 55]:
**Theorem A.3** (Watanabe).: \(\zeta(z)\) _is denoted by the zeta function of learning theory as Definition A.1. Let \(\lambda\) and \(m\) be the RLCT and the multiplicity defined by \(\zeta(z)\). The Bayesian generalization error \(G_{n}\) and the free energy \(F_{n}=-\log Z_{n}\) have the asymptotic forms (1) and (2) shown in section 1._
This theorem is rooted in Theorem A.1. That is why we need resolution of singularity to clarify the behavior of \(G_{n}\) and \(F_{n}\) via determination of the RLCT and its multiplicity.
Here, we describe how to determine the RLCT \(\lambda>0\) of the model corresponding to \(K(w)\). We apply Theorem A.1 to the zeta function of learning theory. Since we assumed the parameter space is compact, the manifold in the singularity resolution is also compact. Thus, the manifold can be covered by a union of \([0,1)^{d}\) for each local coordinate \(U\). Considering the partition of unity for \([0,1)^{d}\), we have
\[\zeta(z) =\int_{U}K(g(u))^{z}\varphi(g(u))|\det g^{\prime}(u)|du \tag{79}\] \[=\sum_{\eta}\int_{U}K(g(u))^{z}\varphi(g(u))|\det g^{\prime}(u)| \phi_{\eta}(u)du\] (80) \[=\sum_{\eta}\int_{[0,1]^{d}}u_{1}^{2k_{1}z+h_{1}}\ldots u_{d}^{2 k_{d}z+h_{d}}\varphi(g(u))b(u)\phi_{\eta}(u)du, \tag{81}\]
where \(\phi_{\eta}\) is the partition of unity: \(\operatorname{supp}(\phi_{\eta})=[0,1]^{d}\) and \(\phi_{\eta}(u)>0\) in \((0,1)^{d}\). The functions \(\varphi(g(u))\), \(b(u)\), and \(\phi_{\eta}\) are strictly positive in \((0,1)^{d}\); thus, we should consider the maximum pole of
\[\int_{[0,1]^{d}}u_{1}^{2k_{1}z+h_{1}}\ldots u_{d}^{2k_{d}z+h_{d}}du=\frac{1}{2 k_{1}z+h_{1}+1}\ldots\frac{1}{2k_{d}z+h_{d}+1}. \tag{82}\]
Allowing duplication, the set of the poles can be represented as follows:
\[\left\{\frac{h_{j}+1}{2k_{j}}\mid j=1,\ldots,d\right\}. \tag{83}\]
Thus, we can find the maximum pole \((-\lambda_{U})\) in the local chart \(U\) as follows
\[\lambda_{U}=\min_{j=1}^{d}\left\{\frac{h_{j}+1}{2k_{j}}\right\}. \tag{84}\]
By considering the duplication of indices, we can also find the multiplicity in \(U\), denoted by \(m_{U}\). Therefore, we can determine the RLCT as \(\lambda=\min_{U}\lambda_{U}\) and the multiplicity \(m\) as the order of the pole \((-\lambda)\), i.e. \(m=m_{\underline{U}}\), where \(\underline{U}=\operatorname{argmin}_{U}\lambda_{U}\).
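The chart-wise computation in Eq. (84) is mechanical; a small sketch (our own notation, not from [55]):

```python
from fractions import Fraction

def local_rlct(ks, hs):
    """Eq. (84): on a chart where K(g(u)) = prod_j u_j^(2 k_j) and
    |det g'(u)| ~ prod_j |u_j|^(h_j), the local RLCT is
    lambda_U = min_j (h_j + 1) / (2 k_j); its multiplicity is the number
    of indices attaining the minimum (coordinates with k_j = 0 yield no pole)."""
    poles = [Fraction(h + 1, 2 * k) for k, h in zip(ks, hs) if k > 0]
    lam = min(poles)
    return lam, poles.count(lam)

# Example: K(u) = u_1^2 u_2^2, i.e. k = (1, 1) and h = (0, 0).
print(local_rlct([1, 1], [0, 0]))  # (Fraction(1, 2), 2)
```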
In addition, the RLCT has the following geometrical characterization as the limit of a volume dimension [55, 62]:
**Proposition A.1**.: _Let \(V:(0,\infty)\to(0,\infty)\), \(t\mapsto V(t)\) be a volume of \(K^{-1}((0,t))\) measured by \(\varphi(w)dw\):_
\[V(t)=\int_{K(w)<t}\varphi(w)dw. \tag{85}\]
_Then, the RLCT \(\lambda\) satisfies the following:_
\[\lambda=\lim_{t\to+0}\frac{\log V(t)}{\log t}. \tag{86}\]
The RLCT and its multiplicity are birational invariants of the analytic set \(K^{-1}(0)\); in particular, they do not depend on the choice of resolution of singularity. The above property characterizes this fact.
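Proposition A.1 also suggests a crude numerical check: estimate \(V(t)\) by Monte Carlo and read off the slope of \(\log V(t)\) against \(\log t\). A minimal sketch for the toy singular function \(K(w)=(w_{1}w_{2})^{2}\) on \([-1,1]^{2}\) with a uniform prior, whose RLCT is \(\lambda=1/2\) with multiplicity \(m=2\) (the multiplicity contributes a \(\log\) factor, so the slope converges slowly):

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.uniform(-1.0, 1.0, size=(10**6, 2))
Kw = (w[:, 0] * w[:, 1]) ** 2          # toy singular "KL divergence"

for t in (1e-2, 1e-4, 1e-6):
    V = np.mean(Kw < t)                # Monte Carlo estimate of V(t)
    print(t, np.log(V) / np.log(t))    # tends to lambda = 1/2 as t -> +0
```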
To determine \(\lambda\) and \(m\), we must consider the resolution of singularity [24] for the concrete varieties corresponding to the models. To clarify the learning coefficient of a singular statistical model, we must calculate the theoretical values of the RLCTs for a family of functions; however, no standard method exists for finding the RLCTs of a given collection of functions. Thus, we need a different procedure for the RLCT of each statistical model. In fact, as mentioned in section 1, the RLCTs of several models have been analyzed case by case in both the statistics and machine learning fields. Our work on CBM and Multitask contributes to this body of knowledge in learning theory: clarifying the RLCT of a singular statistical model. The practical value of such studies is introduced in section 1.
## Appendix B Proofs of Claims
Let \(\sim\) be a binary relation whose two sides have the same RLCT and multiplicity, and let \(\mathrm{M}(M,N)\) be the set of \(M\times N\) real matrices. Then, we define the following utility.
**Definition B.1** (Rows Extractor).: _Let \((\cdot)_{<d}:\mathrm{M}(I,J)\to\mathrm{M}(d-1,J)\), \(2\leqq d\leqq I\), \(1\leqq J\) be an operator for a matrix to extract the following submatrix:_
\[(W)_{<d}=\begin{pmatrix}w_{11}&\ldots&w_{1J}\\ \vdots&\ddots&\vdots\\ w_{(d-1)1}&\ldots&w_{(d-1)J}\end{pmatrix},\ W=(w_{ij})_{i=1,j=1}^{I,J},\ W\in \mathrm{M}(I,J). \tag{87}\]
We use this operator for a vector \(w\in\mathbb{R}^{M}\) as \(\mathbb{R}^{M}\cong\mathrm{M}(M,1)\); we refer to this as a column vector. In the same way as above, we define \((\cdot)_{>d}\), \((\cdot)_{\leqq d}\), and \((\cdot)_{\geqq d}\), where the inequalities correspond to the row index. Also, \((\cdot)_{\neq d}\) and \((\cdot)_{=d}\) are defined as \([(\cdot)_{<d};(\cdot)_{>d}]\) and \(((\cdot)_{\leqq d})_{\geqq d}\), respectively. We can immediately show \((WV)_{<d}=(W)_{<d}V\) for \(W\in\mathrm{M}(I,R)\) and \(V\in\mathrm{M}(R,J)\), where \(R\geqq 1\). The other relations in the subscript satisfy the same rule: \((WV)_{>d}=(W)_{>d}V\), \((WV)_{\leqq d}=(W)_{\leqq d}V\), \((WV)_{\geqq d}=(W)_{\geqq d}V\), \((WV)_{\neq d}=(W)_{\neq d}V\), and \((WV)_{=d}=(W)_{=d}V\). Moreover, to prove our theorems, we use the following lemmas [55, 6, 37].
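In matrix-library terms the extractors are plain row slices; the following numpy sketch illustrates the operators and the commutation rule \((WV)_{<d}=(W)_{<d}V\):

```python
import numpy as np

W = np.arange(1.0, 13.0).reshape(4, 3)   # W in M(4, 3)
V = np.ones((3, 2))                       # V in M(3, 2)
d = 3

W_lt  = W[:d - 1]                         # (W)_{<d}: rows 1, ..., d-1
W_geq = W[d - 1:]                         # (W)_{>=d}: rows d, ..., I
W_neq = np.delete(W, d - 1, axis=0)       # (W)_{!=d}: all rows except row d

# The rule holds because the matrix product acts row-wise on W:
assert np.allclose((W @ V)[:d - 1], W_lt @ V)
```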
**Lemma B.1**.: _Let \(f_{1}:\mathcal{W}\to\mathbb{R}\) and \(f_{2}:\mathcal{W}\to\mathbb{R}\) be non-negative analytic functions. If there are positive constants \(\alpha_{1},\alpha_{2}>0\) such that_
\[\alpha_{1}f_{1}(w)\leqq f_{2}(w)\leqq\alpha_{2}f_{1}(w) \tag{88}\]
_on the neighborhood of \(f_{2}^{-1}(0)\), then \(f_{1}\sim f_{2}\)._
**Lemma B.2**.: _Let \(K:\mathrm{M}(I,J)\to\mathbb{R}\), \(W\mapsto K(W)\) be_
\[K(W)=\int dxq^{\prime}(x)\|(W-W_{0})x\|^{2}, \tag{89}\]
_where \(W_{0}\in\mathrm{M}(I,J)\). Put \(\Phi:\mathrm{M}(I,J)\to\mathbb{R}\), \(W\mapsto\Phi(W)=\|W-W_{0}\|^{2}\). A symmetric \(J\times J\) matrix whose \((i,j)\)-entry is \(\int x_{i}x_{j}q^{\prime}(x)dx\) for \(1\leqq i\leqq J\) and \(1\leqq j\leqq J\) is denoted by \(\mathscr{X}\). If \(\mathscr{X}\) is positive definite, then there exist positive constants \(\alpha_{1},\alpha_{2}>0\) such that \(\alpha_{1}\Phi(W)\leqq K(W)\leqq\alpha_{2}\Phi(W)\) holds on a neighborhood of \(K^{-1}(0)\). Hence, \(K\sim\Phi\)._
**Lemma B.3**.: _Let \(K:\Delta_{d}\to\mathbb{R}\), \(w\mapsto K(w)\) be_
\[K(w)=\sum_{y\in\mathrm{Onehot}(d)}\prod_{j=1}^{d}(w_{j}^{0})^{y_{j}}\log\frac {\prod_{j=1}^{d}(w_{j}^{0})^{y_{j}}}{\prod_{j=1}^{d}(w_{j})^{y_{j}}}, \tag{90}\]
_where \(w_{0}\) is in the interior of \(\Delta_{d}\), \(\Delta_{d}\) is the \(d\)-dimensional simplex (i.e. \(w=(w_{j})_{j=1}^{d}\in\Delta_{d}\Rightarrow\sum_{j=1}^{d}w_{j}=1,\ w_{j}\geqq 0\)), and \(\mathrm{Onehot}(d)\) is the set of \(d\)-dimensional onehot vectors_
\[\mathrm{Onehot}(d)=\{y\in\{0,1\}^{d}\mid y_{j}=1,y_{l}=0\ (l\neq j),\mathrm{for}\ j=1, \ldots,d\}. \tag{91}\]
_Put \(\Phi:\Delta_{d}\to\mathbb{R}\), \(w\mapsto\Phi(w)=\sum_{j=1}^{d-1}(w_{j}-w_{j}^{0})^{2}\). There are positive constants \(\alpha_{1},\alpha_{2}>0\) such that \(\alpha_{1}\Phi(w)\leqq K(w)\leqq\alpha_{2}\Phi(w)\) holds on a neighborhood of \(K^{-1}(0)\). Hence, \(K\sim\Phi\)._
Lemma B.1 was proved in [55]. Lemma B.2 was proved in [6]. Lemma B.3 was proved in [37]. Also, by using Lemma B.3 in the case \(d=2\), the following lemma can be derived.
**Lemma B.4**.: _Let \(K:[0,1]^{R}\to\mathbb{R}\), \(w\mapsto K(w)\) be_
\[K(w)=\sum_{c=(c_{k})_{k=1}^{R}\in\{0,1\}^{R}}\prod_{l=1}^{R}(w_{l}^{0})^{c_{l}}( 1-w_{l}^{0})^{1-c_{l}}\log\frac{\prod_{k=1}^{R}(w_{k}^{0})^{c_{k}}(1-w_{k}^{0})^ {1-c_{k}}}{\prod_{k=1}^{R}(w_{k})^{c_{k}}(1-w_{k})^{1-c_{k}}}, \tag{92}\]
_where \(w_{0}\) is in the interior of \([0,1]^{R}\). Put \(\Phi:[0,1]^{R}\to\mathbb{R}\), \(w\mapsto\Phi(w)=\|w-w_{0}\|^{2}\). There are positive constants \(\alpha_{1},\alpha_{2}>0\) such that \(\alpha_{1}\Phi(w)\leqq K(w)\leqq\alpha_{2}\Phi(w)\) holds on a neighborhood of \(K^{-1}(0)\). Hence, \(K\sim\Phi\)._
Proof of Lemma B.4.: In the case of \(R=1\), this lemma is equivalent to Lemma B.3.
In the case of \(R\geqq 2\), developing \(K(w)\), we have
\[K(w) =\sum_{c=(c_{k})_{k=1}^{R}\in\{0,1\}^{R}}\left\{\prod_{l=1}^{R}(w _{l}^{0})^{c_{l}}(1-w_{l}^{0})^{1-c_{l}}\times\right. \tag{93}\] \[\left.\sum_{k=1}^{R}\left(\log(w_{k}^{0})^{c_{k}}(1-w_{k}^{0})^{1 -c_{k}}-\log(w_{k})^{c_{k}}(1-w_{k})^{1-c_{k}}\right)\right\}\] (94) \[=\sum_{c\in\{0,1\}^{R}}\prod_{l=1}^{R}(w_{l}^{0})^{c_{l}}(1-w_{l }^{0})^{1-c_{l}}\sum_{k=1}^{R}\log\frac{(w_{k}^{0})^{c_{k}}(1-w_{k}^{0})^{1-c_ {k}}}{(w_{k})^{c_{k}}(1-w_{k})^{1-c_{k}}}\] (95) \[=\sum_{k=1}^{R}\sum_{c\in\{0,1\}^{R}}\prod_{l=1}^{R}(w_{l}^{0})^{ c_{l}}(1-w_{l}^{0})^{1-c_{l}}\log\frac{(w_{k}^{0})^{c_{k}}(1-w_{k}^{0})^{1-c_ {k}}}{(w_{k})^{c_{k}}(1-w_{k})^{1-c_{k}}}. \tag{96}\]
Fix an arbitrary \(k\in\{1,\ldots,R\}\). If \(l\neq k\), the expectation with respect to the \(l\)-th Bernoulli distribution \((w_{l}^{0})^{c_{l}}(1-w_{l}^{0})^{1-c_{l}}\) does not affect the \(k\)-th log mass ratio:
\[\sum_{c_{l}=0}^{1}(w_{l}^{0})^{c_{l}}(1-w_{l}^{0})^{1-c_{l}}\log\frac{(w_{k}^{ 0})^{c_{k}}(1-w_{k}^{0})^{1-c_{k}}}{(w_{k})^{c_{k}}(1-w_{k})^{1-c_{k}}}=\log \frac{(w_{k}^{0})^{c_{k}}(1-w_{k}^{0})^{1-c_{k}}}{(w_{k})^{c_{k}}(1-w_{k})^{1- c_{k}}}. \tag{97}\]
This leads to the following:
\[K(w)=\sum_{k=1}^{R}\sum_{c_{k}=0}^{1}(w_{k}^{0})^{c_{k}}(1-w_{k}^{0})^{1-c_{k} }\log\frac{(w_{k}^{0})^{c_{k}}(1-w_{k}^{0})^{1-c_{k}}}{(w_{k})^{c_{k}}(1-w_{k}) ^{1-c_{k}}}. \tag{98}\]
Now, let \(v(k)=(w_{k},1-w_{k})\in\Delta_{2}\), \(v^{0}(k)=(w_{k}^{0},1-w_{k}^{0})\in\Delta_{2}\), and \(C(k)=(c_{k},1-c_{k})\in\text{Onehot}(2)\). Then, we have
\[(w_{k})^{c_{k}}(1-w_{k})^{1-c_{k}} =\prod_{j=1}^{2}(v(k))_{=j}^{(C(k))=j}, \tag{99}\] \[(w_{k}^{0})^{c_{k}}(1-w_{k}^{0})^{1-c_{k}} =\prod_{j=1}^{2}(v^{0}(k))_{=j}^{(C(k))=j}. \tag{100}\]
For simplicity, we write \((v(k))_{=j}\) and \((C(k))_{=j}\) as \(v(k)_{j}\) and \(C(k)_{j}\), respectively. Since \(C(k)\in\text{Onehot}(2)\) and \(v(k),v^{0}(k)\in\Delta_{2}\), we obtain
\[K(w) =\sum_{k=1}^{R}\sum_{c_{k}=0}^{1}\prod_{j=1}^{2}v^{0}(k)_{j}^{C(k)_ {j}}\log\frac{\prod_{j=1}^{2}v^{0}(k)_{j}^{C(k)_{j}}}{\prod_{j=1}^{2}v(k)_{j}^{C (k)_{j}}} \tag{101}\] \[=\sum_{k=1}^{R}\sum_{C(k)\in\text{Onehot}(2)}\prod_{j=1}^{2}v^{0} (k)_{j}^{C(k)_{j}}\log\frac{\prod_{j=1}^{2}v^{0}(k)_{j}^{C(k)_{j}}}{\prod_{j=1}^{2 }v(k)_{j}^{C(k)_{j}}}. \tag{102}\]
Put
\[\psi(v(k))=\sum_{C(k)\in\mathrm{Onehot}(2)}\prod_{j=1}^{2}v^{0}(k)_{j}^{C(k)_{j}} \log\frac{\prod_{j=1}^{2}v^{0}(k)_{j}^{C(k)_{j}}}{\prod_{j=1}^{2}v(k)_{j}^{C(k) _{j}}}. \tag{103}\]
Applying Lemma B.3, there exist \(R\)-dimensional vectors \(\alpha^{1}=(\alpha^{1}_{k})_{k=1}^{R}\) and \(\alpha^{2}=(\alpha^{2}_{k})_{k=1}^{R}\) whose entries are positive constants such that
\[\alpha^{1}_{k}(w_{k}-w_{k}^{0})^{2}\leqq\psi(v(k))\leqq\alpha^{2}_{k}(w_{k}-w_ {k}^{0})^{2}, \tag{104}\]
on a neighborhood of \(\psi^{-1}(0)\) for \(k=1,\ldots,R\). Summing these inequalities over \(k\), we get
\[\sum_{k=1}^{R}\alpha^{1}_{k}(w_{k}-w_{k}^{0})^{2}\leqq\sum_{k=1}^{R}\psi(v(k)) \leqq\sum_{k=1}^{R}\alpha^{2}_{k}(w_{k}-w_{k}^{0})^{2}. \tag{105}\]
Because of \(\Phi(w)=\sum_{k=1}^{R}(w_{k}-w_{k}^{0})^{2}\), we have
\[\min_{k}\{\alpha^{1}_{k}\}\Phi(w)\leqq K(w)\leqq\max_{k}\{\alpha^{2}_{k}\}\Phi (w). \tag{106}\]
Therefore, \(K\sim\Phi\).
The above lemmas indicate that equivalent discrepancies have the same RLCTs; examples of such discrepancies are the KL divergences between Gaussian, categorical, and Bernoulli distributions.
Here, we prove Theorem 3.1.
Proof of Theorem 3.1.: By using
\[\|y-ABx\|^{2} =\langle y-ABx,y-ABx\rangle \tag{107}\] \[=\|y\|^{2}-2\langle y,ABx\rangle+\|ABx\|^{2} \tag{108}\]
and that of \(y-A_{0}B_{0}x\), \(c-Bx\), and \(c-B_{0}x\), we expand \(\log q_{1}/p_{1}\) as follows:
\[\log\frac{q_{1}(y,c|x)}{p_{1}(y,c|A,B,x)} =\log\frac{\exp\left(-\frac{1}{2}\|y-A_{0}B_{0}x\|^{2}\right)\exp \left(-\frac{\gamma}{2}\|c-B_{0}x\|^{2}\right)}{\exp\left(-\frac{1}{2}\|y-ABx \|^{2}\right)\exp\left(-\frac{\gamma}{2}\|c-Bx\|^{2}\right)} \tag{109}\] \[=-\frac{1}{2}(\|y\|^{2}-2\langle y,A_{0}B_{0}x\rangle+\|A_{0}B_{ 0}x\|^{2})-\frac{\gamma}{2}(\|c\|^{2}-2\langle c,B_{0}x\rangle+\|B_{0}x\|^{2})\] (110) \[\quad+\frac{1}{2}(\|y\|^{2}-2\langle y,ABx\rangle+\|ABx\|^{2})+ \frac{\gamma}{2}(\|c\|^{2}-2\langle c,Bx\rangle+\|Bx\|^{2})\] (111) \[=\frac{1}{2}(\|ABx\|^{2}-2\langle y,(AB-A_{0}B_{0})x\rangle-\|A_ {0}B_{0}x\|^{2})\] (112) \[\quad+\frac{\gamma}{2}(\|Bx\|^{2}-2\langle c,(B-B_{0})x\rangle-\| B_{0}x\|^{2}). \tag{113}\]
Averaging by \(q_{1}(y,c|x)\), we have
\[\iint dcdyq_{1}(y,c|x)\log\frac{q_{1}(y,c|x)}{p_{1}(y,c|A,B,x)} =\frac{1}{2}(\|ABx\|^{2}-2\langle A_{0}B_{0}x,(AB-A_{0}B_{0})x \rangle-\|A_{0}B_{0}x\|^{2}) \tag{114}\] \[\quad+\frac{\gamma}{2}(\|Bx\|^{2}-2\langle B_{0}x,(B-B_{0})x \rangle-\|B_{0}x\|^{2})\] (115) \[=\frac{1}{2}(\|ABx\|^{2}-2\langle A_{0}B_{0}x,ABx\rangle+\|A_{0} B_{0}x\|^{2})\] (116) \[\quad+\frac{\gamma}{2}(\|Bx\|^{2}-2\langle B_{0}x,Bx\rangle+\|B_{ 0}x\|^{2})\] (117) \[=\frac{1}{2}(\|(AB-A_{0}B_{0})x\|^{2}+\gamma\|(B-B_{0})x\|^{2}). \tag{118}\]
Let \(\Psi_{1}(A,B)=(1/2)\int dxq^{\prime}(x)\|(AB-A_{0}B_{0})x\|^{2}\) and \(\Psi_{2}(B)=(1/2)\int dxq^{\prime}(x)\|(B-B_{0})x\|^{2}\). Because of Lemma B.2, there are positive constants \(c_{1},c_{2},c_{3},c_{4}>0\) such that
\[c_{1}\|AB-A_{0}B_{0}\|^{2}\leqq\Psi_{1}(A,B)\leqq c_{2}\|AB-A_{0} B_{0}\|^{2}, \tag{119}\] \[c_{3}\|B-B_{0}\|^{2}\leqq\gamma\Psi_{2}(B)\leqq c_{4}\|B-B_{0}\|^ {2}. \tag{120}\]
Thus, by adding Eq. (119) to Eq. (120), we have
\[c_{1}\|AB-A_{0}B_{0}\|^{2}+c_{3}\|B-B_{0}\|^{2}\leqq\Psi_{1}(A,B)+ \gamma\Psi_{2}(B)\leqq c_{2}\|AB-A_{0}B_{0}\|^{2}+c_{4}\|B-B_{0}\|^{2}. \tag{121}\]
Let \(C_{1}\) and \(C_{2}\) be \(\min\{c_{1},c_{3}\}\) and \(\max\{c_{2},c_{4}\}\), respectively. We immediately obtain
\[C_{1}(\|AB-A_{0}B_{0}\|^{2}+\|B-B_{0}\|^{2})\leqq\Psi_{1}(A,B)+ \gamma\Psi_{2}(B)\leqq C_{2}(\|AB-A_{0}B_{0}\|^{2}+\|B-B_{0}\|^{2}). \tag{122}\]
Applying Lemma B.1 to the above inequality, we have
\[\Psi_{1}(A,B)+\gamma\Psi_{2}(B)\sim\|AB-A_{0}B_{0}\|^{2}+\|B-B_{0}\|^{2}. \tag{123}\]
Therefore,
\[K_{1}(A,B) =\iiint dxdcdyq^{\prime}(x)q_{1}(y,c|x)\log\frac{q_{1}(y,c|x)}{ p_{1}(y,c|A,B,x)} \tag{124}\] \[=\Psi_{1}(A,B)+\gamma\Psi_{2}(B)\] (125) \[\sim\|AB-A_{0}B_{0}\|^{2}+\|B-B_{0}\|^{2}. \tag{126}\]
To determine the RLCT \(\lambda_{1}\) and its multiplicity \(m_{1}\), we should consider the following analytic set
\[\mathcal{V}_{1}=\{(A,B)\mid\|AB-A_{0}B_{0}\|^{2}+\|B-B_{0}\|^{2}=0,A \in\mathrm{M}(M,K)\text{ and }B\in\mathrm{M}(K,N)\}. \tag{127}\]
We take \(\|AB-A_{0}B_{0}\|^{2}+\|B-B_{0}\|^{2}=0\). Because \(\|AB-A_{0}B_{0}\|^{2}\geqq 0\) and \(\|B-B_{0}\|^{2}\geqq 0\) hold, we have
\[\|AB-A_{0}B_{0}\|^{2}+\|B-B_{0}\|^{2}=0\Leftrightarrow\|AB-A_{0}B_{0}\|^{2}=0 \text{ and }\|B-B_{0}\|^{2}=0. \tag{128}\]
Hence, \(AB=A_{0}B_{0}\) and \(B=B_{0}\), i.e. \((A,B)=(A_{0},B_{0})\). Therefore, \(\mathcal{V}_{1}=\{(A_{0},B_{0})\}\). This means that there is no singularity, i.e. the model is regular. Hence, the RLCT is equal to a half of the parameter dimension [55]. Since the parameter dimension equals \((M+N)K\), we have
\[\lambda_{1}=\frac{1}{2}(M+N)K,\ m_{1}=1. \tag{129}\]
Next, we prove Theorem 3.2. This is immediately derived using Aoyagi's theorem [6] as follows.
**Theorem B.1** (Aoyagi and Watanabe).: _Suppose \(\mathscr{X}\) is positive definite. Let \(\lambda_{3}\) be the RLCT of Standard of three-layered linear neural network and \(m_{3}\) be its multiplicity. By using the input dimension \(N\), the number of intermediate units \(H\), the output dimension \(M\), and the true rank \(H_{0}\), they can be represented as follows:_
1. _In the case_ \(M+H_{0}\leqq N+H\) _and_ \(N+H_{0}\leqq M+H\) _and_ \(H+H_{0}\leqq N+M\)_,_
    1. _and if_ \(N+M+H+H_{0}\) _is even, then_ \[\lambda_{3}=\frac{1}{8}\{2(H+H_{0})(N+M)-(N-M)^{2}-(H+H_{0})^{2}\},\ m_{3}=1.\]
    2. _and if_ \(N+M+H+H_{0}\) _is odd, then_ \[\lambda_{3}=\frac{1}{8}\{2(H+H_{0})(N+M)-(N-M)^{2}-(H+H_{0})^{2}+1\},\ m_{3}=2.\]
2. _In the case_ \(N+H<M+H_{0}\)_, then_ \[\lambda_{3}=\frac{1}{2}\{HN+H_{0}(M-H)\},\ m_{3}=1.\]
3. _In the case_ \(M+H<N+H_{0}\)_, then_ \[\lambda_{3}=\frac{1}{2}\{HM+H_{0}(N-H)\},\ m_{3}=1.\]
4. _Otherwise (i.e._ \(N+M<H+H_{0}\)_), then_ \[\lambda_{3}=\frac{1}{2}NM,\ m_{3}=1.\]
Proof of Theorem 3.2.: In the case of Multitask, the output dimension is expanded from \(M\) to \(M+K\), since Multitask makes the output and the concept co-occur to derive the explanation. Mathematically, this simply increases the output dimension. Therefore, by substituting \(M+K\) for \(M\) in Theorem B.1, we obtain Theorem 3.2.
We expand the main results to the case mentioned in section 4.
Proof of Theorem 4.1.: Put \(u=(u_{j})_{j=1}^{M}=s_{M}(ABx)\), \(u_{0}=(u_{j}^{0})_{j=1}^{M}=s_{M}(A_{0}B_{0}x)\), \(v=(v_{k})_{k=1}^{K}=\sigma_{K}(Bx)\), and \(v_{0}=(v_{k}^{0})_{k=1}^{K}=\sigma_{K}(B_{0}x)\).
(1) In the case when \(i=1\) and \(j=1\), this is Theorem 3.1.
(2) In the case when \(i=1\) and \(j=2\), we expand \(\log q_{1}^{12}/p_{1}^{12}\) as the following:
\[\log\frac{q_{1}^{12}(y,c|x)}{p_{1}^{12}(y,c|A,B,x)} =\log\frac{\exp\left(-\frac{1}{2}\|y-A_{0}B_{0}x\|^{2}\right) \left(\prod_{k=1}^{K}(\sigma_{K}(B_{0}x))_{k}^{c_{k}}(1-(\sigma_{K}(B_{0}x))_{ k})^{1-c_{k}}\right)^{\gamma}}{\exp\left(-\frac{1}{2}\|y-ABx\|^{2}\right) \left(\prod_{k=1}^{K}(\sigma_{K}(Bx))_{k}^{c_{k}}(1-(\sigma_{K}(Bx))_{k})^{1- c_{k}}\right)^{\gamma}} \tag{130}\] \[=-\frac{1}{2}(\|y\|^{2}-2\langle y,A_{0}B_{0}x\rangle+\|A_{0}B_{0 }x\|^{2})+\gamma\log\prod_{k=1}^{K}(v_{k}^{0})^{c_{k}}(1-v_{k}^{0})^{1-c_{k}}\] (131) \[\quad+\frac{1}{2}(\|y\|^{2}-2\langle y,ABx\rangle+\|ABx\|^{2})- \gamma\log\prod_{k=1}^{K}(v_{k})^{c_{k}}(1-v_{k})^{1-c_{k}}\] (132) \[=\frac{1}{2}(\|ABx\|^{2}-2\langle y,(AB-A_{0}B_{0})x\rangle-\|A_{ 0}B_{0}x\|^{2})\] (133) \[\quad+\gamma\log\frac{\prod_{k=1}^{K}(v_{k}^{0})^{c_{k}}(1-v_{k}^ {0})^{1-c_{k}}}{\prod_{k=1}^{K}(v_{k})^{c_{k}}(1-v_{k})^{1-c_{k}}} \tag{134}\]
Integrating by \(dydcq_{1}^{12}(y,c|x)\), we have
\[\iint dydcq_{1}^{12}(y,c|x)\log\frac{q_{1}^{12}(y,c|x)}{p_{1}^{1 2}(y,c|A,B,x)} \tag{135}\] \[=\frac{1}{2}\|(AB-A_{0}B_{0})x\|^{2}+\gamma\sum_{c=(c_{k})_{k=1}^ {K}\in\{0,1\}^{K}}\prod_{l=1}^{K}(v_{l}^{0})^{c_{l}}(1-v_{l}^{0})^{1-c_{l}} \log\frac{\prod_{k=1}^{K}(v_{k}^{0})^{c_{k}}(1-v_{k}^{0})^{1-c_{k}}}{\prod_{k= 1}^{K}(v_{k})^{c_{k}}(1-v_{k})^{1-c_{k}}}. \tag{136}\]
According to Lemma B.4, the second term \(\psi(v)\) is evaluated by \(\|v-v_{0}\|^{2}\) as shown below; there are positive constants \(\alpha_{1},\alpha_{2}>0\) such that
\[\alpha_{1}\|v-v_{0}\|^{2}\leqq\psi(v)\leqq\alpha_{2}\|v-v_{0}\|^{2}, \tag{137}\]
where
\[\psi(v)=\gamma\sum_{c=(c_{k})_{k=1}^{K}\in\{0,1\}^{K}}\prod_{k=1}^{K}(v_{k}^{0})^{ c_{k}}(1-v_{k}^{0})^{1-c_{k}}\log\frac{\prod_{k=1}^{K}(v_{k}^{0})^{c_{k}}(1-v_{k}^{0 })^{1-c_{k}}}{\prod_{k=1}^{K}(v_{k})^{c_{k}}(1-v_{k})^{1-c_{k}}}. \tag{138}\]
Let \(\beta_{1}=\min\{1,\alpha_{1}\}\) and \(\beta_{2}=\max\{1,\alpha_{2}\}\). Because \(\|(AB-A_{0}B_{0})x\|^{2}/2\geqq 0\) holds, adding it to the both sides, we have
\[\beta_{1}\left(\frac{1}{2}\|(AB-A_{0}B_{0})x\|^{2}+\|v-v_{0}\|^{2 }\right) \leqq\frac{1}{2}\|(AB-A_{0}B_{0})x\|^{2}+\psi(v) \tag{139}\] \[\leqq\beta_{2}\left(\frac{1}{2}\|(AB-A_{0}B_{0})x\|^{2}+\|v-v_{0} \|^{2}\right), \tag{140}\]
Thus, we should consider \(\|(AB-A_{0}B_{0})x\|^{2}/2+\|v-v_{0}\|^{2}\). With \(v=\sigma_{K}(Bx)\) and \(v_{0}=\sigma_{K}(B_{0}x)\), we have
\[\frac{1}{2}\|(AB-A_{0}B_{0})x\|^{2}+\|v-v_{0}\|^{2}=\frac{1}{2}\|(AB-A_{0}B_{0 })x\|^{2}+\|\sigma_{K}(Bx)-\sigma_{K}(B_{0}x)\|^{2}. \tag{141}\]
On account of Lemma B.2, the average of the first term by \(dxq^{\prime}(x)\) is equivalent to \(\|AB-A_{0}B_{0}\|^{2}\). On the other hand, since \(\sigma_{K}\) is analytic and isomorphic onto its image and has no parameters, the averaged second term has the same RLCT as the linear regression \(Bx-B_{0}x\) (both models are regular), i.e.
\[\int dxq^{\prime}(x)\|\sigma_{K}(Bx)-\sigma_{K}(B_{0}x)\|^{2} \sim\int dxq^{\prime}(x)\|(B-B_{0})x\|^{2} \tag{142}\] \[\sim\|B-B_{0}\|^{2}. \tag{143}\]
Hence,
\[\iiint dxdydcq^{\prime}(x)q_{1}^{12}(y,c|x)\log\frac{q_{1}^{12}(y,c|x)}{p_{1}^ {12}(y,c|A,B,x)}\sim\|AB-A_{0}B_{0}\|^{2}+\|B-B_{0}\|^{2} \tag{144}\]
holds, and this case reduces to Theorem 3.1.
(3) In the case when \(i=2\) and \(j=1\), similar to the case of \((i,j)=(1,2)\), because of
\[\log\frac{q_{1}^{21}(y,c|x)}{p_{1}^{21}(y,c|A,B,x)} =\log\frac{\prod_{j=1}^{M}(s_{M}(A_{0}B_{0}x))_{j}^{y_{j}}\exp \left(-\frac{\gamma}{2}\|c-B_{0}x\|^{2}\right)}{\prod_{j=1}^{M}(s_{M}(ABx))_{j }^{y_{j}}\exp\left(-\frac{\gamma}{2}\|c-Bx\|^{2}\right)} \tag{145}\] \[=\log\prod_{j=1}^{M}(s_{M}(A_{0}B_{0}x))_{j}^{y_{j}}-\frac{\gamma }{2}(\|c\|^{2}-2\langle c,B_{0}x\rangle+\|B_{0}x\|^{2})\] (146) \[\quad-\log\prod_{j=1}^{M}(s_{M}(ABx))_{j}^{y_{j}}+\frac{\gamma}{2 }(\|c\|^{2}-2\langle c,Bx\rangle+\|B_{0}x\|^{2})\] (147) \[=\log\frac{\prod_{j=1}^{M}(s_{M}(A_{0}B_{0}x))_{j}^{y_{j}}}{\prod_ {j=1}^{M}(s_{M}(ABx))_{j}^{y_{j}}}+\frac{\gamma}{2}(\|Bx\|^{2}-2\langle c,(B-B _{0})x\rangle-\|B_{0}x\|^{2}), \tag{148}\]
we have
\[\iint dydcq_{1}^{21}(y,c|x)\log\frac{q_{1}^{21}(y,c|x)}{p_{1}^{21 }(y,c|A,B,x)} \tag{149}\] \[=\sum_{y\in\text{Onehot}(M)}\prod_{j=1}^{M}(u_{j}^{0})^{y_{j}}\log \frac{\prod_{j=1}^{M}(u_{j}^{0})^{y_{j}}}{\prod_{j=1}^{M}(u_{j})^{y_{j}}}+\frac {\gamma}{2}\|(B-B_{0})x\|^{2}, \tag{150}\]
by using \(u=s_{M}(ABx)\) and \(u_{0}=s_{M}(A_{0}B_{0}x)\). Owing to Lemmas B.3 and B.2, the first and second terms averaged by \(dxq^{\prime}(x)\) have the same RLCTs as the averages of \(\sum_{j=1}^{M-1}(u_{j}-u_{j}^{0})^{2}\) and \(\|B-B_{0}\|^{2}\), respectively. Since the map \((w)_{<M}\mapsto(s_{M}(w))_{<M}\) is analytic and isomorphic onto its image, we obtain
\[\int dxq^{\prime}(x)\sum_{j=1}^{M-1}(u_{j}-u_{j}^{0})^{2} =\int dxq^{\prime}(x)\|(u)_{<M}-(u_{0})_{<M}\|^{2} \tag{151}\] \[\sim\int dxq^{\prime}(x)\|(ABx)_{<M}-(A_{0}B_{0}x)_{<M}\|^{2}\] (152) \[=\int dxq^{\prime}(x)\|((A)_{<M}B-(A_{0})_{<M}B_{0})x\|^{2}\] (153) \[\sim\|(A)_{<M}B-(A_{0})_{<M}B_{0}\|^{2}. \tag{154}\]
Therefore, we have
\[\iiint dxdydcq^{\prime}(x)q_{1}^{21}(y,c|x)\log\frac{q_{1}^{21}(y,c|x)}{p_{1} ^{21}(y,c|A,B,x)}\sim\|(A)_{<M}B-(A_{0})_{<M}B_{0}\|^{2}+\|B-B_{0}\|^{2}. \tag{155}\]
Similar to the proof of Theorem 3.1, the zero point set of the above function is \(((A)_{<M},B)=((A_{0})_{<M},B_{0})\). This leads to the following:
\[\lambda_{1}^{21}=\frac{1}{2}(M+N-1)K, \tag{156}\] \[m_{1}^{21}=1. \tag{157}\]
(4) In the case when \(i=2\) and \(j=2\), the KL divergence can be developed in the same way as that in the case of \((i,j)=(1,2)\) and \((2,1)\). Therefore, we have
\[K_{1}^{22}(A,B) =\int dxq^{\prime}(x)\left(\sum_{y\in\mathrm{Onehot}(M)}\prod_{j= 1}^{M}(u_{j}^{0})^{y_{j}}\log\frac{\prod_{j=1}^{M}(u_{j}^{0})^{y_{j}}}{\prod_{ j=1}^{M}(u_{j})^{y_{j}}}\right. \tag{158}\] \[\left.+\gamma\sum_{c=(c_{k})_{k=1}^{K}\in\{0,1\}^{K}}\prod_{k=1}^{ K}(v_{k}^{0})^{c_{k}}(1-v_{k}^{0})^{1-c_{k}}\log\frac{\prod_{k=1}^{K}(v_{k}^{0} )^{c_{k}}(1-v_{k}^{0})^{1-c_{k}}}{\prod_{k=1}^{K}(v_{k})^{c_{k}}(1-v_{k})^{1-c_ {k}}}\right),\] (159) \[u=s_{M}(ABx),\ u_{0}=s_{M}(A_{0}B_{0}x),\ v=\sigma_{K}(Bx),\ v_{ 0}=\sigma_{K}(B_{0}x). \tag{160}\]
Similar to
\[\sum_{y\in\mathrm{Onehot}(M)}\prod_{j=1}^{M}(u_{j}^{0})^{y_{j}}\log\frac{\prod _{j=1}^{M}(u_{j}^{0})^{y_{j}}}{\prod_{j=1}^{M}(u_{j})^{y_{j}}}\sim\|(A)_{<M}B -(A_{0})_{<M}B_{0}\|^{2} \tag{161}\]
when \(i=2\) and \(j=1\) and
\[\gamma\sum_{c=(c_{k})_{k=1}^{K}\in\{0,1\}^{K}}\prod_{k=1}^{K}(v_{k}^{0})^{c_{k} }(1-v_{k}^{0})^{1-c_{k}}\log\frac{\prod_{k=1}^{K}(v_{k}^{0})^{c_{k}}(1-v_{k}^{0 })^{1-c_{k}}}{\prod_{k=1}^{K}(v_{k})^{c_{k}}(1-v_{k})^{1-c_{k}}}\sim\|B-B_{0}\| ^{2} \tag{162}\]
when \(i=1\) and \(j=2\), we obtain
\[K_{1}^{22}(A,B)\sim\|(A)_{<M}B-(A_{0})_{<M}B_{0}\|^{2}+\|B-B_{0}\|^{2}. \tag{163}\]
This is the same as in the case when \(i=2\) and \(j=1\); therefore,
\[\lambda_{1}^{22}=\frac{1}{2}(M+N-1)K, \tag{164}\] \[m_{1}^{22}=1. \tag{165}\]
Based on (1), (2), (3), and (4) noted above, the theorem is therefore proved.
Proof of Theorem 4.2.: Put \(u=(u_{j})_{j=1}^{M}=s_{M}((UVx)^{\mathrm{y}})\), \(u_{0}=(u_{j}^{0})_{j=1}^{M}=s_{M}((U_{0}V_{0}x)^{\mathrm{y}})\), \(v=(v_{k})_{k=1}^{K}=\sigma_{K}((UVx)^{\mathrm{c}})\), and \(v_{0}=(v_{k}^{0})_{k=1}^{K}=\sigma_{K}((U_{0}V_{0}x)^{\mathrm{c}})\). Also, set \(\overline{U}=(U)_{\leqq M}\), \(\overline{U_{0}}=(U_{0})_{\leqq M}\), \(\underline{U}=(U)_{>M}\), and \(\underline{U_{0}}=(U_{0})_{>M}\).
(1) In the case when \(i=1\) and \(j=1\), this is the same as Theorem 3.2.
(2) In the case when \(i=1\) and \(j=2\), using the same expansion method as that of \(K_{1}^{12}(A,B)\) in the proof of Theorem 4.1, we have
\[\log\frac{q_{2}^{12}(y,c|x)}{p_{2}^{12}(y,c|U,V,x)} =\frac{1}{2}(\|(UVx)^{\mathrm{y}}\|^{2}-2\langle y,(UVx)^{\mathrm{ y}}-(U_{0}V_{0}x)^{\mathrm{y}}\rangle-\|(U_{0}V_{0}x)^{\mathrm{y}}\|^{2}) \tag{166}\] \[\quad+\log\frac{\prod_{k=1}^{K}(v_{k}^{0})^{c_{k}}(1-v_{k} ^{0})^{1-c_{k}}}{\prod_{k=1}^{K}(v_{k})^{c_{k}}(1-v_{k})^{1-c_{k}}}\] (167) \[=\frac{1}{2}(\|(UVx)_{\leqq M}\|^{2}-2\langle y,(UVx)_{\leqq M}-( U_{0}V_{0}x)_{\leqq M}\rangle-\|(U_{0}V_{0}x)_{\leqq M}\|^{2})\] (168) \[\quad+\log\frac{\prod_{k=1}^{K}(v_{k}^{0})^{c_{k}}(1-v_{k} ^{0})^{1-c_{k}}}{\prod_{k=1}^{K}(v_{k})^{c_{k}}(1-v_{k})^{1-c_{k}}}\] (169) \[=\frac{1}{2}(\|\overline{U}Vx\|^{2}-2\langle y,(\overline{U}V- \overline{U_{0}}V_{0})x\rangle-\|\overline{U_{0}}V_{0}x\|^{2})\] (170) \[\quad+\log\frac{\prod_{k=1}^{K}(v_{k}^{0})^{c_{k}}(1-v_{k} ^{0})^{1-c_{k}}}{\prod_{k=1}^{K}(v_{k})^{c_{k}}(1-v_{k})^{1-c_{k}}}. \tag{171}\]
Averaging in the same way as in the proof of Theorem 4.1 when \((i,j)=(1,2)\) and applying Lemma B.4, we obtain
\[K_{2}^{12}(U,V) =\int dxq^{\prime}(x)\left(\|(\overline{U}V-\overline{U_{0}}V_{0})x\|^{2}\right. \tag{172}\] \[\quad+\gamma\sum_{c=(c_{k})_{k=1}^{K}\in\{0,1\}^{K}}\prod_{k=1}^{K}(v_{k}^{0})^{c_{k}}(1-v_{k}^{0})^{1-c_{k}}\log\frac{\prod_{k=1}^{K}(v_{k}^{0})^{c_{k}}(1-v_{k}^{0})^{1-c_{k}}}{\prod_{k=1}^{K}(v_{k})^{c_{k}}(1-v_{k})^{1-c_{k}}}\right)\] (173) \[\sim\int dxq^{\prime}(x)\left(\|(\overline{U}V-\overline{U_{0}}V_{0})x\|^{2}+\|\sigma_{K}(\underline{U}Vx)-\sigma_{K}(\underline{U_{0}}V_{0}x)\|^{2}\right). \tag{174}\]
Since \(\sigma_{K}\) is an analytic isomorphism onto its image, this leads to the following:
\[K_{2}^{12}(U,V) \sim\int dxq^{\prime}(x)\left(\|(\overline{U}V-\overline{U_{0}}V _{0})x\|^{2}+\|\sigma_{K}(\underline{U}Vx)-\sigma_{K}(\underline{U_{0}}V_{0}x) \|^{2}\right) \tag{175}\] \[\sim\|\overline{U}V-\overline{U_{0}}V_{0}\|^{2}+\|\underline{U}V- \underline{U_{0}}V_{0}\|^{2}\] (176) \[=\|UV-U_{0}V_{0}\|^{2}. \tag{177}\]
Thus, all we have to compute is the RLCT of \(\|UV-U_{0}V_{0}\|^{2}\). This is the same as Theorem 3.2.
(3) In the case when \(i=2\) and \(j=1\). Since the map \((w)_{<M}\mapsto s_{M}(w)\), \(w\in\mathbb{R}^{M-1}\), is an analytic isomorphism onto its image, using the same method as in the case of \(i=1\) and \(j=2\), we have
\[K_{2}^{21}(U,V) \sim\int dxq^{\prime}(x)\left(\|s_{M}(\overline{U}Vx)-s_{M}( \overline{U_{0}}V_{0}x)\|^{2}+\|(\underline{U}V-\underline{U_{0}}V_{0})x\|^{2}\right) \tag{178}\] \[\sim\|(U)_{<M}V-(U_{0})_{<M}V_{0}\|^{2}+\|\underline{U}V-\underline {U_{0}}V_{0}\|^{2}\] (179) \[=\|(U)_{<M}V-(U_{0})_{<M}V_{0}\|^{2}+\|(U)_{>M}V-(U_{0})_{>M}V_{0} \|^{2}\] (180) \[=\|(U)_{\neq M}V-(U_{0})_{\neq M}V_{0}\|^{2}. \tag{181}\]
\((U)_{\neq M}\in\mathrm{M}(M+K-1,H)\) and \((U_{0})_{\neq M}\in\mathrm{M}(M+K-1,H_{0})\) hold. Therefore, the RLCT of \(\|(U)_{\neq M}V-(U_{0})_{\neq M}V_{0}\|^{2}\) is calculated by substituting \(M-1\) for \(M\) in Theorem 3.2. This leads to the following:
\[\lambda_{2}^{21} =\lambda_{2}(N,H,M-1,K,H_{0}), \tag{182}\] \[m_{2}^{21} =m_{2}(N,H,M-1,K,H_{0}). \tag{183}\]
(4) In the case when \(i=2\) and \(j=2\), combining the above methods, we have
\[K_{2}^{22}(U,V) \sim\int dxq^{\prime}(x)\left(\|s_{M}(\overline{U}Vx)-s_{M}( \overline{U_{0}}V_{0}x)\|^{2}+\|\sigma_{K}(\underline{U}Vx)-\sigma_{K}( \underline{U_{0}}V_{0}x)\|^{2}\right) \tag{184}\] \[\sim\|(U)_{<M}V-(U_{0})_{<M}V_{0}\|^{2}+\|\underline{U}V- \underline{U_{0}}V_{0}\|^{2}\] (185) \[=\|(U)_{\neq M}V-(U_{0})_{\neq M}V_{0}\|^{2}. \tag{186}\]
This reduces to the case when \(i=2\) and \(j=1\). Therefore, we obtain
\[\lambda_{2}^{22} =\lambda_{2}(N,H,M-1,K,H_{0}), \tag{187}\] \[m_{2}^{22} =m_{2}(N,H,M-1,K,H_{0}). \tag{188}\]
Therefore, based on (1), (2), (3), and (4), the theorem is proved.
Lastly, we prove Proposition 5.1.
Proof of Proposition 5.1.: We expand \(\lambda_{1}-\lambda_{2}\) and solve \(\lambda_{1}>\lambda_{2}\) for each case. Once these are resolved, the opposite case \(\lambda_{1}\leqq\lambda_{2}\) follows immediately.
(1) In the case of \(M+K+H_{0}\leqq N+H\) and \(N+H_{0}\leqq M+K+H\) and \(H+H_{0}\leqq N+M+K\). When \(N+M+K+H+H_{0}\) is even, we have
\[\lambda_{1}-\lambda_{2} =\frac{1}{8}\{4NK-2(H+H_{0})(N+M+K)+(N-M-K)^{2}+(H+H_{0})^{2}\} \tag{189}\] \[=\frac{1}{8}\{K^{2}+2(M+N-H-H_{0})K+N^{2}+M^{2}+H^{2}+H_{0}^{2}\] (190) \[\quad-2HN-2H_{0}N-2MN-2HM-2H_{0}M+2HH_{0}\}\] (191) \[=\frac{1}{8}\{K^{2}+2(M+N-H-H_{0})K+(M+N-H-H_{0})^{2}-4MN\}\] (192) \[=\frac{1}{8}\{(K+M+N-H-H_{0})^{2}-4MN\}. \tag{193}\]
The rightmost side of the above equation is a quadratic function of \(K\) whose leading coefficient is positive. By the assumption of this case, \(H+H_{0}\leqq M+N+K\), i.e., \(K\geqq H+H_{0}-M-N\) holds. Thus, solving \(\lambda_{1}>\lambda_{2}\), we obtain
\[K >H+H_{0}-M-N+2\sqrt{MN} \tag{194}\] \[=H+H_{0}-(\sqrt{M}-\sqrt{N})^{2}. \tag{195}\]
The converse can also be verified as follows:
\[\lambda_{1}>\lambda_{2}\Leftrightarrow K>H+H_{0}-(\sqrt{M}-\sqrt{N})^{2}. \tag{196}\]
When \(N+M+K+H+H_{0}\) is odd, by following the same procedure as shown above, we have
\[\lambda_{1}-\lambda_{2}=\frac{1}{8}\{(K+M+N-H-H_{0})^{2}-4MN-1\}. \tag{197}\]
and
\[\lambda_{1}>\lambda_{2}\Leftrightarrow K>H+H_{0}-M-N+\sqrt{4MN +1}. \tag{198}\]
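Since the case-(1) expressions above are fully explicit, the phase boundary can be spot-checked numerically. The following minimal sketch (ours, not part of the paper; the helper names are invented) enumerates small integer parameters satisfying the defining inequalities of case (1) and asserts that the sign of \(\lambda_{1}-\lambda_{2}\) computed from (193) and (197) agrees with the thresholds on \(K\) in (196) and (198):

```python
# Numerical spot-check (ours) of the case-(1) equivalences: the sign of
# lambda_1 - lambda_2 from (193)/(197) versus the thresholds (196)/(198).
import itertools
import math

def lam_diff8(M, N, H, H0, K):
    """8 * (lambda_1 - lambda_2) in case (1); (193) for even parity, (197) for odd."""
    s = (K + M + N - H - H0) ** 2 - 4 * M * N
    if (N + M + K + H + H0) % 2 == 1:  # the odd-parity case carries the extra -1
        s -= 1
    return s

def threshold(M, N, H, H0, K):
    """The equivalent condition on K: (196) for even parity, (198) for odd."""
    if (N + M + K + H + H0) % 2 == 0:
        return K > H + H0 - (math.sqrt(M) - math.sqrt(N)) ** 2
    return K > H + H0 - M - N + math.sqrt(4 * M * N + 1)

for M, N, H, H0, K in itertools.product(range(1, 6), repeat=5):
    # restrict to the three defining inequalities of case (1)
    if M + K + H0 <= N + H and N + H0 <= M + K + H and H + H0 <= N + M + K:
        assert (lam_diff8(M, N, H, H0, K) > 0) == threshold(M, N, H, H0, K)
```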
(2) In the case of \(N+H<M+K+H_{0}\), we have
\[\lambda_{1}-\lambda_{2} =\frac{1}{2}\{MK+NK-HN-H_{0}(M+K-H)\} \tag{199}\] \[=\frac{1}{2}\{(M+N-H_{0})K-H(N-H_{0})-MH_{0}\}. \tag{200}\]
Hence, we obtain
\[\lambda_{1}>\lambda_{2}\Leftrightarrow(M+N-H_{0})K>H(N-H_{0})+MH_{0}. \tag{201}\]
(3) In the case of \(M+K+H<N+H_{0}\), we have
\[\lambda_{1}-\lambda_{2} =\frac{1}{2}\{MK+NK-H(M+K)-H_{0}(N-H)\} \tag{202}\] \[=\frac{1}{2}\{(M+N-H)K-H_{0}(N-H)-MH\}. \tag{203}\]
Hence, we obtain
\[\lambda_{1}>\lambda_{2}\Leftrightarrow(M+N-H)K>H_{0}(N-H)+MH. \tag{204}\]
(4) In the case of \(N+M+K<H+H_{0}\), we have
\[\lambda_{1}-\lambda_{2} =\frac{1}{2}(MK+NK-NM-NK) \tag{205}\] \[=\frac{1}{2}M(K-N). \tag{206}\]
Hence, we obtain
\[\lambda_{1}>\lambda_{2}\Leftrightarrow K>N. \tag{207}\]
Therefore, based on (1), (2), (3), and (4), the proposition is proved.
|
2305.10952 | Actor-Critic Methods using Physics-Informed Neural Networks: Control of
a 1D PDE Model for Fluid-Cooled Battery Packs | This paper proposes an actor-critic algorithm for controlling the temperature
of a battery pack using a cooling fluid. This is modeled by a coupled 1D
partial differential equation (PDE) with a controlled advection term that
determines the speed of the cooling fluid. The Hamilton-Jacobi-Bellman (HJB)
equation is a PDE that evaluates the optimality of the value function and
determines an optimal controller. We propose an algorithm that treats the value
network as a Physics-Informed Neural Network (PINN) to solve for the
continuous-time HJB equation rather than a discrete-time Bellman optimality
equation, and we derive an optimal controller for the environment that we
exploit to achieve optimal control. Our experiments show that a hybrid-policy
method that updates the value network using the HJB equation and updates the
policy network identically to PPO achieves the best results in the control of
this PDE system. | Amartya Mukherjee, Jun Liu | 2023-05-18T13:21:38Z | http://arxiv.org/abs/2305.10952v1 | Actor-Critic Methods using Physics-Informed Neural Networks: Control of a 1D PDE Model for Fluid-Cooled Battery Packs
###### Abstract
This paper proposes an actor-critic algorithm for controlling the temperature of a battery pack using a cooling fluid. This is modeled by a coupled 1D partial differential equation (PDE) with a controlled advection term that determines the speed of the cooling fluid. The Hamilton-Jacobi-Bellman (HJB) equation is a PDE that evaluates the optimality of the value function and determines an optimal controller. We propose an algorithm that treats the value network as a Physics-Informed Neural Network (PINN) to solve for the continuous-time HJB equation rather than a discrete-time Bellman optimality equation, and we derive an optimal controller for the environment that we exploit to achieve optimal control. Our experiments show that a hybrid-policy model that updates the value network using the HJB equation and updates the policy network identically to PPO achieves the best results in the control of this PDE system.
**keywords:** Actor-Critic Method, Physics-Informed Neural Network, Fluid Cooled Battery Packs, Hamilton Jacobi Bellman Equation
## 1 Introduction
In recent years, there has been a growing interest in Reinforcement Learning (RL) for continuous control problems. RL has shown promising results in environments with unknown dynamics through a balance of exploration in the environment and exploitation of the learned policies. Since the advent of REINFORCE with Baseline, the value network in RL algorithms has been shown to be useful for finding optimal policies as a critic network (Sutton & Barto (2018)). This value network continues to be used in state-of-the-art RL algorithms today.
Proximal Policy Optimization (PPO) is an actor-critic method introduced by Schulman et al. (2017). It limits the update of the policy network to a trust region at every iteration. This ensures that the objective function of the policy network is a good approximation of the true objective function and forces smooth and reliable updates to the value network as well.
In discrete-time RL, the value function estimates the return from a given state as a sum of rewards over time steps. This value function is obtained by solving the Bellman Optimality Equation. On the other hand, in continuous-time RL, the value function estimates the return from a given state as an integral over time. This value function is obtained by solving a partial differential equation (PDE) known as the Hamilton-Jacobi-Bellman (HJB) equation (Munos (1999)). Both equations are difficult to solve analytically and numerically, and therefore the RL agent must explore the environment and make successive estimations.
The introduction of physics-informed neural networks (PINNs) by Raissi et al. (2019) has led to significant advancements in scientific machine learning. PINNs leverage auto-differentiation to compute derivatives of neural networks with respect to their inputs and model parameters exactly. This enables the laws of physics (described by ODEs or PDEs) governing the dataset of interest to act as a regularization term for the neural network. As a result, PINNs outperform regular neural networks on such datasets by exploiting the underlying physics of the data.
Control of PDEs is considered to be challenging compared to control of ODEs. Works such as Vazquez & Krstic (2017) introduced the backstepping method for the boundary control of reaction-advection-diffusion equations using kernels. For PDE control problems where the control input is encoded in the PDE, the HJB equation has been used (Sirignano & Spiliopoulos (2018), Kalise & Kunisch (2017)). Results from the control of ODEs have also been applied by writing the PDE as an infinite-dimensional ODE.
To the best of our knowledge, this paper is the first to explore the intersection between PINNs and RL in a PDE control problem. We discretize the PDE as an ODE to derive an HJB equation. In order to force the convergence of the value network in PPO towards the solution of the HJB equation, we utilize PINNs to encode this PDE and train the value network. Upon deriving the HJB equation, we also derive an optimal controller. We introduce two algorithms, HJB value iteration and Hamilton-Jacobi-Bellman Proximal Policy Optimization (HJBPPO), that train the value function using the HJB equation and use the optimal controller. The HJBPPO algorithm shows superior performance compared to PPO and HJB value iteration on the PackCooling environment.
## 2 Preliminaries
### The 1D pack cooling problem
The 1D system for fluid-cooled battery packs was introduced by Kato & Moura (2021) and is modeled by the following coupled PDE:
\[u_{t}(x,t)= -D(x,t)u_{xx}(x,t)+h(x,t,u(x,t))+\frac{1}{R(x,t)}(w-u) \tag{1}\] \[w_{t}=-\sigma(t)w_{x}+\frac{1}{R(x,t)}(u-w), \tag{2}\]
with the following boundary conditions:
\[u_{x}(0,t)=u_{x}(1,t)=0 \tag{3}\]
\[w(0,t)=U(t) \tag{4}\]
where \(u(x,t)\) is the heat distribution across the battery pack, \(w(x,t)\) is the heat distribution across the cooling fluid, \(D(x,t)\) is the thermal diffusion constant across the battery pack, \(R(x,t)\) is the heat resistance between the battery pack and the cooling fluid, \(h(x,t,u)\) is the internal heat generation in the battery pack, \(U(t)\) is the temperature of the cooling fluid at the boundary, and \(\sigma(t)\) is the transport speed of the cooling fluid, which will be the controller in this paper.
The objective of the control problem in this paper is to determine \(\sigma(t)\) such that \(u(x,t)\) is as close to zero as possible. The transport speed \(\sigma(t)\) is strictly non-negative so the cooling fluid travels only in the positive \(x\)-direction. We restrict \(\sigma(t)\) to \([0,1]\).
### Hamilton-Jacobi-Bellman equation
To achieve optimal control for the 1D PDE pack cooling problem, we will utilize works from control theory for ODEs. Consider a controlled dynamical system modeled by the following equation:
\[\dot{x}=f(x,\sigma),\quad x(t_{0})=x_{0}, \tag{5}\]
where \(x(t)\) is the state and \(\sigma(t)\) is the control input. In control theory, the optimal value function \(V^{*}(x)\) is useful towards finding a solution to control problems (Munos et al. (1999)):
\[V^{*}(x)=\sup_{\sigma}\frac{1}{\Delta t}\int_{t_{0}}^{\infty}\gamma^{\frac{\tau}{\Delta t}}L(x(\tau;t_{0},x_{0},\sigma(\cdot)),\sigma(\tau))d\tau, \tag{6}\]
where \(L(x,\sigma)\) is the reward function, \(\Delta t\) is the time step size for numerical simulation, and \(\gamma\) is the discount factor. The following theorem introduces a criterion for assessing the optimality of the value function (Liberzon (2012), Kamalapurkar et al. (2018)).
**Theorem 2.1**.: _A function \(V(x)\) is the optimal value function if and only if:_
1. \(V\in C^{1}(\mathbb{R}^{n})\) _and_ \(V\) _satisfies the Hamilton-Jacobi-Bellman (HJB) Equation_ \[(\gamma-1)V(x)+\sup_{\sigma\in U}\{L(x,\sigma)+\gamma\Delta t\nabla_{x}V^{T}(x )f(x,\sigma)\}=0\] (7) _for all_ \(x\in\mathbb{R}^{n}\)_._
2. _For all_ \(x\in\mathbb{R}^{n}\)_, there exists a controller_ \(\sigma^{*}(\cdot)\) _such that:_ \[(\gamma-1)V(x)+L(x,\sigma^{*}(x))+\gamma\Delta t\nabla_{x}V^{T}(x )f(x,\sigma^{*}(x))\] \[=(\gamma-1)V(x)+\sup_{\hat{\sigma}\in U}\{L(x,\hat{\sigma})+ \gamma\Delta t\nabla_{x}V^{T}(x)f(x,\hat{\sigma})\}.\] (8)
The proof of part 1 of this theorem is in Appendix B. The HJB equation will be used in this paper to determine a new loss function for the value network \(V(x)\) in this pack cooling problem and an optimal controller \(\sigma^{*}(t)\).
## 3 Related work
The HJB equation we intend to solve is a first-order quasi-linear PDE. The use of HJB equations for continuous RL has sparked interest in recent years among the RL community as well as the control theory community and has led to promising works. Kim et al. (2021) introduced an HJB equation for Q Networks and used it to derive a controller that is Lipschitz continuous in time. This algorithm has shown improved performance over Deep Deterministic Policy Gradient (DDPG) in three out of the four tested MuJoCo environments without the need for an actor network. Wiltzer et al. (2022) introduced a distributional HJB equation to train the FD-WGF Q-Learning algorithm. This models return distributions more accurately compared to Quantile Regression TD (QTD) for a particle-control task. Finite difference methods are used to solve this HJB equation numerically. Furthermore, the authors mentioned the use of auto-differentiation for increased accuracy of the distributional HJB equation as a potential area for future research in their conclusion.
The use of neural networks to solve the HJB equation has been an area of interest across multiple research projects. Jiang et al. (2016) uses a structured Recurrent Neural Network to solve the HJB equation and achieve optimal control for the Dubins car problem. Tassa and Erez (2007) uses the Pineda architecture (Pineda (1987)) to estimate partial derivatives of the value function with respect to its inputs. They used the iterative least squares method to solve the HJB equation. This algorithm shows convergence in several control problems without the need for an initial stable policy.
RL for PDE control is a challenging field that has been of interest to the machine learning community lately. Farahmand et al. (2017) introduces the Deep Fitted Q Iteration to solve a boundary control problem for a 2D convection-diffusion equation. The model stabilizes the temperature in the environment without encoding any knowledge of the governing PDE. Sirignano and Spiliopoulos (2018) develops the DGM algorithm to solve PDEs. They use auto-differentiation to compute first-order derivatives and Monte Carlo methods to estimate higher-order derivatives. This algorithm was used to solve the HJB equation to control a stochastic heat equation and achieved an error of 0.1%. Kalise and Kunisch (2017) approximates the solution to the HJB equation using polynomials. This was used to control a semilinear parabolic PDE.
PINNs have been used for the control of dynamical systems in recent works. Antonelo et al. (2021) uses a PINN for model predictive control of a dynamical system over a long time interval. The PINN takes the initial condition, the control input, and the spatial and temporal coordinates as input and estimates the trajectory of the dynamical system while repeatedly shifting the time interval towards zero to allow for long-range interval predictions. Nicodemus et al. (2022) uses a PINN-based model predictive control for the tracking problem of a multi-link manipulator. Djeumou et al. (2022) uses a PINN to incorporate partial
knowledge about a dynamical system such as symmetry and equilibrium points to estimate the trajectory of a controlled dynamical system.
The use of a PINN to solve the HJB equation for the value network was done by Nakamura-Zimmerer et al. (2020) in an optimal feedback control problem setting. The paper achieves results similar to that of the true optimal control function in high-dimensional problems.
## 4 HJB control of the pack cooling problem
In this section, we will connect the pack cooling PDE model with the HJB equation to derive a new loss function for the value network \(V(u,w)\) and an optimal controller. The HJB equation has been useful in finding optimal controllers for systems modeled by ODEs. In Kalise & Kunisch (2017), the controlled PDE system has been discretized in space to form an ODE that can be used in the HJB equation. Similarly, to form the HJB equation for this paper, we need to write equations 1 and 2 as an ODE.
### ODE discretization of PDE
We can write equations 1 and 2 as an ODE by discretizing them in the \(x\) variable. By letting \(\Delta x=\frac{1}{N_{x}}\), where \(N_{x}\) is the number of points we choose to discretize the system along the x-axis, we arrive at a \(2N_{x}\)-dimensional ODE:
\[\dot{\hat{U}}=-DA\hat{U}+h(\hat{U})+\frac{1}{R}(\hat{W}-\hat{U}) \tag{9}\]
\[\dot{\hat{W}}=-\sigma(t)B\hat{W}+\frac{1}{R}(\hat{U}-\hat{W}), \tag{10}\]
where
\[\hat{W}(t)=\begin{pmatrix}w(x_{1},t)\\ \vdots\\ w(x_{N_{x}},t)\end{pmatrix},\hat{U}(t)=\begin{pmatrix}u(x_{1},t)\\ \vdots\\ u(x_{N_{x}},t)\end{pmatrix},\]
and \(A\hat{U}\) is a second-order discretization of \(u_{xx}\), e.g.,
\[[A\hat{U}]_{k}=\frac{u(x_{k+1},t)-2u(x_{k},t)+u(x_{k-1},t)}{\Delta x^{2}},\]
\(B\hat{W}\) is a second-order discretization of \(w_{x}\), e.g.,
\[[B\hat{W}]_{k}=\frac{w(x_{k+1},t)-w(x_{k-1},t)}{2\Delta x}.\]
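As a concrete illustration, the following NumPy sketch (ours, not the code released with the paper) assembles these operators and the right-hand side of equations (9)-(10). The interior stencils match the definitions above; the boundary rows are one assumed treatment of conditions (3)-(4), and the choices of \(D\), \(R\), and \(h\) are placeholders.

```python
# A minimal sketch (ours) of the spatial discretization in equations (9)-(10).
# Interior stencils match A*U ~ u_xx and B*W ~ w_x as defined above; boundary
# rows are an assumed treatment of (3)-(4); D, R, h are placeholder choices.
import numpy as np

def build_operators(Nx):
    dx = 1.0 / Nx
    A = np.zeros((Nx, Nx))  # A @ U approximates u_xx (second-order central)
    B = np.zeros((Nx, Nx))  # B @ W approximates w_x (second-order central)
    for k in range(1, Nx - 1):
        A[k, k - 1 : k + 2] = [1.0, -2.0, 1.0]
        B[k, k - 1], B[k, k + 1] = -0.5, 0.5
    # Neumann u_x(0,t) = u_x(1,t) = 0 via mirrored ghost points
    A[0, 0], A[0, 1] = -1.0, 1.0
    A[-1, -2], A[-1, -1] = 1.0, -1.0
    # one-sided first differences for w at the ends; the inflow value
    # w(0,t) = U(t) would enter the first row as a forcing term
    B[0, 0], B[0, 1] = -1.0, 1.0
    B[-1, -2], B[-1, -1] = -1.0, 1.0
    return A / dx**2, B / dx

def rhs(U, W, A, B, sigma, D=1.0, R=2.0, h=lambda u: np.zeros_like(u)):
    dU = -D * (A @ U) + h(U) + (W - U) / R   # equation (9)
    dW = -sigma * (B @ W) + (U - W) / R      # equation (10)
    return dU, dW

# usage: one explicit Euler step
A, B = build_operators(Nx=50)
U, W = np.zeros(50), -2.0 * np.ones(50)
dt = 1e-3
dU, dW = rhs(U, W, A, B, sigma=0.5)
U, W = U + dt * dU, W + dt * dW
```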
### Derivation of the optimal controller
The ODE system derived in section 4.1 can be used in the HJB equation to determine a loss function and an optimal controller.
**Theorem 4.1**.: _Let \(u(\cdot,t),w(\cdot,t)\in L_{2}[0,1]\). With \(\sigma(t)\in[0,1]\) and the reward function \(L(U_{t},W_{t},\sigma_{t})=-||U_{t+1}||_{2}^{2}\Delta x\), the HJB equation for the 1D pack cooling problem is:_
\[(\gamma-1)V-||u(\cdot,t+\Delta t)||^{2}\] \[+\langle V_{u}(u(\cdot,t),w(\cdot,t)),u_{t}(\cdot,t)\rangle\] \[+\frac{1}{R}\langle V_{w}(u(\cdot,t),w(\cdot,t)),u(\cdot,t)-w( \cdot,t)\rangle\] \[+\max(0,-\langle V_{w}(u(\cdot,t),w(\cdot,t)),w_{x}(\cdot,t) \rangle)=0 \tag{11}\]
_where \(||\cdot||\) is the \(L_{2}[0,1]\) norm and \(\langle\cdot,\cdot\rangle\) is the \(L_{2}[0,1]\) inner product._
The proof of this theorem is in Appendix C. Theorem 2.1 shows that there exists a controller that satisfies equation 8. This allows us to determine an optimal controller, as shown in the following corollary:
**Corollary 4.2**.: _Let \(w(\cdot,t)\in L_{2}[0,1]\). With \(\sigma(t)\in[0,1]\) and the reward function \(L(U_{t},W_{t},\sigma_{t})=-||U_{t+1}||_{2}^{2}\Delta x\), provided the optimal value function \(V^{*}(u,w)\) with \(V^{*}_{w}(\cdot,t)\in L_{2}[0,1]\), the optimal controller for the 1D pack cooling problem is:_
\[\sigma^{*}(t)=\begin{cases}1,&\langle V^{*}_{w}(u(\cdot,t),w(\cdot,t)),w_{x}( \cdot,t)\rangle<0,\\ 0,&\text{otherwise},\end{cases} \tag{12}\]
_where \(\langle\cdot,\cdot\rangle\) is the \(L_{2}[0,1]\) inner product._
The proof of this corollary is in Appendix D. These results will be used in our algorithms to achieve optimal control of the pack cooling problem.
## 5 Algorithm
For the control of the PDE, we introduce two algorithms. The first algorithm, called HJB Value Iteration, uses only a value network and exploits the HJB equation and optimal controller derived in Theorem 4.1 and Corollary 4.2. The second algorithm, called HJBPPO, is a hybrid-policy model that uses policy network updates from PPO and value network updates from HJB Value Iteration.
To define these algorithms, we first define three loss functions. The first loss function is derived from the proof of Theorem 4.1.
\[MSE_{f}=\frac{1}{T}\sum_{t=0}^{T-1} ((\gamma-1)V(\hat{U}_{t},\hat{W}_{t})\] \[-||\hat{U}_{t+1}||_{2}^{2}\Delta x\] \[+\nabla_{U}V^{T}(\hat{U}_{t},\hat{W}_{t})\hat{\hat{U}}_{t}\Delta t\] \[+\frac{1}{R}\nabla_{W}V^{T}(\hat{U}_{t}-\hat{W}_{t})\Delta t\] \[+\max(0,-\nabla_{W}V^{T}B\hat{W}))^{2}\Delta t \tag{13}\]
The second loss function provides an initial condition. At \(u(x,T)=0,w(x,T)=-R(x,t)=-2\), we have: \(u(x,T)=0\) and \(u_{t}(x,T)=0\). As a result, we have \(L(0,-R(x,t))=0\) and \(L_{t}(0,-R(x,t))=0\). This shows us that \(u(x,T)=0,w(x,T)=-R(x,t)\) is considered a stable point that maximizes the reward. Thus, we choose to let \(V(0,-R(x,t))=0\) be the Dirichlet boundary condition for the HJB equation. This leads to the second loss function:
\[MSE_{u}=(V(0,-R(x,t)))^{2}. \tag{14}\]
Since the value function achieves its global maximum at \(u(x,T)=0,w(x,T)=-2\), this means the derivatives of \(V\) must be zero along all directions. Thus, we choose to let \(\frac{\partial V}{\partial n}=0\) at \(u(x,T)=0,w(x,T)=-R(x,t)\) along every normal be the Neumann boundary condition for the HJB equation. This leads to the third loss function:
\[MSE_{n}=||\nabla_{U}V(0,-R(x,t))||_{2}^{2}+||\nabla_{W}V(0,-R(x,t))||_{2}^{2} \tag{15}\]
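A compact PyTorch rendering of these losses, treating the value network as a PINN via auto-differentiation, might look as follows. This is our sketch, not the released implementation: we assume `value_net` maps the concatenated state \((\hat{U},\hat{W})\) to a scalar, `Udot` is the discretized \(u_{t}\) from equation (9) supplied as a tensor, \(R\) is a scalar, and the residual below is the per-sample summand of equation (13).

```python
# A hedged sketch (ours) of the PINN-style value loss in equations (13)-(15).
# Assumptions: value_net maps the concatenated state (U, W) to a scalar;
# Udot is the discretized u_t from equation (9); B is a torch tensor.
import torch

def value_loss(value_net, U, W, U_next, Udot, B, R, dt, dx, gamma):
    U = U.detach().requires_grad_(True)
    W = W.detach().requires_grad_(True)
    V = value_net(torch.cat([U, W])).squeeze()
    gU, gW = torch.autograd.grad(V, (U, W), create_graph=True)
    residual = ((gamma - 1.0) * V
                - (U_next ** 2).sum() * dx
                + gU @ Udot * dt
                + gW @ (U - W) * dt / R
                + torch.clamp(-(gW @ (B @ W)), min=0.0))
    mse_f = residual ** 2 * dt                       # per-sample term of (13)
    # boundary losses at the stable point u = 0, w = -R
    U0 = torch.zeros_like(U).detach().requires_grad_(True)
    W0 = torch.full_like(W, -R).detach().requires_grad_(True)
    V0 = value_net(torch.cat([U0, W0])).squeeze()
    g0U, g0W = torch.autograd.grad(V0, (U0, W0), create_graph=True)
    mse_u = V0 ** 2                                  # equation (14)
    mse_n = (g0U ** 2).sum() + (g0W ** 2).sum()      # equation (15)
    return mse_f + mse_u + mse_n
```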
We derived an optimal controller in Corollary 4.2. Gym environments recommend that actions be in the range \([-1,1]\). We can use the proof of the optimal controller in Appendix D to derive a way of selecting actions:
\[a_{t}=-\text{sign}(\nabla_{W}V^{T}B\hat{W}(t)) \tag{16}\]
The algorithms introduced in this paper will focus on minimizing all of the loss functions defined above and using the optimal controller.
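In practice, the action rule in equation (16) can be computed with auto-differentiation; a hedged sketch (ours, reusing the scalar-output `value_net` assumption from above):

```python
# A hedged sketch (ours) of equation (16): differentiate the value network
# with respect to the fluid state W and take -sign(grad_W(V)^T B W).
import torch

def hjb_action(value_net, U, W, B):
    W = W.detach().requires_grad_(True)
    V = value_net(torch.cat([U.detach(), W])).squeeze()
    grad_W, = torch.autograd.grad(V, W)
    return -torch.sign(grad_W @ (B @ W.detach()))
```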
```
1:Initiate value network parameter \(\phi\)
2:Run the control as given in equation (16) in the environment for \(T\) timesteps and observe samples \(\{(s_{t},a_{t},R_{t},s_{t+1})\}_{t=1}^{T}\).
3:Compute the value network loss as: \(J(\phi)=MSE_{f}+MSE_{u}+MSE_{n}\) described in equations (13), (14), and (15)
4:Update \(\phi\leftarrow\phi-\alpha_{2}\nabla_{\phi}J(\phi)\)
5:Run steps 2-4 for multiple iterations
```
**Algorithm 1** HJB Value Iteration
### HJB value iteration
HJB value iteration trains the value network directly, without the need for an actor network. We treat the value network as a PINN, using auto-differentiation to compute the gradient vectors needed for the loss in equation 13 and the control in equation 16. At every time step, it uses the controller given in equation 16. It updates the value network using the loss functions shown above. The method is provided in Algorithm 1.
### HJBPPO
HJBPPO is an algorithm that combines policy optimization from PPO with HJB value iteration. This is implemented by modifying the PPO implementation by Barhate (2021).
To facilitate exploration of the environment and exploitation of the models, we introduce an action selection method that uses the policy network and equation 16 with equal probability, as shown in Algorithm 2. Upon running the policy \(\pi_{\theta}\), we sample from a distribution \(N(\mu,s)\) where \(\mu\) is the output from the policy network. We initialize \(s\) to \(0.3\) and decrease it by \(0.01\) every \(1000\) episodes until it reaches \(0.1\). After sampling an action from the normal distribution, we clip it between \(-1\) and \(1\).
This action selection method ensures that we select actions that are not only in \(\{-1,1\}\) but also in \([-1,1]\). It introduces a new method of exploration of the environment by choosing from two different methods of action selection. Actions selected using equation 16 are also stored in the memory buffer and are used to train the policy network \(\pi_{\theta}\). The method is provided in Algorithm 3.
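In code, this mixed action selection might look like the sketch below (ours; `policy_mu` and `hjb_a` stand for the policy-network mean and the equation-(16) action, respectively):

```python
# A hedged sketch (ours) of the Algorithm 2 action mixing: with probability
# 1/2 sample around the policy mean, otherwise take the equation-(16) action;
# either way, clip to the Gym-recommended action range [-1, 1].
import numpy as np

def select_action(policy_mu, hjb_a, s=0.3, rng=None):
    rng = rng or np.random.default_rng()
    a = rng.normal(policy_mu, s) if rng.random() < 0.5 else hjb_a
    return float(np.clip(a, -1.0, 1.0))
```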
We will train PPO, HJB value iteration, and HJBPPO on the PackCooling environment and compare these algorithms.
## 6 Results
### Training
To ensure the reproducibility of our results, we have posted our code in the following link:
[https://github.com/amartyamukherjee/PPO-PackCooling](https://github.com/amartyamukherjee/PPO-PackCooling). We posted our hyperparameters in Appendix E. The details of the implementation of the PackCooling gym environment are posted in Appendix A. The code was run using Kaggle CPUs. Each algorithm was trained for a million timesteps. Training each algorithm took approximately 5 hours.
```
1:Initiate policy network parameter \(\theta\) and value network parameter \(\phi\)
2:Run action selection as given in algorithm 2 in the environment for \(T\) timesteps and observe samples \(\{(s_{t},a_{t},R_{t},s_{t+1})\}_{t=1}^{T}\).
3:Compute the advantage \(A_{t}\)
4:Compute \(r_{t}(\theta)=\frac{\pi_{\theta}(a_{t}|s_{t})}{\pi_{\theta_{\text{old}}}(a_{t} |s_{t})}\)
5:Compute the objective function of the policy network: \[L(\theta)=\frac{1}{T}\sum_{t=0}^{T-1}\min[r_{t}(\theta)A_{t},\text{clip}(r_{t}( \theta),1-\epsilon,1+\epsilon)A_{t}],\]
6:Update \(\theta\leftarrow\theta+\alpha_{1}\nabla_{\theta}L(\theta)\)
7:Compute the value network loss as: \(J(\phi)=MSE_{f}+MSE_{u}+MSE_{n}\) described in equations (13), (14), and (15)
8:Update \(\phi\leftarrow\phi-\alpha_{2}\nabla_{\phi}J(\phi)\)
9:Run steps 2-8 for multiple iterations
```
**Algorithm 3** HJBPPO
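For completeness, step 5 of Algorithm 3 is the standard PPO clipped objective; a minimal sketch (ours) in PyTorch:

```python
# A minimal sketch (ours) of the clipped policy objective in step 5 of
# Algorithm 3, following PPO (Schulman et al. (2017)); returned negated so
# that gradient descent performs the ascent in step 6.
import torch

def ppo_policy_loss(logp_new, logp_old, adv, eps=0.2):
    ratio = torch.exp(logp_new - logp_old)              # r_t(theta)
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps)
    return -torch.min(ratio * adv, clipped * adv).mean()
```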
### Reward Curves
The reward curves have been plotted in Figure 1, comparing PPO, HJB value iteration, and HJBPPO. Each algorithm was run for 5 different seeds. We plotted the mean reward over the seeds and over 20 consecutive episodes, and shaded the region within 0.2 standard deviations of the mean. HJB value iteration shows the worst performance, as its rewards fall below those of PPO after training for multiple episodes. PPO shows a rapid increase in average rewards after the first episode and a slow increase afterward. HJBPPO shows the best performance, achieving the highest average reward in each episode and an increase in average rewards after training for multiple episodes.
The significantly higher average reward in HJBPPO in the first episode shows that the action selection method described in Algorithm 2 provides a robust strategy to explore the environment and train the models. The higher average rewards are due to the exploitation of the dynamics of the environment as done by the HJB equation.
### Trajectories
The plots of the trajectories have been posted in Appendix F. After training for a million timesteps, we tested our models on the PackCooling environment and produced the plots. These plots were generated using the rendering feature explained in section A.4.
The trajectory of HJB value iteration shows the worst results. \(\sigma(t)\) returns 1.0 only once. It achieves a cumulative reward of \(-7294.51\). Thus, the input of the cooling fluid from the boundary is minimal. As a result of the internal heat generation in the battery pack, \(u(x,t)\) reaches high values of roughly 5 at \(t=10\), and as a result, \(w(x,t)\) also reaches high values of roughly 4. This shows that the training of the value function in HJB value iteration is inadequate and we have not arrived at an optimal controller for the pack cooling problem. This is because exploration of the environment was at a minimum, as we only exploited equation 16 at each time step.
The trajectory of PPO shows that the values \(\sigma(t)\) takes vary widely between neighboring timesteps. It achieves a cumulative reward of \(-3970.02\). Control of the temperature of the battery pack has been achieved, as \(u(x,t)\) takes values between \(-2\) and \(2\) at \(t=10\).
The trajectory of \(u(x,t)\) with HJBPPO shows it takes values between \(-2\) and \(2\) at \(t=10\). The values \(\sigma(t)\) takes have a lower variance between neighboring timesteps compared to PPO. It achieves a cumulative reward of \(-881.55\). For \(t\in[4,6]\), \(u(x,t)\) shows an increasing trend towards \(u=2\). In response, the controller \(\sigma(t)\) took values closer to 1.0 to allow for greater input of cooling fluid from the boundary so that \(u(x,t)\) decreases towards zero. Together with the higher average rewards shown in Figure 1, this indicates that a model that exploits the dynamics of the environment to return a controller outperforms one that returns noisy control centered at \(\sigma=0.5\).
## 7 Conclusion
In this paper, we have introduced two algorithms that use PINNs to solve the pack cooling problem. This paper combines PINNs with RL in a PDE control setting. In the HJB value iteration algorithm, the HJB equation is used to introduce a loss function and a controller using a value network. The HJBPPO algorithm is a hybrid-policy model that combines the training of the value network from HJB value iteration and the training of the policy network from PPO. HJBPPO shows an overall improvement in performance compared to PPO due to its ability to exploit the physics of the environment to improve the learning curve of the agent.
## 8 Future research
Despite showing an overall improvement in the reward curves, the HJBPPO algorithm leaves room for improved RL algorithms using PINNs.
In this paper, we computed the HJB equation by expressing the PDE as an ODE by discretizing in \(x\). This was possible because the pack cooling problem was modeled by 1D PDEs. Existing works such as Sirignano & Spiliopoulos (2018) and Kalise & Kunisch (2017) solve the HJB equation for 1D PDEs by discretizing it in \(x\). It will be interesting to see how HJB control can be extended to higher dimensional PDEs.
The goal of PINNs is to solve PDEs without the need for numerical methods. In this paper, we solved the pack cooling problem numerically using the Crank-Nicolson method and the method of characteristics. An area for further research may be the use of PINNs to solve for the HJB equation and the PDE that governs the dynamics of the system.
In the PackCooling environment, the HJBPPO algorithm showed an improvement compared to PPO. But this is due to the fact that we knew the dynamics of the system, thus allowing for the physics of the environment to be exploited. The environments give all the details of the state needed to choose an action. One limitation of HJBPPO is that it may not perform well in partially observable environments because the estimate of the dynamics of the system may be inaccurate. Deep Transformer Q Network (DTQN) was introduced by Esslinger et al. (2022) and achieves state-of-the-art results in many partially observable
environments. A potential area for further research may be the introduction of an HJB equation that facilitates partial observability. The DTQN algorithm may be improved by incorporating this HJB equation using PINNs.

Figure 1: Reward curves of PPO (red), HJB value iteration (blue), and HJBPPO (green) averaged over 5 seeds. Shaded area indicates 0.2 standard deviations.
|
2303.15568 | Bridging the Gap: Applying Assurance Arguments to MIL-HDBK-516C
Certification of a Neural Network Control System with ASIF Run Time Assurance
Architecture | Recent advances in artificial intelligence and machine learning may soon
yield paradigm-shifting benefits for aerospace systems. However, complexity and
possible continued on-line learning makes neural network control systems (NNCS)
difficult or impossible to certify under the United States Military
Airworthiness Certification Criteria defined in MIL-HDBK-516C. Run time
assurance (RTA) is a control system architecture designed to maintain safety
properties regardless of whether a primary control system is fully verifiable.
This work examines how to satisfy compliance with MIL-HDBK-516C while using
active set invariance filtering (ASIF), an advanced form of RTA not envisaged
by the 516c committee. ASIF filters the commands from a primary controller,
passing on safe commands while optimally modifying unsafe commands to ensure
safety with minimal deviation from the desired control action. This work
examines leveraging the core theory behind ASIF as assurance argument
explaining novel satisfaction of 516C compliance criteria. The result
demonstrates how to support compliance of novel technologies with 516C as well
as elaborate how such standards might be updated for emerging technologies. | Jonathan Rowanhill, Ashlie B. Hocking, Aditya Zutshi, Kerianne L. Hobbs | 2023-03-27T19:35:59Z | http://arxiv.org/abs/2303.15568v1 | Bridging the Gap: Applying Assurance Arguments to MIL-HDBK-516C Certification of a Neural Network Control System with ASIF Run Time Assurance Architecture +
###### Abstract
Recent advances in artificial intelligence and machine learning may soon yield paradigm-shifting benefits for aerospace systems. However, complexity and possible continued on-line learning make neural network control systems (NNCS) difficult or impossible to certify under the United States Military Airworthiness Certification Criteria defined in MIL-HDBK-516C. Run time assurance (RTA) is a control system architecture designed to maintain safety properties regardless of whether a primary control system is fully verifiable. This work examines how to satisfy compliance with MIL-HDBK-516C while using active set invariance filtering (ASIF), an advanced form of RTA not envisaged by the 516C committee. ASIF filters the commands from a primary controller, passing on safe commands while optimally modifying unsafe commands to ensure safety with minimal deviation from the desired control action. This work examines leveraging the core theory behind ASIF as an assurance argument explaining novel satisfaction of 516C compliance criteria. The result demonstrates how to support compliance of novel technologies with 516C, as well as how such standards might be updated for emerging technologies.
## I Introduction
Recent advances in reinforcement learning (RL) have demonstrated neural network control systems (NNCS) with better than human performance in military flight scenarios [1]. However, one of the biggest factors limiting operational use of NNCS is airworthiness certification. Traditional verification requires hundreds of billions of hours of testing to assure safety [2], which is intractably time consuming and expensive, presenting a significant barrier to use of NNCS on relevant timelines. The Military Airworthiness Certification Criteria, MIL-HDBK-516C [3], implicitly assume that the control function itself (e.g. proportional-integral-derivative (PID) control) can be directly verified against the plant/vehicle as a set of simple linear control functions [4]. Analytical verification techniques have yet to be matured for many complex controller designs, including adaptive and neural network controllers. Some progress has been made in the direct verification of increasingly complex neural network controllers [5, 6]; however, it remains immature. Emerging satisfiability modulo theory (SMT) and other techniques can operate linearly against many benchmarks, although worst-case performance on arbitrary problems will remain exponential [5, 7]. The result is that for many problems, direct analysis of neural network control systems may one day be within reach using formal methods techniques; however, an alternative approach is needed to field NNCS near term.
Meanwhile, advances in run time assurance (RTA) [8] technology present a path to enable rapid and safe introduction of NNCS into operational use. RTA-based control architectures filter an unverified primary controller input by altering unsafe control inputs to explicitly assure safety. When a primary, performance-driven controller is complex, driven by
difficult to verify technologies (e.g. a neural network), or dynamically programmed (e.g. learning during operation), RTA can intervene to assure vehicle properties that might otherwise be intractable or impractically expensive to verify.
What is needed, therefore, is a control architecture that effectively pairs an NNCS with an effective RTA, and a means to certify airworthiness of the result. On the latter point, note that [4] demonstrates that a reversionary switching (simplex) RTA can be shown to satisfy MIL-HDBK-516C criteria for a simple adaptive controller. RTA mechanisms such as simplex [9] and active set invariance filtering (ASIF) [10] are backed by models and reasoning for how they effect safe control. If this reasoning can be formally captured and applied correctly to verification, it could enhance the potential for verification of safety-critical, complex control.
The contributions of this work are as follows.
1. The first development of an NNCS-RTA architecture as an abstracted MIL-HDBK-516C system processing architecture (SPA).
2. The first development of an argument-based analysis of an NNCS-RTA to satisfy otherwise difficult to verify MIL-HDBK-516C criteria for the NNCS.
3. Development and presentation of high-assurance compliance methods based on the developed assurance arguments.
The sections of this work are as follows. Section II introduces the concept of an NNCS-ASIF-RTA architecture, safety control properties, and basic verification of vehicle behavior through control with and without RTA. It then shows how assurance argument can represent the reasoning and evidence used to assure safety with RTA. Section III presents the NNCS-ASIF-RTA as an abstract system processing architecture (SPA), the results of an investigation of how the use of the architecture impacts conformance with MIL-HDBK-516C certification criteria from MIL-HDBK-516C sections 14 and 15 and examples of where assurance arguments about the NNCS-ASIF-RTA design, backed by verification evidence, can provide a fitting verification method for those criteria. The paper concludes in Section IV with a consideration for how such assurance arguments for novel technologies might aid in amendment of existing standards.
## II The NNCS-ASIF-RTA Architecture and its Safety Rationale
This section describes key concepts in this work: ASIF-RTA of NNCS, and safety reasoning for the architecture presented as assurance arguments. In the absence of comprehensive verification approaches for NNCS, an RTA enables strong verification of plant safety properties on the basis of control input filter behavior rather than direct verification of the NNCS function against the plant's entire state space. While this work will focus on an ASIF-RTA and arguments for how it achieves safety, one might apply the same approach to a reversionary RTA, though the required assurance arguments would be very different.
### A. Run Time Assurance of Neural Network Control Systems
Consider an NNCS, as illustrated in Figure 1, that is trained to provide a performance-driven control input to the plant, e.g. an air, sea, space, or ground vehicle. In this design, an NNCS provides control input that is meant for actuation by the plant. However, an ASIF-RTA unit first receives the control input and filters it, modifying the control input if necessary to maintain safety properties of the plant in a verifiable way. Even if the NNCS cannot provide safe control input, the ASIF-RTA assures that input arriving at the plant is safe.
To show compliance with certification criteria, the plant must maintain certain properties, \(V_{p}\) that are important for safety and airworthiness. Examples of such properties include collision avoidance [11] and the vehicle remaining within a geofence [12]. Without RTA, control signals from the NNCS would always be delivered without modification to the plant. In that case, \(V_{p}\) is typically assured through inductive argument similar to
\[C_{v}\wedge V_{v}\ \therefore_{I}\ V_{p} \tag{1}\]
Fig. 1: A neural network control system paired with an active set invariance filter in closed loop control.
where \(C_{v}\) and \(V_{v}\) are properties that must be verified for the controller and the vehicle, respectively, and \(\therefore_{I}\) is to be read as 'therefore' applied to a conclusion of inductive reasoning. Eq. 1 reads "controller properties are verified and vehicle properties are verified, therefore, safety and airworthiness properties are satisfied." In terms of MIL-HDBK-516C, \(C_{v}\) and \(V_{v}\) are certification criteria of the standard. Intrinsic to the standard is an informal or unrecorded rationale for why the criteria of the standard are sufficient for airworthiness.
When RTA is introduced to assure safety of the NNCS, this changes the required verification properties for the design to the form
\[C_{v}^{\prime}\wedge R_{v}\wedge V_{v}^{\prime}\ \therefore_{I}\ V_{p} \tag{2}\]
where \(C_{v}^{\prime}\), \(V_{v}^{\prime}\), and \(R_{v}\) are potentially more easily verifiable property sets for the controller, vehicle, and RTA, respectively, even if the primary controller is complex, opaque, or dynamically programmed (e.g., an NNCS). Eq. 2 reads "a simpler set of controller properties are verified, RTA properties are verified, and a simpler set of plant properties are verified, therefore, safety and airworthiness properties are satisfied."
The advantage of including RTA is, as stated above, that the required properties on the controller, vehicle, and RTA might be easier to verify than the set of properties that would otherwise be required on the controller and vehicle without the RTA. There are two resulting challenges.
1. Existing standards often do not directly align their criterion and verification methods with the target verification properties of the RTA approach. Instead, they often assume that vehicle properties will be assured in a form similar to Eq. 1, without support for the introduction of RTA and modified verification needs.
2. The properties to verify control using an RTA are often many and must be very carefully considered. RTAs such as Simplex and ASIF are backed by nuanced design with very particular verification needs. Simplex requires careful consideration of monitoring, backup control design, and switch decision making [13], while ASIF requires careful determination of resulting control properties under active control signal modification backed by mathematical models [8]. How the required reasoning adds up to the desired vehicle properties must be expressed to the conformance reviewers, otherwise a set of granular and detailed evidence that only indirectly verifies plant safety must be accepted without justification.
In this work, the above issues are addressed by identifying those criterion of MIL-HDBK-516C that assume verification in the form of Eq. 1 and then presenting the reasoning and evidence backing Eq. 2. Assurance arguments are chosen as the format for presenting this reasoning and evidence.
### B. Assurance Arguments for Verification of System Properties
The reasoning and evidence behind verification using RTA (Eq. 2) can be represented as an assurance argument. Assurance arguments are utilized in some safety cultures (e.g. United Kingdom Nuclear Plant Designs, United Kingdom Ministry of Defense Aircraft, Mining Vehicles (Victoria, Australia), EU Airspace, US FDA for Medical Devices) to present a safety case [14], in which the safety of a system is argued and evidence presented. Arguments are increasingly used in assurance of novel and emerging technologies, such as in conformance with the UL4600 autonomous vehicle standard [15]. Assurance arguments utilize inductive reasoning, and where possible, deductive reasoning, to show why a claim is true. They can be written as text, but also in modeling languages such as Goal Structuring Notation (GSN) [16].
An example generic assurance argument for a system property is illustrated in GSN notation in Figure 2. In this model, the argument is represented as a tree with the root node (top rectangle) being a _thesis_ about a desired property of a system. If this claim could be sufficiently verified by evidence alone (e.g. comprehensive testing), the argument could end with an _evidence_ node (circle) attached beneath the root claim, and that would be the end of the argument. But where such direct verification is not possible (as with Eq. 2), further reasoning that is sufficient to verify the root claim is applied. A _strategy_ node (parallelogram) describes the method of reasoning over the sum of sub-claims that is sufficient to assure, to the desired level of rigour, the thesis claim. These _sub-claims_ (rectangles) can be supported by direct evidence or further strategy and sub-claims. In the figure, they are supported by evidence. This evidence can include static evidence (e.g. textbook knowledge), design time evidence (e.g. testing), run-time evidence (e.g. health monitoring), or other forms of evidence as appropriate.
In the work presented in this paper, assurance arguments are utilized to capture the reasoning behind an RTA design by which evidence collected about the RTA and controller satisfy Eq. 2. Arguments are only used where satisfaction of MIL-HDBK-516C criteria cannot be assured or is not maximally assured using the conventional expectations of listed criterion verification methods.
## III An NNCS-ASIF-RTA Safety Case
Three key pieces of work were performed to develop sufficient conformance potential for an NNCS-ASIF-RTA architecture as follows:
1. **Abstract NNCS-ASIF-RTA SPA**: An NNCS-ASIF-RTA architecture was developed as an abstract System Processing Architecture (SPA).
2. **Conformance Analysis**: The ability of the NNCS-ASIF-RTA to sufficiently satisfy MIL-HDBK-516C conformance analysis criteria of sections 14 and 15 using standard-specified verification methods was analyzed, and where not feasible, assurance arguments were developed as an alternative verification method.
3. **Assurance Arguments and Methods**: Arguments were developed for selected criteria and the resulting assurance method codified via the required evidence and procedures.
The remainder of this section discusses these activities and analysis in more detail.
### A. Architecture Specification
An abstract NNCS-ASIF-RTA system processing architecture (SPA) was developed to comply with MIL-HDBK-516C as illustrated in Figure 3. The numbering of signals corresponds to a control structure block diagram of the SPA within a larger system documented in [17]. The design consists of NNCS and ASIF RTA components, discussed previously, that together send control input to the plant. In addition, a command component (CC) configures and operates control modes for NNCS and ASIF RTA. A recorder element provides non-safety critical logging. All components are safety supporting elements (SSEs) except for the recorder. Functions and function threads were identified for each of the components and functional requirements assigned. General vehicle hazards and severity of loss were tied to threads, and all threads except recorder monitoring were deemed safety critical.
Fig. 2: A generic assurance argument for a system property expressed in goal structured notation.
### B. Compliance Planning
Autonomous control-relevant criteria of MIL-HDBK-516C Sections 14 and 15 were identified and analyzed using the method depicted by the flowchart in Figure 4 to determine where indirect verification through assurance argument was necessary to achieve compliance. First, if the compliance methods stated for the criteria were sufficient and practicable, then no further analysis was performed on the subject criterion. However, if a criterion was deemed not easily satisfied using the stated compliance method for a complex and/or online-learning NNCS, then it was considered a candidate criterion for use of argument. Arguments were developed for the selected criteria, and then applied as a specialized method of compliance under the use of ASIF-RTA.
Many criteria from the analyzed sections are relevant to the NNCS-ASIF-RTA control function and would be fulfilled in a compliance effort. In MIL-HDBK-516C Sections 14 and 15, 2 criteria were identified as candidates for indirect verification under ASIF theory: 14.3.3 and 15.2.3.
Table 1 presents applicability and limitations of indirect verification for the two criteria recognized as difficult to achieve for a complex NNCS from MIL-HDBK-516C Sections 14 and 15.
\begin{table}
\begin{tabular}{l|l|l} Criterion & NNCS Conformance Difficulty & Argument Approach Limitations \\ \hline
14.3.3: Evaluation of software for elimination of hazardous events & Complete verification of coverage of the NNCS control function for elimination of hazards caused by control direction is impractical. & Only applicable to hazards negated by ASIF-enforced safety constraints. \\
15.2.3: Integration Methodology & Complete verification coverage of the NNCS control function is impractical. & None \\ \end{tabular}
\end{table}
Table 1: Criteria Benefiting from Indirect Verification under Strong ASIF RTA Theory
Figure 4: Application of argument to strong indirect verification of MIL-HDBK-516C criteria
Figure 3: An abstract SPA for an NNCS-ASIF RTA subsystem.
Criterion 14.3.3 requires evaluation of software for elimination of contribution to hazards. Argument that can show the correct and failure-free operation of the NNCS-ASIF-RTA control function, where the ASIF-RTA enforces safety constraints that entail prevention of specific hazardous states, assures that the control function does not contribute to those specific hazardous states. Such arguments will not cover all hazards, but instead those covered by the ASIF-RTA's safe control output properties. This would partially satisfy the criterion. Criterion 15.2.3 requires a verification plan for all functions of a developed SPA. This includes verification of functional requirements. A main requirement of the presented SPA's control function is to control the plant so as to continuously satisfy a specified set of safety constraints. Given that ASIF RTA provides strong design reasoning that can be incrementally tested, assurance arguments were developed to capture this reasoning and evidence. Assurance arguments developed for the NNCS-ASIF RTA were fit to the criterion and evaluated for strength and effectiveness.
### C. Development and Application of Arguments
In order to satisfy the above criteria, a functional safety argument was constructed. This argument asserts that the control input generated by the NNCS-ASIF-RTA for the plant satisfies the safety constraints against which the ASIF-RTA was designed. The top-level argument is presented in Figure 5. The root level claim of the argument states:
_Functional Safety of the NNCS-RTA_: The NNCS-RTA correctly provides sufficient assurance that the safety of the NNCS-RTA combined control system, which depends on the RTA component to operate correctly, is assured even when the NNCS outputs an unsafe control input.
_Functional safety_ is defined as the NNCS-ASIF-RTA outputting a _functionally safe control signal_, which is defined as:
_Functionally Safe Control Signal from the NNCS-RTA_: A control signal from the RTA to the plant, if performed and correctly actuated by the plant, maintains a set of safety constraints on the plant at control actuation time and at all future time when the plant remains under uninterrupted control of the NNCS-RTA.
Support for the above claim is divided between partial direct verification of the claim and reasoning and evidence based on the ASIF-RTA design and its verification. Although direct verification of functional safety cannot be fully assured for a complex NNCS or ASIF-RTA, traditional methods of compliance (simulation, model-based execution, etc.) can show that the SPA functions as intended for representative scenarios and edge cases. Such direct evidence _supports_ the argument, but does not sufficiently demonstrate the claim.
The remainder of assurance is based on ASIF-RTA theory and design, backed by verification evidence. It is argued that the safety constraints hold because all control input from the NNCS is filtered through the ASIF RTA, and the ASIF RTA only outputs control input that satisfies the safety constraints on the plant. This is in turn argued by claiming that there is a timely and reliable algorithm to support the above claim and that it can be implemented effectively. The former claim is satisfied by argument concerning the core theory of an explicit ASIF algorithm that requires development of valid and verified explicit control barrier functions, and that such barrier functions have been developed for the intended safety constraints.
Finally, the entire core argument is strengthened by verifying that faults of an implementation of the SPA are eliminated by specification of the SPA, functional requirements, and explicit safety requirements on the SPA. For the purpose of paper scope, argument for failure and fault mitigation, as well as partial direct verification for representative scenarios (and edge cases) are not presented, as the standard methods of verification found in MIL-HDBK-516C are applicable.
#### 1. Argument for ASIF RTA
The argument for an effective explicit ASIF algorithm contends that a reliable and sufficiently performant algorithm exists based on proof in the literature and characterization of the algorithm as a quadratic search problem. In addition, the core induction property of active set invariance asserts that if the present state of the plant satisfies a set of explicit control barrier functions defined for safety constraints, then the ASIF filter can also satisfy those barrier functions in the next iteration. This requires careful construction of barrier functions that assure safety constraints are satisfied and that the next iteration of control will be able to satisfy them. Papers detailing the mathematics are referenced as evidence.
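For intuition about the object being argued over, the ASIF filtering step can be written as a small quadratic program: minimally modify the NNCS command subject to a control barrier constraint. The sketch below is ours and purely illustrative, not the certified design; it shows the single-input, control-affine case, where the QP reduces to a closed-form clamp, with `alpha` an assumed linear class-K gain.

```python
# Illustrative sketch (ours, not flight code): ASIF filtering as a QP,
# min (u - u_des)^2  s.t.  dh/dt >= -alpha * h(x), for single-input
# control-affine dynamics xdot = f(x) + g(x) * u. The scalar QP reduces
# to clamping u_des against the constraint boundary.
import numpy as np

def asif_filter(u_des, x, f, g, h, grad_h, alpha=1.0):
    Lf = grad_h(x) @ f(x)   # Lie derivative of h along the drift
    Lg = grad_h(x) @ g(x)   # Lie derivative of h along the input direction
    if Lg > 0:
        bound = (-alpha * h(x) - Lf) / Lg
        return max(u_des, bound)   # raise u just enough to stay safe
    if Lg < 0:
        bound = (-alpha * h(x) - Lf) / Lg
        return min(u_des, bound)   # lower u just enough to stay safe
    return u_des                    # Lg == 0: constraint independent of u
```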
What remains to be argued is that control barrier functions are constructed for each safety constraint of the plant correctly and robustly. They must also be co-solvable as a group by the explicit ASIF algorithm. Co-solution is evidenced through mathematical analysis.
The argument for correct and robust barrier functions is too large to present in a figure. It argues over a specialization of the 'success pattern' [18] to show that correct and complete context, definition, and evaluation of control barrier functions has taken place. For example, in arguing sufficient definitions and context, it requires various forms of evidence verifying appropriate vehicle and environment models. It also requires documentation and review showing that control barrier functions are derived from safety constraints and control models correctly, experiments to show that tracking behavior of control is as expected, and demonstration of expected responses, corner cases, and robustness of response under perturbations.

Figure 5: **The functional safety argument applied to satisfaction of criteria.**
### D. Resulting Compliance Method
Having built the arguments, the method of compliance for criteria 14.3.3 and 15.2.3 can be summarized by the concerns of the argument and the evidence used to support each concern. The method of compliance consists of evidence that must be collected (with an understanding of the backing rationale) in order to satisfy the developed assurance arguments. An example of resulting verification methods is given in Table 2, showing the evidence that must be collected to assure correct barrier functions.
Overall, the developed assurance arguments require 51 pieces of evidence, each of which can be thought of as part of this specialized verification method for the above criteria. Table 3 shows a categorization of the evidence by type. A significant quantity of evidence involves review and development of analytical models which are tested in validated simulations. Some static analysis is performed on the architecture to assure control signal feeds at the right times. Safety requirement development is assured with a STAMP/STPA method and evidence, and many other categories of evidence are required to assure that assumptions and guarantees of plant, control, and ASIF RTA align.
It would be expected that the resulting evidence be collected for any application of the provided ASIF-NNCS-RTA in order to satisfy criteria 14.3.3 and 15.2.3 using the above arguments. If such evidence is supplied and found satisfactory, then it should be the case that the criteria are satisfied to the extent that the arguments are strong. In addition, assurance arguments should be constantly updated as flaws or additional concerns are discovered. This can in turn lead to new required evidence.

\begin{table}
\begin{tabular}{l l}
**Claim** & **Evidence** \\ \hline
Correct Safety Constraint & Constraint Specification \\
 & Peer Review \\
 & Authority Sign-Off \\
Sufficient Entity Dynamics Models & Model Analysis and Review \\
 & Higher-Fidelity Simulation and Sim Validity Checks \\
Sufficient Env. Disturbance Models & Table of Limits (Parameters) \\
 & Standards and Literature Review \\
 & Domain Expert Review \\
Sufficient Tracking Models & Model Analysis \\
Known Control States, Rates, and Latencies & Documentation \\
Correct Mathematical Derivation of Barrier Functions & Mathematical Derivation \\
 & Peer and Domain Expert Review \\
Acceptable Tracking Error & Simulation Results \\
Empirical Correct Control Examples & Representative Plant Plots \\
Empirical Verification for Corner Cases & Corner Case Analysis \\
 & Peer and Domain Expert Review \\
 & Resulting Plant Plots \\
Empirical Verification for Off-Nominal Conditions & Model Analysis \\
 & Peer and Expert Review of Conditions \\
 & Resulting Plant Plots \\
\end{tabular}
\end{table}
Table 2: ASIF-Protected Control Compliance Method
## 4 Summary and Conclusions
The presented work demonstrates how an NNCS safeguarded by RTA for specific safety properties on a controlled plant (e.g. vehicle) can satisfy airworthiness criteria of MIL-HDBK-516C that can be difficult or cost-prohibitive to satisfy using traditional verification methods. The work focused on sections 14 and 15; however, future work could more fully analyze the MIL-HDBK-516C standard's criteria. Limited time prevented analysis of many important sections of the document, for example, sections 4 and 6, related to systems and control behavior. The completed work involved careful selection of an RTA architecture with strong control properties, namely an active set invariance filtering RTA, and development of alternate verification methods for some criteria of the standard. The alternative verification method consisted of assurance arguments, modeled in GSN. The result is a set of evidence that must be collected in order to satisfy the arguments, which in turn should satisfy the criteria, and therefore represents an alternative verification method. An overview of the arguments and required evidence was presented.

In conclusion, we expect that the approach taken in this work may prove valuable as other novel architectural innovations are applied to aerial systems, such as technologies from the autonomous vehicle domain. The quality of the resulting arguments must always be vetted by domain and regulatory experts, but can be part of assessing novel technologies by responsible parties. Furthermore, the resulting alternative verification methods can be rigorous and detailed, and the resulting argument and evidence models can serve as input to future iterations of the standards to which they are applied, should the covered technology end up becoming commonplace. For less common technologies, such alternative verification methods might be placed in an annex of conformance capabilities. The NNCS-ASIF-RTA MIL-HDBK-516C section 15 compliance case can be viewed at www.dependablecomputing.com/nncs-asif-rta/case.html.
## Acknowledgments
The authors would like to thank Matt Dillsaver, John Schierman, David Kapp, Ray Garcia, Natasha Neogi, Mallory Graydon, Michael Holloway, Suresh Kannan, Sean Regisford, Benjamin Heiner, and others for their input and feedback on this work. This work was supported by the Test Resource Management Center and the Air Force Research Laboratory ADIDRUS Contract. The views expressed are those of the authors and do not reflect the official guidance or position of the United States Government, the Department of Defense or of the United States Air Force.
\begin{table}
\begin{tabular}{l l}
**Type** & **Count** \\ \hline
Proof, Equations, and Mathematical Analysis & 11 \\
Requirements and Assume Guarantee Analysis & 8 \\
Simulation Input Analysis & 8 \\
Peer and Expert Review & 6 \\
Simulation Results & 5 \\
Static Analyses & 3 \\
Documentation & 3 \\
Tool Validation & 2 \\
Model Sufficiency Analyses & 3 \\
Numerical and Discrete Time Stability Analyses & 2 \\
STPA Tables & 1 \\
Computational Cost Analysis & 1 \\
Performance Analysis and Testing & 1 \\
Goals Left to Implementer to Satisfy & 1 \\
\end{tabular}
\end{table}
Table 3: Evidence and User-Answered Goal Types |
2310.17247 | Grokking Beyond Neural Networks: An Empirical Exploration with Model
Complexity | In some settings neural networks exhibit a phenomenon known as
\textit{grokking}, where they achieve perfect or near-perfect accuracy on the
validation set long after the same performance has been achieved on the
training set. In this paper, we discover that grokking is not limited to neural
networks but occurs in other settings such as Gaussian process (GP)
classification, GP regression, linear regression and Bayesian neural networks.
We also uncover a mechanism by which to induce grokking on algorithmic datasets
via the addition of dimensions containing spurious information. The presence of
the phenomenon in non-neural architectures shows that grokking is not
restricted to settings considered in current theoretical and empirical studies.
Instead, grokking may be possible in any model where solution search is guided
by complexity and error. | Jack Miller, Charles O'Neill, Thang Bui | 2023-10-26T08:47:42Z | http://arxiv.org/abs/2310.17247v2 | # Grokking Beyond Neural Networks: An Empirical Exploration with Model Complexity
###### Abstract
In some settings neural networks exhibit a phenomenon known as _grokking_, where they achieve perfect or near-perfect accuracy on the validation set long after the same performance has been achieved on the training set. In this paper, we discover that grokking is not limited to neural networks but occurs in other settings such as Gaussian process (GP) classification, GP regression and linear regression. We also uncover a mechanism by which to induce grokking on algorithmic datasets via the addition of dimensions containing spurious information. The presence of the phenomenon in non-neural architectures provides evidence that grokking is not specific to SGD or weight norm regularisation. Instead, grokking may be possible in any setting where solution search is guided by complexity and error. Based on this insight and further trends we see in the training trajectories of a Bayesian neural network (BNN) and GP regression model, we make progress towards a more general theory of grokking. Specifically, we hypothesise that the phenomenon is governed by the accessibility of certain regions in the error and complexity landscapes.
## 1 Introduction
Grokking is intimately linked with generalisation. The phenomenon is characterised by a relatively quick capacity to perform well on a necessarily narrow training set, followed by an increase in more general performance on a validation set. In this paper, we explore grokking with reference to existing theories of generalisation and model complexity. In the introduction, we discuss these theoretical points and review existing work on grokking. Afterwards in Section 2, we present some novel empirical observations. Reflecting on these observations in Section 3, we hypothesise a general mechanism which explains grokking in settings where model selection is guided by error and complexity. Finally, in Section 4 we consider the limits of empirical evidence presented and what directions might be fruitful for further grokking research.
### Generalisation
In abstract, generalisation is the capacity of a model to make good predictions in novel scenarios. To express this formally, we restrict our study of generalisation to supervised learning.
**Definition 1.1** (Supervised Learning).: In supervised machine learning, we are given a set of training examples \(X\), whose elements are members of \(\mathcal{X}\), and associated targets \(Y\), whose elements are members of \(\mathcal{Y}\). We then attempt to find a function \(f_{\theta}:\mathcal{X}\rightarrow\mathcal{Y}\) so as to minimise an objective function \(\mathcal{L}:(X,Y,f_{\theta})\rightarrow\mathbb{R}\).1
Footnote 1: A popular choice for \(f_{\theta}\) is a neural network (Goodfellow et al., 2016) with \(\theta\) representing its weights and biases.
Having found \(f_{\theta}\), we may look at the function's performance under a possibly new objective function \(\mathscr{G}\) (typically \(\mathcal{L}\) without additional complexity considerations) on an unseen set of examples and labels, \(X^{\prime}\) and \(Y^{\prime}\). If \(\mathscr{G}(X^{\prime},Y^{\prime})\) is small, we say that the model has generalised well and if it is large, it has generalised poorly.
### Model Selection and Complexity
Consider \(\mathscr{F}\), a set of models we wish to use for a prediction task. Presently, we have described a process by which to assess the generalisation performance of \(f_{\theta}\in\mathscr{F}\) given \(X^{\prime}\) and \(Y^{\prime}\). Unfortunately, these hidden examples and targets are not available during training. As such, we may want to measure relevant properties about members of \(\mathscr{F}\) that could be indicative of their capacity for generalisation. One obvious property is the value of \(\mathscr{G}(X,Y)\) (or some related function) which we call the _data fit_. Another property is _complexity_. If we can find some means to measure the complexity, then traditional thinking would recommend we follow the principle of parsimony.
**Definition 1.2** (Principle of Parsimony).: "[The principle of] parsimony is the concept that a model should be as simple as possible with respect to the induced variables, model structure, and number of parameters" (Burnham and Anderson, 2004). That is, if we have two models with similar data fit, we should choose the simplest of the two.
If we wish to follow both the principle of parsimony and minimise the data fit, the loss function \(\mathscr{L}\) we use for choosing a model is given by:
\[\mathscr{L}=\text{error}+\text{complexity}. \tag{1}\]
Suppose one is completing a regression task. In this case, if one substitutes error for mean squared error and complexity for the \(L_{2}\) norm of the weights, we arrive at perhaps the most widely used loss function in machine learning:
\[\mathcal{L}=\frac{1}{N}\sum_{i=1}^{N}||f_{\theta}(x_{i})-y_{i}||^{2}+\frac{ \beta}{M}\sum_{i=1}^{M}||\theta_{i}||^{2}. \tag{2}\]
where \(N\) is the number of data samples, \(M\) is the number of parameters and \(\beta\) is a hyperparameter controlling the contribution of the weight decay term.
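For concreteness, a minimal sketch of this loss follows; the model, data, and \(\beta\) below are placeholders, and PyTorch is assumed purely for illustration.

```python
import torch

def regularised_mse(model, x, y, beta=1e-2):
    """Loss of Equation 2: mean squared error plus a scaled mean
    squared-weight (weight decay) complexity penalty."""
    pred = model(x).squeeze(-1)
    data_fit = torch.mean((pred - y) ** 2)        # (1/N) sum ||f(x_i) - y_i||^2
    params = torch.cat([p.flatten() for p in model.parameters()])
    complexity = beta * torch.mean(params ** 2)   # (beta/M) sum ||theta_i||^2
    return data_fit + complexity

# Illustrative usage with a small fully connected network.
model = torch.nn.Sequential(torch.nn.Linear(3, 16), torch.nn.ReLU(),
                            torch.nn.Linear(16, 1))
x, y = torch.randn(32, 3), torch.randn(32)
loss = regularised_mse(model, x, y)
loss.backward()
```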
While Equation 1 may seem simple, it is often difficult to characterise the complexity term2. Not only are there multiple definitions for model complexity across different model categories, definitions also change among the same category. When using a decision tree, we might measure complexity by tree depth and the number of leaf nodes (Hu et al., 2021). Alternatively, for deep neural networks, we could count the number and magnitude of parameters or use a more advanced measure such as the linear mapping number (LMN) (Liu et al., 2023). Fortunately, in this wide spectrum of complexity measures, there are some unifying formalisms we can apply. One such formalism was developed by Kolmogorov (Yueksel et al., 2020). In this formalism, we measure the complexity as the length of the minimal program required to generate a given model. Unfortunately, the difficulty of computing this measure makes it impractical.3 A more pragmatic alternative is the model description length. This defines the complexity of a model as the minimal message length required to communicate its parameters between two parties (Hinton and van Camp, 1993).
Footnote 2: There exist standard choices for \(\mathscr{G}(X,Y)\) such as the accuracy in classification or the \(L_{2}\) norm in regression
Footnote 3: We discuss the Kolmogorov complexity further in Appendix B since there are some connections between it, Bayesian inference and the minimum description length principle.
#### 1.2.1 Model Description Length
To understand model description length, we will consider two agents connected via a communication channel. One of these agents (Brian) is sending a model across the channel and the other (Oscar) is receiving the model. Both agree upon two items before communication. First, anything required to transmit the model with the parameters unspecified. For example, the software that implements the model, the training algorithm used to generate the model and the training examples (excluding the targets). Second, a prior distribution \(P\) over parameters in the model. After initial agreement on these items, Brian learns a set of model parameters \(\theta\) which are distributed according to \(Q\). The complexity of the model found by Brian is then given by the cost of "describing" the model over the channel to Oscar. Via the "bits back" argument (Hinton and van Camp, 1993), this cost is:
\[\mathcal{L}(f_{\theta})=D_{KL}(Q||P) \tag{3}\]
While the model description length is a relatively simple and quite general measure of model complexity, it relies upon a critical assumption. Namely, that the amount of information contained in the components agreed upon by Brian and Oscar should be small or shared among different models being compared. Fortunately, this is usually the case, especially in our experiments. When analysing the grokking phenomenon, we generally agree upon priors and look at the complexity of a model across epochs in optimisation. Thus, there is complete shared information cost outside of changes which occur during optimisation.
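A minimal sketch of this description-length cost, under the common assumption of fully factorised Gaussian prior and posterior, for which \(D_{KL}(Q||P)\) has a closed form; all values below are illustrative.

```python
import numpy as np

def description_length_nats(mu_q, sigma_q, mu_p=0.0, sigma_p=1.0):
    """D_KL(Q || P) for factorised Gaussians (Equation 3): the 'bits back'
    cost, in nats, of communicating posterior Q given the shared prior P."""
    var_q, var_p = sigma_q ** 2, sigma_p ** 2
    return 0.5 * np.sum(
        var_q / var_p + (mu_q - mu_p) ** 2 / var_p - 1.0 + np.log(var_p / var_q)
    )

# A posterior closer to the prior has a shorter description (lower complexity).
mu_simple, mu_complex = np.zeros(100) + 0.1, np.random.randn(100) * 3.0
print(description_length_nats(mu_simple, np.full(100, 0.9)))
print(description_length_nats(mu_complex, np.full(100, 0.1)))
```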
#### 1.2.2 Minimal Description Length
The model description length is often combined with a data fit term under the minimal description length principle or MDL (MacKay, 2003). One can understand this principle by considering the following scenario. Oscar does not know the training targets \(Y\) but would like to infer them from \(X\) to which he has access. To help Oscar, Brian transmits parameters \(\theta\) which Oscar then uses to run \(f_{\theta}\) on \(X\). However, the model is not perfect; so Brian must additionally send corrections to the model outputs. The combined message of the corrections and model, denoted \((D,f)\), has a total description length of:
\[\mathcal{L}(D,f_{\theta})=\mathcal{L}(f_{\theta})+\mathcal{L}(D|f_{\theta}). \tag{4}\]
In reference to Equation 1, the model description length is taking the role of the complexity term and the residuals are taking on the role of the data fit. The model which minimises \(\mathcal{L}(D,f_{\theta})\) is deemed optimal under the MDL principle (Hinton and van Camp, 1993) and also under our generalised objective function in Equation 1.
Notably, the MDL principle is equivalent to two widely used paradigms for model selection. The first is MAP estimation from Bayesian inference where \(\mathcal{L}(f_{\theta})\) comes to represent a prior over the parameters which specify \(f_{\theta}\)(MacKay, 2003). The second equivalence is with Equation 2. Indeed, if one calculates \(L(D,f_{\theta})\) under a Gaussian prior and posterior where the standard deviation of these distributions are fixed in advance, the model complexity term reduces to a squared weight penalty and the data fit is proportional to the mean squared error (Hinton and van Camp, 1993)4.
Footnote 4: See Equation 4 of Hinton and van Camp (1993)
#### 1.2.3 The Case of GP Regression
While the model description length is equivalent to many complexity measures used in model selection, it is not equivalent to all. For example, we complete some experimentation with GP regression where we observe grokking across optimisation of the kernel hyperparameters. The posterior distribution of these kernel parameters is given by:
\[p(\theta|X,y)=\frac{p(y|X,\theta)p(\theta)}{p(y|X)}=\frac{p(\theta)\int p(y|f, X)p(f|\theta)df}{p(y|X)} \tag{5}\]
From here we could use MAP-estimation to find \(\theta\) which we know is equivalent to the MDL. However, it is standard practice in GP optimisation to find \(\theta\) which maximises \(p(y|X,\theta)\)(Rasmussen and Williams, 2006). It turns out \(p(y|X,\theta)\) itself contains a regularising term which is often labelled the complexity5(Rasmussen and Williams, 2006):
Footnote 5: Note that in the equation, \(K_{\theta}=K_{f}+\sigma_{n}^{2}I\), which is the covariance function for targets with Gaussian noise of variance \(\sigma_{n}^{2}\).
\[\log p(y|X,\theta)=-\underbrace{\frac{1}{2}y^{T}K_{\theta}^{-1}y}_{\text{data fit}}-\underbrace{\frac{1}{2}\log|K_{\theta}|}_{\text{complexity}}-\underbrace{\frac{n}{2}\log 2\pi}_{\text{normalisation}}. \tag{6}\]
In Equation 6, this so-called complexity penalty characterises "the volume of possible datasets that are compatible with the data fit term" (Bauer et al., 2016). This is clearly distinct from the model description length. However, Equation 6 is still congruent with Equation 1 and thus amenable to later analysis in this paper. As we will see, even though this definition of complexity is not equivalent to the description length, it seems to serve the same function empirically and is treated in the same way by our grokking hypothesis.
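The decomposition in Equation 6 is straightforward to compute; the following sketch (with an illustrative kernel and data) evaluates the data fit and complexity terms via a Cholesky factorisation of \(K_{\theta}\).

```python
import numpy as np

def gp_log_marginal_terms(K_f, y, noise_var=1e-2):
    """Data-fit, complexity, and normalisation terms of Equation 6 for a GP
    with Gaussian observation noise, K_theta = K_f + sigma_n^2 I."""
    K = K_f + noise_var * np.eye(len(y))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    data_fit = -0.5 * y @ alpha                   # -(1/2) y^T K_theta^{-1} y
    complexity = -np.sum(np.log(np.diag(L)))      # -(1/2) log|K_theta|
    normalisation = -0.5 * len(y) * np.log(2 * np.pi)
    return data_fit, complexity, normalisation

x = np.linspace(0, 1, 20)
K_f = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / 0.2 ** 2)
y = np.sin(2 * np.pi * x)
print(gp_log_marginal_terms(K_f, y))
```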
### The Grokking Phenomenon
The grokking phenomenon was recently discovered by Power et al. (2022) and has garnered attention from the machine learning community. We define the phenomenon in Definition 1.3.
**Definition 1.3** (Grokking).: Grokking is a phenomenon in which the performance of a model \(f_{\theta}\) on the training set reaches a low error at epoch \(E_{1}\), then following further optimisation, the model reaches a similarly low error on the validation dataset at epoch \(E_{2}\). Importantly the value \(\Delta_{k}=|E_{2}-E_{1}|\) must be non-trivial.
In many settings, the value of \(\Delta_{k}\) is much greater than \(E_{1}\). Additionally, the change from poor performance on the validation set to good performance can be quite sudden. A prototypical illustration of grokking is provided in Figure 13 (Appendix H.2).
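One simple operationalisation of \(\Delta_{k}\) from recorded accuracy curves is sketched below; the threshold standing in for "low error" and the toy curves are illustrative assumptions, and both curves are assumed to reach the threshold.

```python
import numpy as np

def grokking_gap(train_acc, val_acc, threshold=0.95):
    """Delta_k of Definition 1.3: epochs between the training and validation
    curves first reaching the threshold (both are assumed to reach it)."""
    e1 = int(np.argmax(np.asarray(train_acc) >= threshold))
    e2 = int(np.argmax(np.asarray(val_acc) >= threshold))
    return abs(e2 - e1)

# Toy curves: training saturates near epoch 90, validation near epoch 990.
epochs = np.arange(2000)
train = 1 - np.exp(-epochs / 30)
val = np.where(epochs < 900, 0.5, 1 - np.exp(-(epochs - 900) / 30))
print(grokking_gap(train, val))   # a non-trivial gap indicates grokking
```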
Since initial publication by Power et al. (2022), there has been empirical and theoretical exploration of the phenomenon. In Section 1.3.1, we summarise the experimentation completed and in Section 1.3.2, we discuss the current theories of grokking. Due to the phenomenon's recent discovery, few theories exist which seek to explain it and those that do tend to focus on particular architectures. Further, the prevalence of grokking across the wide gamut of machine learning algorithms has not been thoroughly investigated.
#### 1.3.1 Architectures and Datasets
To the best of our knowledge, all notable existing empirical literature on the grokking phenomenon is summarised in Table 1. The literature has focused primarily on neural network architectures and algorithmic datasets6. No paper has yet demonstrated the existence of the phenomenon using a GP or linear regression. We believe that extension of the effect to these other machine learning models would be of interest to the community.
Footnote 6: We define an algorithmic dataset as one in which labels are produced via a predefined algorithmic process such as a mathematical operation between two integers.
#### 1.3.2 Theories of Grokking
Several theories have been presented to explain grokking. They can be categorised into two main classes based on the mechanism they use to analyse the phenomenon. Loss-based theories such as Liu et al. (2023b) appeal to the loss landscape of the training and test sets under different measures of complexity and data fit. Alternatively, representation-based theories such as Davies et al. (2023), Barak et al. (2022), Nanda et al. (2023) and Varma et al. (2023) claim that grokking occurs as a result of representation learning (or circuit formation) and associated training dynamics. The current set of theories tend to be narrow and thus might not be applicable if grokking were found in non-neural architectures.
**Loss based theory.**Liu et al. (2023b) assume that there is a spherical shell (Goldilocks zone) in the weight space where generalisation is better than outside the shell. They claim that, in a typical case of grokking, a model will have large weights and quickly reach an over-fitting solution outside of the Goldilocks zone. Then regularisation will slowly move weights towards the Goldilocks zone. That is, grokking occurs due to the mismatch in time between the discovery of the overfitting solution and the general solution. While some empirical evidence is presented in Liu et al. (2023b) for their theory, and the mechanism itself seems plausible, the requirement of a spherical Goldilocks zone seems too stringent. It may be the case that a more complicated weight-space geometry is at play in the case of grokking.
Liu et al. (2023a) also recently explored some cases of grokking using the LMN metric. They find that during periods they identify with generalisation, the LMN decreases. They claim that this decrease in LMN is responsible for grokking.
**Representation or circuit based theory.** Representation or circuit based theories require the emergence of certain general structures within neural architectures. These general structures become dominant in the network well after other less general ones are sufficient for low training loss. For example, Davies et al. (2023) claim that grokking occurs when, "slow patterns generalize well and are ultimately favoured by the training regime, but are preceded by faster patterns which generalise poorly." Similarly, it has been shown that stochastic gradient descent (SGD) slowly amplifies a sparse solution to algorithmic problems which is hidden to loss and error metrics (Barak et al., 2022). This is mirrored somewhat in the work of Liu et al. (2022) which looks to explain grokking via a slow increase in representation quality7. Nanda et al. (2023) claim in the setting of an algorithmic dataset and transformer architecture, training dynamics can be split into three phases based on the network's representations: memorisation, circuit formation and cleanup. Additionally, that the structured mechanisms (circuits) encoded in the weights are gradually amplified with later removal of memorising components. Varma et al. (2023) are in general agreement with Nanda et al. (2023). Representation (or circuit) theories seem the most popular, based on the number of research papers published which use these ideas. They also seem to have a decent empirical backing. For example, Nanda et al. (2023) explicitly discover circuits in a learning setting where grokking occurs.
Footnote 7: See Figure 1 of this paper for a high quality visualisation of what is meant by a general representation.
## 2 Experiments
In the following section, we present various empirical observations we have made regarding the grokking phenomenon. In Section 2.1, we demonstrate that grokking can occur with GP classification and linear regression. This proves its existence in non-neural architectures, identifying a need for a more general theory of the phenomenon. Further, in Section 2.2, we show that there is a way of inducing grokking via data augmentation. Finally, in Section 2.3, we examine directly the weight-space trajectories of models which grok during training, incidentally demonstrating the phenomenon in GP regression. Due to their number, the datasets used for these experiments are not described in the main text but rather in Appendix C.
### Grokking in GP Classification and Linear Regression
In the following experiments, we show that grokking occurs with GP classification and linear regression. In the case of linear regression, a very specific set of circumstances were required to induce grokking. However, for the GP models
tested using typical initialisation strategies, we were able to observe behaviours which satisfy Definition 1.3. Such findings demonstrate that grokking is a more general phenomenon than previously demonstrated in the literature.
#### 2.1.1 Zero-One Classification on a Slope with Linear Regression
As previously mentioned, to demonstrate the existence of grokking with linear regression, a very specific learning setting was required. We employed Dataset 9 with three additional spurious dimensions and two training points. Given a particular example \(x_{0}\) from the dataset, the spurious dimensions were added as follows to produce the new example \(x^{\prime}\):
\[x^{\prime}=\begin{bmatrix}x_{0}&x_{0}^{2}&x_{0}^{3}&\sin(100x_{0})\end{bmatrix}^ {T} \tag{7}\]
The model was trained as though the problem were a regression task with outputs later transformed into binary categories based on the sign of the predictions.8 To find the model weights, we used SGD over a standard loss function with a MSE data fit component and a scaled weight-decay complexity term9:
Footnote 8: If \(f_{\theta}(x)<0\), the classification of a given point was negative; if \(f_{\theta}(x)>0\), it was positive.
\[L=\frac{\text{MSE}(y,\hat{y})}{\epsilon_{0}}+\sum_{i=0}^{d}\frac{(w^{i}-\mu_{ 0}^{i})^{2}}{\sigma_{0}^{i}}. \tag{8}\]
In Equation 8, \(i\) indexes the dimensionality of the variables \(w_{0}\), \(\mu_{0}\) and \(\sigma_{0}\), \(\epsilon_{0}\) is the noise variance, \(\mu_{0}\) is the prior mean, \(\sigma_{0}\) is the prior variance, and \(w\) is the weight vector of the model. For our experiment, \(\mu_{i}\) was taken to be \(0\) and \(\sigma_{i}\) to be \(0.5\) for all \(i\). Regarding initialisation, \(w\) was heavily weighted against the first dimension of the input examples10. This unusual alteration to the initial weights was required for a clear demonstration of grokking with linear regression.
Footnote 10: As demonstrated by Hinton and van Camp (1993), this loss function has an equivalence with the MDL principle when we fix the standard deviations of prior and posterior Gaussian distributions.
The accuracy, complexity and data fit of the linear model under five random seeds governing dataset generation are shown in Figure 1. Clearly, in the region between epochs \(2\cdot 10^{1}\) and \(10^{3}\), validation accuracy was significantly worse than training accuracy and then, in the region around \(2\cdot 10^{3}\), the validation accuracy was very similar to that of the training accuracy. This satisfies Definition 1.3, although the validation accuracy did not reach \(100\%\) in every case. Provided in Appendix F.1 is an example of a training run with a specific seed and a clearer case of grokking with \(100\%\) accuracy.
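The following sketch reproduces the shape of this setup (Equations 7 and 8); the training points, initial weights, and hyperparameter values are illustrative rather than the exact configuration behind Figure 1.

```python
import torch

def featurise(x0):
    # Equation 7: the informative coordinate plus three spurious features.
    return torch.stack([x0, x0 ** 2, x0 ** 3, torch.sin(100 * x0)], dim=-1)

torch.manual_seed(0)
x0 = torch.tensor([-1.0, 1.0])        # two training points
y = torch.sign(x0)                    # class encoded in the sign of the target
X = featurise(x0)

w = torch.tensor([0.01, 1.0, 1.0, 1.0], requires_grad=True)  # biased away from dim 0
mu0, sigma0, eps0 = 0.0, 0.5, 1e-2
opt = torch.optim.SGD([w], lr=1e-3)
for _ in range(5000):
    opt.zero_grad()
    loss = torch.mean((X @ w - y) ** 2) / eps0 \
           + torch.sum((w - mu0) ** 2 / sigma0)   # Equation 8
    loss.backward()
    opt.step()
print(w.detach())   # weights on features that cannot help the fit shrink to zero
```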
#### 2.1.2 Zero-One Classification with a Gaussian Process
In our second learning scenario, we applied GP classification to Dataset 8 with a radial basis function (RBF) kernel:
\[k(x_{1},x_{2})=\alpha\exp\left(-\frac{1}{2}(x_{1}-x_{2})^{T}\Theta^{-2}(x_{1} -x_{2})\right). \tag{9}\]
Figure 1: Accuracy, data fit and complexity on zero-one slope classification task with a linear model. Note that the shaded region corresponds to the standard error of five training runs.
Here, \(\Theta\) is called the lengthscale parameter and \(\alpha\) is the kernel amplitude. Both \(\Theta\) and \(\alpha\) were found by minimising the approximate negative marginal log likelihood associated with a Bernoulli likelihood function via the Adam optimiser acting over the variational evidence lower bound (Hensman et al., 2015; Gardner et al., 2018).11
Footnote 11: A learning rate of \(10^{-2}\) was used.
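For reference, a minimal sketch of Equation 9 under the simplifying assumption of an isotropic lengthscale \(\Theta=\theta I\); the inputs are illustrative.

```python
import numpy as np

def rbf_kernel(X1, X2, amplitude=1.0, lengthscale=1.0):
    """Equation 9 under the simplifying assumption Theta = lengthscale * I."""
    sq_dists = np.sum((X1[:, None, :] - X2[None, :, :]) ** 2, axis=-1)
    return amplitude * np.exp(-0.5 * sq_dists / lengthscale ** 2)

X = np.random.randn(5, 3)
K = rbf_kernel(X, X, amplitude=0.8, lengthscale=2.0)
print(K.shape, np.allclose(K, K.T))   # (5, 5) True: a symmetric kernel matrix
```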
The result of training the model using five random seeds for dataset generation and model initialisation can be seen in Figure 2. The final validation accuracy is not \(100\%\) like the cases we will see in the following sections. However, it is sufficiently high to say that the model has grokked. In Appendix F.2, we also provide a plot of the model complexity as the KL divergence between the variational distribution and the prior for the training function values. As we later discuss in Section 4, there are issues with using this as a measure of complexity.
#### 2.1.3 Parity Prediction with a Gaussian Process
In our third learning scenario, we also looked at GP classification. However, this time on a more complex algorithmic dataset - a modified version of Dataset 1 (with \(k=3\)) where additional spurious dimensions are added and populated using values drawn from a normal distribution. In particular, the number of additional dimensions is \(n=37\) making the total input dimensionality \(d=40\). We use the same training set up as in Section 2.1.2. The results (again with five seeds) can be seen in Figure 3 with a complexity plot in Appendix F.3 and discussion of the limitations of this complexity measure in Section 4.
Figure 3: Accuracy and log likelihoods on hidden parity prediction task with RBF Gaussian process. Note that the shaded region corresponds to the standard error of five training runs. _Acc._ is _Accuracy_ and _Val._ is _Validation_.
Figure 2: Accuracy and log likelihoods on zero-one classification task with a RBF Gaussian process. Note that the shaded region corresponds to the standard error of five training runs.
### Inducing Grokking via Concealment
In this section, we investigate how one might augment a dataset to induce grokking. In particular, we develop a strategy which induces grokking on a range of algorithmic datasets. This work was inspired by Merrill et al. (2023) and Barak et al. (2022) where the true task is "hidden" in a higher dimensional space. This requires models to "learn" to ignore the additional dimensions of the input space. For an illustration of _learning to ignore_ see Figure 12 (Appendix H.1).
Our strategy is to extend this "concealment" idea to other algorithmic datasets. Consider \((x,y)\), an example and target pair in supervised learning. Under concealment, one augments the example \(x\) (of dimensionality \(d\)) by drawing \(v\sim\mathcal{N}(0,I_{l})\) and appending it to \(x\). The new concealed example \(x^{\prime}\) is:
\[x^{\prime}=\left[x_{0}\quad x_{1}\quad\cdots\quad x_{d}\quad v_{0}\quad\cdots \quad v_{l}\right]^{T}.\]
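A minimal sketch of the concealment augmentation; the example inputs are illustrative.

```python
import numpy as np

def conceal(X, n_extra, seed=None):
    """Append n_extra spurious standard-normal coordinates to each example,
    hiding the true task in a higher-dimensional input space."""
    rng = np.random.default_rng(seed)
    V = rng.standard_normal((X.shape[0], n_extra))
    return np.concatenate([X, V], axis=1)

# e.g. 3-bit parity inputs concealed in 40 total dimensions (cf. Section 2.1.3).
X = np.array([[0.0, 1.0, 1.0], [1.0, 1.0, 0.0]])
print(conceal(X, n_extra=37, seed=0).shape)   # (2, 40)
```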
To determine the generality of this strategy in the algorithmic setting, we applied it to 6 different datasets (2-7). These datasets were chosen as they share a regular form12 and seem to cover a fairly diverse variety of algorithmic operations. In each case, we used the prime \(p=7\) and varied the additional dimensionality \(k\). For the model, we used a simple neural network analogous to that of Merrill et al. (2023). This neural network consisted of \(1\) hidden layer of size \(1000\) and was optimised using SGD with cross-entropy loss. The weight decay was set to \(10^{-2}\) and the learning rate to \(10^{-1}\).
Footnote 12: They are all governed by the same prime \(p\) and take two input numbers.
To discover the relationship between concealment and grokking, we measured the "grokking gap" \(\Delta_{k}\) as presented in Definition 1.3. In particular, we considered how an increase in the number of spurious dimensions relates to this gap. The algorithm used to run the experiment is detailed in Algorithm 1 (Appendix G.1). The result of running this algorithm can be seen in Figure 4. In addition to visual inspection of the data, a regression analysis was completed to determine whether the relationship between grokking gap and additional dimensionality might be exponential13. The result of this regression is denoted as _Regression Fit_ in the figure.
Footnote 13: The details of the regression are provided in Appendix D
The Pearson correlation coefficient (Pearson, 1895) was also calculated in log space for all points available and for each dataset individually. Further, we completed a test of the null hypothesis that the distributions underlying the samples are uncorrelated and normally distributed. The Pearson correlation \(r\) and \(p\)-values are presented in Table 2 (Appendix D.2). The Pearson correlation coefficients are high in aggregate and individually, indicating a positive linear trend in log space. Further, \(p\) values in both the aggregate and individual cases are well below the usual threshold of \(\alpha=0.05\).
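A sketch of this log-space correlation test; the dimensionality and gap values below are illustrative stand-ins, not the measured results reported in Table 2.

```python
import numpy as np
from scipy import stats

def log_space_pearson(extra_dims, gaps):
    """Correlation between added dimensionality and log grokking gap, one way
    of testing for an exponential dimensionality-gap relationship."""
    r, p = stats.pearsonr(np.asarray(extra_dims, float),
                          np.log(np.asarray(gaps, float)))
    return r, p

dims = [10, 20, 30, 40]               # illustrative values only,
gaps = [120.0, 310.0, 880.0, 2400.0]  # not the measured results of Table 2
print(log_space_pearson(dims, gaps))
```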
### Parameter Space Trajectories of Grokking
Our last set of experiments was designed to interrogate the parameter space of models which grok. We completed this kind of interrogation in two different settings. The first was GP regression on Dataset 10 and the second was BNN classification on a concealed version of Dataset 1. Since the GP only had two hyperparameters governing the kernel, we could see directly the contribution of complexity and data fit terms. Alternatively, for the BNN we aggregated data regarding training trajectories across several initialisations to investigate the possible dynamics between complexity and data fit.

Figure 4: Relationship between grokking gap and number of additional dimensions using the grokking via concealment strategy. Note that \(x\)-values are artificially perturbed to allow for easier visibility of error bars. In reality they are either \(10\), \(20\), \(30\) or \(40\). Also, the data of zero additional length is removed (although still influences the regression fit). See Appendix G.2 for the plot without these changes.
#### 2.3.1 GP Grokking on Sinusoidal Example
In this experiment, we applied a GP (with the same kernel as in Section 2.1.2) to regression of a sine wave. To find the optimal parameters for the kernel, a Gaussian likelihood function was employed with exact computation of the marginal log likelihood. In this optimisation scenario, the complexity term is as described in Section 1.2.3.
To see how grokking might be related to the complexity and data fit landscapes, we altered parameter initialisations. We considered three different initialisation types. In case A, we started regression in a region of high error and low complexity (HELC) where a region of low error and high complexity (LEHC) was relatively inaccessible when compared to a region of low error and low complexity (LELC). For case B, we initialised the model in a region of LEHC where LELC solutions were less accessible. Finally, in case C, we initialised the model in a region of LELC.
As evident in Figure 5, we only saw grokking for case B. It is interesting that, in this GP regression case, we did not see a clear example of the spherical geometry mentioned in the Goldilocks zone theory of Liu et al. (2023b). Instead a more complicated loss surface is present which results in grokking.
#### 2.3.2 Trajectories of a BNN with Parity Prediction
We also examined the weight-space trajectories of a BNN (\(f_{\theta}\)). Our learning scenario involved Dataset 1 with the concealment strategy presented in Section 2.2. Specifically, we used an additional dimensionality of \(27\) and a parity length of \(3\). To train the model, we employed SGD with the following variational objective:
\[\mathcal{L}(\phi)=\mathbb{E}_{Q_{\phi}(\theta)}[\text{CrossEntropyLoss}(f_{ \theta}(X),Y)]+D_{KL}(Q_{\phi}(\theta)||P(\theta)) \tag{10}\]
In Equation 10, \(P(\theta)\) is a standard Gaussian prior on the weights and \(Q_{\phi}(\theta)\) is the variational approximation. The complexity penalty in this case is exactly the model description length discussed in Section 1.2 with the overall loss function clearly a subset of Equation 1.
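A minimal sketch of Equation 10 for a factorised Gaussian \(Q_{\phi}\) and standard normal prior, using the reparameterisation trick for a Monte Carlo estimate of the expected cross-entropy; the layer sizes (matching the 30-dimensional concealed parity inputs) and sample count are illustrative.

```python
import torch
import torch.nn.functional as F

def elbo_loss(mu, rho, x, y, forward, n_samples=8):
    """Equation 10 for a factorised Gaussian posterior Q_phi = N(mu, sigma^2)
    and standard normal prior: Monte Carlo estimate of the expected
    cross-entropy plus the closed-form KL complexity term."""
    sigma = F.softplus(rho)
    kl = 0.5 * torch.sum(sigma ** 2 + mu ** 2 - 1.0 - 2.0 * torch.log(sigma))
    nll = 0.0
    for _ in range(n_samples):
        theta = mu + sigma * torch.randn_like(mu)   # reparameterisation trick
        nll = nll + F.cross_entropy(forward(theta, x), y)
    return nll / n_samples + kl

def forward(theta, x):   # one hidden layer; sizes fixed for this demo
    W1, b1 = theta[:300].view(10, 30), theta[300:310]
    W2, b2 = theta[310:330].view(2, 10), theta[330:332]
    return F.linear(torch.relu(F.linear(x, W1, b1)), W2, b2)

mu = torch.zeros(332, requires_grad=True)
rho = torch.full((332,), -3.0, requires_grad=True)
x, y = torch.randn(16, 30), torch.randint(0, 2, (16,))
elbo_loss(mu, rho, x, y, forward).backward()
```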
Figure 5: Trajectories through parameter landscape for GP regression. Initialisation points A-C refer to those mentioned in Section 2.3.1.
To explore the weight-space trajectories of the BNN we altered the network's initialisation by changing the standard deviation of the normal distribution used to seed the variational mean of the weights. This resulted in network initialisations with differing initial complexity and error. We then trained the network based on these initialisations using three random seeds, recording values of complexity, error and accuracy. The outcomes of this process are in Figure 6. Notably, initialisations which resulted in an increased grokking gap correlate with increased optimisation time in regions of LEHC. Further, there seems to be a trend across epochs with error and complexity. At first, there is a significant decrease in error followed by a decrease in complexity.
## 3 Grokking and Complexity
Thus far we have explored grokking with reference to different complexity measures across a range of models. We have found the existence of the phenomenon in GP classification and regression, linear regression and BNNs. We have identified a means to induce grokking via the addition of spurious dimensions. Finally, we have analysed the trajectories of a GP and BNN during training, observing trends associated with the complexity and error of the models. Noting the discussion in Section 1.3.2, there seems to be no theory of grokking in the literature which can explain the new empirical evidence we present. Motivated by this, we construct a new hypothesis which fills this gap. We believe this hypothesis to be compatible with our new results, previous empirical observations and with many previous theories of the grokking phenomenon.
To build the hypothesis, we first make Assumption 1.14 We then posit Claim 1, our hypothesis of grokking, which we sometimes refer to as the _complexity theory of grokking_.
Footnote 14: We believe that Assumption 1 is justified for the most common setting in which grokking occurs. Namely, algorithmic datasets. It is also likely true for a wide range of other scenarios (see Section 1.2).
**Assumption 1**.: _For the task of interest, the principle of parsimony holds. That is, solutions with minimal possible complexity will generalise better._
**Claim 1**.: _If the low error, high complexity (LEHC) weight space is readily accessible from typical initialisation but the low error, low complexity (LELC) weight space is not, models will quickly find a low error solution which does not generalise. Given a path between LEHC and LELC regions which has non-increasing loss, solutions in regions of LEHC will slowly be guided toward regions of LELC due to regularisation. This causes an eventual decrease in validation error, which we see as grokking._
### Explanation of Previous Empirical Results
In the following subsection, we demonstrate the congruence between our hypothesis and existing empirical observations. In Appendix E we draw parallels between our work and existing theory.
Figure 6: Grokking with BNN using different standard deviation values (\(\sigma\)) for the variational mean initialisations. _Normalised Grokking Gap_ refers to the difference in epoch between high training accuracy and high validation accuracy, normalised between 0 and 1. In the left-hand plot, the transparency of a point indicates at which epoch it was recorded (more transparent equates to further in training) and triangles of a particular colour correspond to the point on the error-complexity landscape at which a model with a particular initialisation exhibited grokking. Note that three trials were run per \(\sigma\) initialisation.
Learning with algorithmic datasets benefits from the principle of parsimony as a small encoding is required for the solution. In addition, when learning on these datasets, there appear to be many other more complex solutions which do not generalise but attain low training error. For example, with a neural network containing one hidden layer completing a parity prediction problem, there is competition between dense subnetworks which are used to achieve high accuracy on the training set (LEHC) and sparse subnetworks (LELC) which have better generalisation performance (Merrill et al., 2023). In this case, the reduced accessibility of LELC regions compared to LEHC regions seems to cause the grokking phenomenon. This general story is supported by further empirical analysis completed by Liu et al. (2022) and Nanda et al. (2023). Liu et al. (2022) found that a less accessible, but more general representation, emerges over time within the neural network they studied and that after this representation's emergence, grokking occurs. Nanda et al. (2023) discovered that a set of trigonometric identities were employed by a transformer to encode an algorithm for solving modular arithmetic. Additionally, this trigonometric solution was gradually amplified over time with the later removal of high complexity "memorising" structures. In this case, the model is moving from an accessible LEHC region where memorising solutions exist to a LELC region.
The work by Liu et al. (2023b) showed the existence of grokking on non-algorithmic datasets via alteration of the initialisation and dataset size. From Claim 1, we can see why these factors would alter the existence of grokking. Changing the initialisation alters the relative accessibility of LEHC and LELC regions and reducing the dataset size may lessen constraints on LEHC regions which otherwise do not exist.
### Explanation of New Empirical Evidence
Having been proposed to explain the empirical observation we have uncovered in this paper, Claim 1 should be congruent with these new findings - the first of which is the existence of grokking in non-neural models. Indeed, one corollary of our theory (Corollary 1) is that grokking should be model agnostic. This is because the proposed mechanism only requires certain properties of error and complexity landscapes during optimisation. It is blind to the specific architecture over which optimisation occurs.
**Corollary 1**.: _The phenomenon of grokking should be model agnostic. Namely, it could occur in any setting in which solution search is guided by complexity and error._
Another finding from this paper is that of the concealment data augmentation strategy. We believe this can be explained via the lens of Claim 1 as follows. When dimensions are added with uninformative features, there exist LEHC solutions which use these features. However, the number of LELC solutions remains relatively low as the most general solution should have no dependence on the additional components. This leads to an increase in the relative accessibility of LEHC regions when compared to LELC regions which in turn leads to grokking.
## 4 Discussion
Despite some progress made toward understanding the grokking phenomenon in this paper, there are still some points to discuss. We should start by assessing the limitations of the empirical evidence gathered. This is important for a balanced picture of the experimentation completed and its implications for our grokking hypothesis. Having examined these limitations, we can provide some recommendations regarding related future work in the field.
### Limitations of Empirical Evidence
In Section 2.1, experimentation with linear regression may be criticised for the specificity of the learning setup required to demonstrate grokking and for the value of the final validation accuracy. We note that the first critique is not entirely reasonable: one should not expect to demonstrate grokking under "normal" circumstances, since grokking does not appear under "normal" circumstances. However, if one wanted to show that Claim 1 is a general theory of grokking, we should be able to see it at work in any learning setting which exhibits grokking. It could be the case that, with only one setting, we saw results consistent with Claim 1, but that under another learning scenario our claim could be proven false. We do not consider the second critique to be significant. For our purposes, grokking need not reach \(100\%\) accuracy, as not all general solutions provide that. However, if this is desired, we provide a case where this occurs in Appendix F.1.
There are also reasonable critiques concerning the experimentation completed on GP classification. The most pressing might be concerns over the measurement of complexity as presented in Appendices F.2 and F.3. This is due to the way the model is optimised. Namely, via maximisation of the evidence lower bound:
\[\mathcal{L}_{\text{ELBO}}(\phi,\theta)=\sum_{i=1}^{N}\mathbb{E}_{q_{\phi}(f_{ i})}[\log p(y_{i}|f_{i})]-\beta\text{KL}[q_{\phi}(f)||p_{\theta}(f)]. \tag{11}\]
Unfortunately, optimisation of this value leads to changes in both the variational approximation and the hyperparameters of the prior GP. This presents a problem when trying to use the results of GP classification to validate Claim 1. The hyperparameters control the complexity of the prior which then influences the measured complexity of the model via the KL divergence. Consequently, the complexity measurement at any two points in training are not necessarily comparable. To disentangle optimisation of the hyperparameters and the variational approximation, one could complete a set of ablation studies. For this, one would keep either the hyperparameters or the variational approximation constant and alter the other variable. By doing so, one would be able to validate more directly Claim 1 with GP classification. Additionally, one might need to alter the learning setting to retain grokking under a new approximation scheme such as Laplace's method. Further discussion and experimentation are provided in Appendix I.
Due to the simplicity of the model considered in Figure 5, it is hard to criticise the experimentation completed there. The experimental design of the BNN, however, lacks generality: it is difficult to know if the subspace from which the BNN was initialised is indicative of general trends about the weight space. That said, the values chosen were representative of typical initialisations. Thus, we can say that for "normal" cases that might be encountered by a practitioner, the BNN weight trajectories are representative.
### Future Work
An interesting outcome of experimentation in this paper was the discovery of the concealment data augmentation strategy. As far as the authors are aware, this is the first data augmentation strategy found which consistently results in grokking. Additionally, its likely exponential trend with the degree of grokking is of great interest. Indeed, we know that the volume of a region in an \(n\)-dimensional space decreases exponentially with an increase in \(n\). This fact and the exponential increase in grokking with additional dimensionality could be connected. Unfortunately, at this point in time, such a connection is only speculative. Thus, a more theoretical analysis might be warranted which seeks to examine this relationship.
## 5 Conclusion
We have presented novel empirical evidence for the existence of grokking in non-neural architectures and discovered a data augmentation technique which induces the phenomenon. Relying upon these observations and analysis of training trajectories in a GP and BNN, we proposed an effective theory of grokking. Importantly, we argued that this theory is congruent with previous empirical evidence and many previous theories of grokking. In future, researchers could extend the ideas in this paper by undertaking a theoretical analysis of the concealment strategy discovered and by testing the theory in non-algorithmic datasets.
#### Acknowledgements
We would like to acknowledge Russell Tsuchida, Matthew Ashman, Rohin Shah and Yuan-Sen Ting for their useful feedback on the content of this paper.
#### Supporting Information
All experiments can be found at this GitHub page. They have descriptive names and should reproduce the figures seen in this paper. For Figure 6, the relevant experiment is in the feat/info-theory-description branch. |
2310.18612 | Efficient kernel surrogates for neural network-based regression | Despite their immense promise in performing a variety of learning tasks, a
theoretical understanding of the limitations of Deep Neural Networks (DNNs) has
so far eluded practitioners. This is partly due to the inability to determine
the closed forms of the learned functions, making it harder to study their
generalization properties on unseen datasets. Recent work has shown that
randomly initialized DNNs in the infinite width limit converge to kernel
machines relying on a Neural Tangent Kernel (NTK) with known closed form. These
results suggest, and experimental evidence corroborates, that empirical kernel
machines can also act as surrogates for finite width DNNs. The high
computational cost of assembling the full NTK, however, makes this approach
infeasible in practice, motivating the need for low-cost approximations. In the
current work, we study the performance of the Conjugate Kernel (CK), an
efficient approximation to the NTK that has been observed to yield fairly
similar results. For the regression problem of smooth functions and logistic
regression classification, we show that the CK performance is only marginally
worse than that of the NTK and, in certain cases, is shown to be superior. In
particular, we establish bounds for the relative test losses, verify them with
numerical tests, and identify the regularity of the kernel as the key
determinant of performance. In addition to providing a theoretical grounding
for using CKs instead of NTKs, our framework suggests a recipe for improving
DNN accuracy inexpensively. We present a demonstration of this on the
foundation model GPT-2 by comparing its performance on a classification task
using a conventional approach and our prescription. We also show how our
approach can be used to improve physics-informed operator network training for
regression tasks as well as convolutional neural network training for vision
classification tasks. | Saad Qadeer, Andrew Engel, Amanda Howard, Adam Tsou, Max Vargas, Panos Stinis, Tony Chiang | 2023-10-28T06:41:47Z | http://arxiv.org/abs/2310.18612v2 | # Efficient kernel surrogates for neural network-based regression
###### Abstract
Despite their immense promise in performing a variety of learning tasks, a theoretical understanding of the effectiveness and limitations of Deep Neural Networks (DNNs) has so far eluded practitioners. This is partly due to the inability to determine the closed forms of the learned functions, making it harder to assess their precise dependence on the training data and to study their generalization properties on unseen datasets. Recent work has shown that randomly initialized DNNs in the infinite width limit converge to kernel machines relying on a Neural Tangent Kernel (NTK) with known closed form. These results suggest, and experimental evidence corroborates, that empirical kernel machines can also act as surrogates for finite width DNNs. The high computational cost of assembling the full NTK, however, makes this approach infeasible in practice, motivating the need for low-cost approximations. In the current work, we study the performance of the Conjugate Kernel (CK), an efficient approximation to the NTK that has been observed to yield fairly similar results. For the regression problem of smooth functions and classification using logistic regression, we show that the CK performance is only marginally worse than that of the NTK and, in certain cases, is shown to be superior. In particular, we establish bounds for the relative test losses, verify them with numerical tests, and identify the regularity of the kernel as the key determinant of performance. In addition to providing a theoretical grounding for using CKs instead of NTKs, our framework provides insights into understanding the robustness of the various approximants and suggests a recipe for improving DNN accuracy inexpensively. We present a demonstration of this on the foundation model GPT-2 by comparing its performance on a classification task using a conventional approach and our prescription.
**Keywords:** Finite-width neural networks, Empirical kernel machines, Neural Tangent Kernel, Function regression, Logistic regression, Generalization errors, Foundation models
## 1 Introduction
Deep Neural Networks (DNNs) have shown immense promise in performing a variety of learning tasks including, among others, image classification [1, 2], natural language processing [3, 4, 5, 6], function approximation [7, 8], solving differential equations [9, 10, 11], etc. However, a theoretical understanding of their effectiveness and limitations has proven elusive, thereby hindering their universal acceptance in practical applications and limiting their usage in scientific computing. This is partly due to their somewhat unwieldy architectures and complex loss landscapes that do not permit precise closed-form characterizations of the optimally learned functions, thus making it harder to assess their precise dependence on the training data and to study their generalization properties on test sets.
Recent work has established that randomly initialized DNNs in the infinite width limit converge to kernel machines relying on a deterministic Neural Tangent Kernel (NTK) [12, 13]. The NTK for a DNN,
defined as the Gram matrix of the Jacobian of the DNN with respect to the network parameters, controls the evolution of the DNN training [14]. In the infinite width limit, the NTK can be shown to not deviate from its known closed form. As a consequence, the result of fully training an infinite width DNN can be _a priori_ identified as a kernel machine employing the known NTK. These insights also allow one to study the performance of the DNN architecture in the extremal over-parameterized limit and benefit from the double-descent phenomenon [15, 16, 17].
It should be emphasized that this limiting regime does not leverage the changes in the DNN during training and hence fails to make use of any features learned by the architecture. However, these results suggest the tantalizing possibility that empirical kernel machines could also act as surrogates for finite width DNNs. As shown elsewhere [18] as well as later in the article, under certain assumptions this claim is not only corroborated by strong evidence, both mathematical (equation (21) and Sections 5 and 4) and experimental (Section 6), but also considerably improved upon. This enables a precise determination of the superior learned functions and a thorough study of their performance on unseen data.
The high computational cost of assembling the complete NTK for DNNs employed in practical applications [19, 20], however, makes this approach infeasible, motivating the need for low cost approximations. In this article, we study the performance of the Conjugate Kernel (CK) [21], a "zeroth-order" approximation to the NTK, that has been observed to yield fairly similar results (see Section 6). For the problems of regression of smooth functions and logistic regression, we prove that the CK approximations are only marginally worse than the NTK approximations and, in some cases, are much superior. In particular, we establish bounds for the errors, verify them with numerical tests, and identify the regularity of the kernel as the key determinant of performance. In addition to providing a theoretical grounding for using CKs instead of NTKs, our framework provides insights into understanding the robustness of the various approximations and suggests a recipe for boosting DNN accuracy inexpensively. We duly present a demonstration of this on GPT-2 by comparing its performance on a classification task using a conventional approach and our prescription.
## 2 Preliminaries
In this section, we introduce our notation and define the basic terms and tools. We denote by \(f_{\mathrm{NN}}:\mathbb{R}^{d_{0}}\rightarrow\mathbb{R}^{d_{L+1}}\) a fully connected network (FCN) with \(L\) hidden layers of the form
\[\mathbf{z}^{(l)} = \sigma\left(W^{(l)}\mathbf{z}^{(l-1)}+\mathbf{b}^{(l)}\right),\quad 1 \leq l\leq L,\] \[\mathbf{z}^{(L+1)} = W^{(L+1)}\mathbf{z}^{(L)}+\mathbf{b}^{(L+1)}. \tag{1}\]
Here, \(\sigma\) is the activation function, \(W^{(l)}\in\mathbb{R}^{d_{l}\times d_{l-1}}\) and \(\mathbf{b}^{(l)}\in\mathbb{R}^{d_{l}}\) are the trainable parameters, \(\mathbf{z}^{(0)}\in Z\) is the input, and \(\mathbf{z}^{(L+1)}\in\mathbb{R}^{d_{L+1}}\) is the output, i.e., \(f_{\mathrm{NN}}\left(\mathbf{z}^{(0)}\right)=\mathbf{z}^{(L+1)}\). We denote all the trainable parameters \(\{W^{(l)},\mathbf{b}^{(l)}\}_{1\leq l\leq L+1}\) by \(\theta\) and their number by \(|\theta|\) for brevity, and indicate the dependence of the neural network on them by writing \(f_{\mathrm{NN}}^{\theta}\). Since this is the only type of neural network (NN) we shall consider in this article, henceforth we refer to FCNs of this type as NNs.
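As a minimal sketch of this forward pass (the activation, widths, and initialisation below are illustrative):

```python
import numpy as np

def fcn_forward(z0, weights, biases, sigma=np.tanh):
    """Forward pass of equation (1): L hidden layers with activation sigma,
    followed by an affine output layer."""
    z = z0
    for W, b in zip(weights[:-1], biases[:-1]):
        z = sigma(W @ z + b)              # hidden layers, 1 <= l <= L
    return weights[-1] @ z + biases[-1]   # linear output layer

rng = np.random.default_rng(0)
dims = [4, 8, 8, 1]                       # d_0, d_1, d_2, d_{L+1}
weights = [rng.standard_normal((m, n)) / np.sqrt(n)
           for n, m in zip(dims[:-1], dims[1:])]
biases = [np.zeros(m) for m in dims[1:]]
print(fcn_forward(rng.standard_normal(4), weights, biases))
```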
Architectures of this type can be deployed for various tasks. For instance, let \(f:Z\rightarrow\mathbb{R}\) be a given function, where \(Z\) is a compact subset of \(\mathbb{R}^{d_{0}}\), with training input-output pairs \(\{(\mathbf{x}_{i},y_{i})\}_{0\leq i\leq N}\subset Z\times\mathbb{R}\) (so that \(y_{i}=f(\mathbf{x}_{i})\)). In order to solve the function regression problem, the parameters can be trained so as to minimize the weighted squared loss
\[L_{0,\mathrm{NN}}\left(\theta\right)=\sum_{i=0}^{N}w_{i}\left|y_{i}-f_{\mathrm{ NN}}^{\theta}\left(\mathbf{x}_{i}\right)\right|^{2}, \tag{2}\]
for some non-negative weights \(\{w_{i}\}_{0\leq i\leq N}\), and yield an approximation \(f_{\mathrm{NN}}^{\theta^{*}}\) to the target function. We further define the norms
\[\left\|g\right\|_{0}:=\left(\sum_{i=0}^{N}w_{i}\left|g(\mathbf{x}_{i})\right|^{2} \right)^{1/2},\qquad\left\|g\right\|_{1}:=\left(\sum_{i=0}^{M}v_{i}\left|g(\mathbf{ y}_{i})\right|^{2}\right)^{1/2} \tag{3}\]
for any \(g:Z\rightarrow\mathbb{R}^{d_{L+1}}\), where \(\{(\mathbf{y}_{i},v_{i})\}_{0\leq i\leq M}\) is a collection of test nodes and associated non-negative weights. The training and test errors are then \(\left\|f-f_{\mathrm{NN}}^{\theta^{*}}\right\|_{0}=\left[L_{0,\mathrm{NN}}(\theta^{*})\right]^{1/2}\) and \(\left\|f-f_{\mathrm{NN}}^{\theta^{*}}\right\|_{1}\) respectively.
Similarly, we can employ NNs for classification tasks. Let \(\{(\mathbf{x}_{i},\chi_{i})\}_{0\leq i\leq N}\subset Z\times\{0,1\}\) be the training set, where \(\chi_{i}\) is the class label assigned to \(\mathbf{x}_{i}\). Set \(d_{L+1}=2\) (so that \(f_{\mathrm{NN}}^{\theta}:\mathbb{R}^{d_{0}}\rightarrow\mathbb{R}^{2}\)) and apply the softmax function to get
\[p_{\mathrm{NN}}^{\theta}(\mathbf{z})=\frac{\exp\left(f_{\mathrm{NN},1}^{\theta}( \mathbf{z})\right)}{\sum_{i=1}^{2}\exp\left(f_{\mathrm{NN},i}^{\theta}(\mathbf{z}) \right)}=\frac{1}{1+\exp\left[-\left(f_{\mathrm{NN},1}^{\theta}(\mathbf{z})-f_{ \mathrm{NN},2}^{\theta}(\mathbf{z})\right)\right]}. \tag{4}\]
This is the likelihood, according to this model, of \(\mathbf{z}\in Z\) belonging to the class labelled \(1\). The training requires the minimization of the logistic loss (i.e., the cross-entropy)
\[\ell_{0,\mathrm{NN}}(\theta)=-\sum_{i=0}^{N}\chi_{i}\ln\left(p_{\mathrm{NN}}^ {\theta}(\mathbf{x}_{i})\right)+(1-\chi_{i})\ln\left(1-p_{\mathrm{NN}}^{\theta}( \mathbf{x}_{i})\right), \tag{5}\]
with the corresponding test loss
\[\ell_{1,\mathrm{NN}}(\theta)=-\sum_{i=0}^{M}\mu_{i}\ln\left(p_{\mathrm{NN}}^ {\theta}(\mathbf{y}_{i})\right)+(1-\mu_{i})\ln\left(1-p_{\mathrm{NN}}^{\theta}(\bm {y}_{i})\right), \tag{6}\]
where \(\{\mu_{i}\}_{0\leq i\leq M}\subset\{0,1\}\) are the class labels of the test nodes.
These tasks can also be performed with the aid of kernel methods [22, 23]. Given a valid kernel function \(\ker:Z\times Z\rightarrow\mathbb{R}\), define the kernel matrix \(H_{\ker}=\left(\ker(\mathbf{x}_{i},\mathbf{x}_{j})\right)_{0\leq i,j\leq N}\), and the weighted kernel matrix \(\widehat{H}_{\ker}=W^{1/2}H_{\ker}W^{1/2}\), where \(W^{1/2}=\mathrm{diag}(\sqrt{w_{0}},\ldots,\sqrt{w_{N}})\). The corresponding kernel approximation for the function regression problem above seeks \(\mathbf{\delta}^{*,\ker}\in\mathbb{R}^{N+1}\) to minimize
\[L_{0,\ker}(\mathbf{\delta})=\sum_{i=0}^{N}w_{i}\left|y_{i}-\sum_{j=0}^{N}\mathbf{ \delta}_{j}\ker(\mathbf{x}_{j},\mathbf{x}_{i})\right|^{2}. \tag{7}\]
It possesses the closed form solution
\[f_{\ker}(\mathbf{z})=\mathbf{y}^{\top}W^{1/2}\left(\widehat{H}_{\ker}\right)^{\dagger }W^{1/2}\left(\ker(\mathbf{x}_{j},\mathbf{z})\right)_{0\leq j\leq N}, \tag{8}\]
where \(\mathbf{y}=\left(y_{i}\right)_{0\leq i\leq N}\) contains the training target values, with training and test losses \(\left\|f-f_{\ker}\right\|_{0}\) and \(\left\|f-f_{\ker}\right\|_{1}\) respectively.
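The closed form (8) translates directly into a few lines of NumPy; the following is a minimal sketch in which `kernel` stands for any symmetric kernel function \(\ker\) (an assumption about the interface, not something fixed by the text):

```python
import numpy as np

def kernel_regression(kernel, X, y, w, Z):
    """Evaluate the kernel approximation (8) at query points Z.

    kernel(A, B) returns the Gram block (ker(a_i, b_j)); X, y, w are the
    training nodes, targets, and non-negative quadrature weights.
    """
    sw = np.sqrt(w)                                   # entries of W^{1/2}
    H_hat = sw[:, None] * kernel(X, X) * sw[None, :]  # W^{1/2} H_ker W^{1/2}
    delta = sw * (np.linalg.pinv(H_hat) @ (sw * y))   # W^{1/2} (H_hat)^+ W^{1/2} y
    return kernel(Z, X) @ delta                       # sum_j delta_j ker(x_j, z)
```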
The logistic regression problem can likewise be solved by finding an \(\mathbf{\alpha}^{*,\ker}\in\mathbb{R}^{N+1}\) that minimizes
\[\ell_{0,\ker}(\mathbf{\alpha})=\sum_{i=0}^{N}\chi_{i}\ln\left[1+\exp(-H_{\ker,(i,:)}\mathbf{\alpha})\right]+(1-\chi_{i})\ln\left[1+\exp(H_{\ker,(i,:)}\mathbf{\alpha} )\right], \tag{9}\]
thus yielding the likelihood
\[p_{\ker}(\mathbf{z})=\left[1+\exp\left(-\sum_{j=0}^{N}\ker(\mathbf{z},\mathbf{x}_{j})\bm {\alpha}_{j}^{*,\ker}\right)\right]^{-1}, \tag{10}\]
according to this classifier, of \(\mathbf{z}\in Z\) lying in class \(1\). The test loss \(\ell_{1,\ker}\) is defined similarly to (6) with (4) replaced by (10).
In case \(\ker\) is induced by a feature map \(\Phi_{\ker}:Z\rightarrow\mathbb{R}^{K}\), so that \(\ker(\mathbf{x},\mathbf{z})=\Phi_{\ker}(\mathbf{x})^{\top}\Phi_{\ker}(\mathbf{z})\), (8) can be understood as an orthogonal projection on \(\mathcal{S}_{\ker}=\mathrm{span}\left(\{\Phi_{\ker,k}\}_{k=1}^{K}\right)\) with respect to \(\langle\cdot,\cdot\rangle_{0}\), the inner product that induces \(\left\|\cdot\right\|_{0}\); a technique for obtaining an optimal representation of this projection is presented in Appendix B. Equivalently, we can interpret (8) as weighted linear regression on the data-points \(\left\{(\Phi_{\ker}(\mathbf{x}_{i}),y_{i})\right\}_{0\leq i\leq N}\), i.e., it implicitly finds \(\mathbf{\gamma}^{*,\ker}\in\mathbb{R}^{K}\) that minimizes the training loss
\[\widetilde{L}_{0,\ker}(\mathbf{\gamma})=\sum_{i=0}^{N}w_{i}\left|y_{i}-\mathbf{\gamma}^ {\top}\Phi_{\ker}(\mathbf{x}_{i})\right|^{2}, \tag{11}\]
so the approximation (8) can also be written as
\[f_{\rm ker}(\mathbf{z})=\left(\mathbf{\gamma}^{*,{\rm ker}}\right)^{\top}\Phi_{\rm ker}( \mathbf{z}). \tag{12}\]
This equivalence between the apparently more general form \(\mathbf{\gamma}^{\top}\Phi_{\rm ker}(\mathbf{z})\) in (12) and the kernel form \(\mathbf{\delta}^{\top}\left(\ker(\mathbf{x}_{j},\mathbf{z})\right)_{0\leq j\leq N}\) in (8) can be viewed as a consequence of the representer theorem [24] or, equivalently, as a corollary of the (easily proven) fact that
\[{\rm range}(A^{\top}A)={\rm range}(A^{\top}), \tag{13}\]
for any matrix \(A\): applied to the weighted feature map matrix \(\widehat{\Xi}_{\rm ker}=\left(\Phi_{{\rm ker},m}(\mathbf{x}_{i})w_{i}^{1/2}\right)_ {1\leq m\leq K,0\leq i\leq N}\in\mathbb{R}^{K\times(N+1)}\), we deduce that for an optimal \(\mathbf{\gamma}^{*,{\rm ker}}\), there exists some \(\mathbf{\delta}^{*,{\rm ker}}\in\mathbb{R}^{N+1}\) such that
\[\widehat{\Xi}_{\rm ker}^{\top}\mathbf{\gamma}^{*,{\rm ker}}=\widehat{\Xi}_{\rm ker }^{\top}\widehat{\Xi}_{\rm ker}\mathbf{\delta}^{*,{\rm ker}}=\widehat{H}_{\rm ker }\mathbf{\delta}^{*,{\rm ker}}, \tag{14}\]
since \(\widehat{H}_{\rm ker}=\widehat{\Xi}_{\rm ker}^{\top}\widehat{\Xi}_{\rm ker}\), and hence that we can restrict our search for \(\mathbf{\gamma}^{*,{\rm ker}}\) to \({\rm range}\left(\widehat{\Xi}_{\rm ker}\right)\). This implies that
\[\left\|f-f_{\rm ker}\right\|_{\nu}^{2}=\widetilde{L}_{\nu,{\rm ker}}(\mathbf{ \gamma}^{*,{\rm ker}}), \tag{15}\]
for \(\nu=0,1\), where the test loss \(\widetilde{L}_{1,{\rm ker}}\) is defined in the same manner as (11) with the test points and weights replacing the training ones.
Similarly, solving the kernel logistic problem can be interpreted as a linear classifier in the feature space, i.e., identifying the decision boundary
\[\left\{\mathbf{x}\in\mathbb{R}^{d_{0}}:\left(\mathbf{\beta}^{*,{\rm ker}}\right)^{\top }\Phi_{\rm ker}(\mathbf{x})=0\right\}, \tag{16}\]
which can be seen as a hyperplane in \(\mathbb{R}^{K}\), by finding \(\mathbf{\beta}^{*,{\rm ker}}\in\mathbb{R}^{K}\) that minimizes
\[\widetilde{\ell}_{0,{\rm ker}}(\mathbf{\beta})=\sum_{i=0}^{N}\chi_{i}\ln\left[1+ \exp\left(-\mathbf{\beta}^{\top}\Phi_{\rm ker}(\mathbf{x}_{i})\right)\right]+(1-\chi_ {i})\ln\left[1+\exp\left(\mathbf{\beta}^{\top}\Phi_{\rm ker}(\mathbf{x}_{i})\right) \right]. \tag{17}\]
Following an argument similar to that used above, we have
\[\ell_{\nu,{\rm ker}}(\mathbf{\alpha}^{*,{\rm ker}})=\widetilde{\ell}_{\nu,{\rm ker }}(\mathbf{\beta}^{*,{\rm ker}}), \tag{18}\]
for \(\nu=0,1\).
## 3 Problem formulation
Given a (fully or partially trained or untrained) neural network \(f_{\rm NN}:\mathbb{R}^{d_{0}}\to\mathbb{R}\), the corresponding Neural Tangent Kernel (NTK) is the kernel induced by the feature map
\[\Phi_{{\rm NTK},k}:=\frac{\partial f_{\rm NN}^{\theta}}{\partial\theta_{k}}, \quad 1\leq k\leq|\theta|, \tag{19}\]
i.e., the Jacobian of the neural network with respect to the trainable parameters [14]. In the infinite width limit, it has been established that, over the course of training a randomly initialized NN, the NTK possesses an unchanging closed form, and that the NN converges to the corresponding NTK machine [12]. This highlights the linear dependence of the NN on the underlying parameters, to wit,
\[f_{\rm NN}^{\theta}=f_{\rm NN}^{\theta_{0}}+(\theta-\theta_{0})^{\top}\left.\left(\frac{\partial f_{\rm NN}^{\theta}}{\partial\theta_{k}}\right)_{1\leq k\leq|\theta|}\right|_{\theta=\theta_{0}}. \tag{20}\]
In fact, since \(f_{\rm NN}^{\theta_{0}}\) is a linear combination of the neurons in the last hidden layer (from (1)), and these neurons correspond to the Jacobian entries for the last layer of parameters, it can be subsumed in the second term in (20) to yield
\[f_{\rm NN}^{\theta}=\mathbf{\gamma}^{\top}\Phi_{\rm NTK} \tag{21}\]
for some \(\mathbf{\gamma}\in\mathbb{R}^{|\theta|}\). Since this is precisely the form of an NTK approximation (from the equivalence of (8) and (12)), we deduce that, in the infinite width limit, a fully trained NN is identical to an optimal regressor employing the NTK.
However, using this insight to obtain performance guarantees and assess limitations of finite width network runs into two problems. First, the assumption of linear dependence on parameters in (20) does not hold [25], so the optimal NTK machine can only be viewed as the best approximator in the tangent space of the NN at \(\theta_{0}\), and not on the manifold of all NNs. Secondly, computing the NTK in practice is a computationally daunting task [19, 20]. In the infinite width regime and under random initialization, a closed form for the NTK can be calculated. However, empirical kernels do not possess such characterizations in general, necessitating the use of low-cost approximations.
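To make the expense tangible, below is a brute-force PyTorch sketch of ours that assembles the empirical NTK by stacking flattened parameter gradients into the Jacobian (19); its cost grows with both the number of samples and \(|\theta|\), which is exactly the bottleneck being described:

```python
import torch

def empirical_ntk(model, X):
    """Brute-force NTK: H[i, j] = <grad_theta f(x_i), grad_theta f(x_j)>."""
    params = [p for p in model.parameters() if p.requires_grad]
    rows = []
    for x in X:
        out = model(x.unsqueeze(0)).squeeze()     # scalar output f_NN(x)
        grads = torch.autograd.grad(out, params)  # one Jacobian row, per (19)
        rows.append(torch.cat([g.reshape(-1) for g in grads]))
    J = torch.stack(rows)                         # shape: (N+1) x |theta|
    return J @ J.T                                # the Gram (kernel) matrix
```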
The Conjugate Kernel (CK) is defined as the kernel induced by the Jacobian only with respect to the parameters \(\{W^{(L+1)},\mathbf{b}^{(L+1)}\}\) in the last layer [21]. Consequently, its feature map can be seen as a truncation of the full Jacobian employed by the NTK. Observe that we have set \(d_{L+1}=1\) so \(W^{(L+1)}\in\mathbb{R}^{d_{L}}\) and \(\mathbf{b}^{(L+1)}\in\mathbb{R}\), and hence
\[\Phi_{\rm CK,k}:=\begin{cases}\dfrac{\partial f_{\rm NN}^{\theta}}{\partial W_{k}^{(L+1)}},&\text{if }1\leq k\leq d_{L},\\[4pt]\dfrac{\partial f_{\rm NN}^{\theta}}{\partial\mathbf{b}^{(L+1)}},&\text{if }k=d_{L}+1.\end{cases} \tag{22}\]
A moment's thought then reveals that
\[\mathrm{CK}(\mathbf{z},\widetilde{\mathbf{z}}):=\Phi_{\rm CK}(\mathbf{z})^{ \top}\Phi_{\rm CK}(\widetilde{\mathbf{z}})=1+\mathbf{z}^{(L)}\cdot\widetilde{\mathbf{z}}^ {(L)}. \tag{23}\]
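In code, (23) says the CK needs only the last hidden layer, so its cost matches a forward pass; a sketch reusing the hypothetical `FCN` class from the Section 2 sketch:

```python
import torch

def last_hidden(model, Z):
    """z^(L): the last hidden layer of the FCN sketch, i.e., the CK feature map."""
    for layer in model.layers[:-1]:
        Z = model.sigma(layer(Z))
    return Z

def conjugate_kernel(model, X, Xp):
    """CK(x, x') = 1 + z^(L)(x) . z^(L)(x'), per (23)."""
    with torch.no_grad():
        return 1.0 + last_hidden(model, X) @ last_hidden(model, Xp).T
```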
**Remark 3.1**: _For an NN trained for the binary classification problem described in the previous section, we have \(d_{L+1}=2\). In this setting, we define the NTK and CK for \(f_{\text{diff}}:=f_{\text{NN},1}^{\theta}-f_{\text{NN},2}^{\theta}\) since, as shown in (4), this difference dictates the output of the logistic predictor._
Denoting by \(\mathrm{E}\) the contribution towards the NTK from every layer but the last allows us to write
\[\mathrm{NTK}(\mathbf{z},\widetilde{\mathbf{z}})=\mathrm{CK}(\mathbf{z}, \widetilde{\mathbf{z}})+\mathrm{E}(\mathbf{z},\widetilde{\mathbf{z}}). \tag{24}\]
The CK can therefore be viewed as a "zeroth-order" approximation to the NTK. All these components are valid kernels that can be calculated from a given NN. However, the cost of computing the CK at a pair of points is the same as that of evaluating the NN at those points since, according to (23), it makes use of the neuron values at the last hidden layer. Moreover, numerical evidence suggests that, while the NTK expectedly outperforms the CK on test sets, the improvement is only marginal and is dwarfed by the superiority both kernels exhibit over the NN architecture from which they are derived (see Section 6). In other words, the CK appears to yield substantive gains in accuracy over a given finite width neural network while being easier to calculate than the NTK and much cheaper to train further than the NN. This raises the intriguing possibility that the CK can serve as a low-cost proxy for the NTK, i.e., lead to a comparable boost in performance over the NN, while being significantly less expensive to assemble than the NTK and considerably easier to train than the NN. Since training the CK is the same as further training the last parameter layer of the NN, our work provides a recipe for boosting NN accuracy and also sheds light on the robustness (or lack thereof) conferred on the approximations by the choice of the activation function.
In the following sections, we provide a mathematical treatment of this phenomenon for two types of problems: regression of smooth functions and classification using logistic regression. For ease of exposition and analysis, we concentrate on these problems in low dimensions with structured training and test nodes. The analyses of these problems are presented in Sections 4 and 5, and are complemented by numerical experiments presented in Section 6.
## 4 Function regression
### 4.1 Preliminaries
We consider the approximation problem for a smooth function \(f:[a,b]\rightarrow\mathbb{R}\) whose values are known at the equispaced training nodes \(x_{j}=a+j\Delta x\) for \(0\leq j\leq N\), where \(\Delta x=(b-a)/N\). The performance is assessed on the equispaced test nodes \(y_{j}=a+j\Delta y\) for \(0\leq j\leq M\), where \(\Delta y=(b-a)/M\); we take \(M=\tau N\), for some integer \(\tau>1\), and set
\[w_{i}=\begin{cases}\Delta x,&1\leq i\leq N-1\\ \Delta x/2,&i=0,N\end{cases},\qquad v_{i}=\begin{cases}\Delta y,&1\leq i\leq M -1\\ \Delta y/2,&i=0,M\end{cases}, \tag{25}\]
so for any \(g:[a,b]\rightarrow\mathbb{R}\), we have
\[\left\|g\right\|_{0}=\left(\frac{\Delta x}{2}\sum_{j=0}^{N-1} \left(g(x_{j})^{2}+g(x_{j+1})^{2}\right)\right)^{1/2},\quad\left\|g\right\|_{1 }=\left(\frac{\Delta y}{2}\sum_{j=0}^{M-1}\left(g(y_{j})^{2}+g(y_{j+1})^{2} \right)\right)^{1/2}. \tag{26}\]
These can be recognized as the trapezoidal rule approximations to the \(L^{2}\) norm on \([a,b]\); recall that this approximation is second-order accurate with respect to the step-size in general and spectrally accurate for periodic functions. Moreover, these choices serve to considerably simplify the analysis: for instance, since \(y_{\tau j}=x_{j}\) for \(0\leq j\leq N\), we have, for any function \(g\),
\[\left\|g\right\|_{0}^{2}=\frac{\Delta x}{2}\sum_{j=0}^{N-1}\left(g(x_{j})^{2}+g(x_{j+1})^{2}\right) \leq \frac{\Delta x}{2}\sum_{j=0}^{N-1}\sum_{i=0}^{\tau}g(y_{\tau j+i})^{2}\leq\left(\frac{M}{N}\right)\left\|g\right\|_{1}^{2}\]
so that
\[\left\|g\right\|_{0} \leq \tau^{1/2}\left\|g\right\|_{1}. \tag{27}\]
This straightforward result allows us to describe the size of the function on the lower-resolution training grid in terms of the higher-resolution test grid. The next two results enable us to move in the opposite direction, i.e., bounding the norm on the finer grid by the norm on the coarser grid. This naturally requires some assumptions about the behaviour of the function away from the training nodes; the following lemmas consider two different settings.
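These discrete norms are easy to compute in practice; the following small NumPy helper is a sketch of (25)-(26), and evaluating it on the coarse and fine grids gives \(\left\|g\right\|_{0}\) and \(\left\|g\right\|_{1}\) respectively:

```python
import numpy as np

def trapezoid_norm(g_vals, a, b):
    """Discrete L^2 norm (26) of samples of g on an equispaced grid over [a, b]."""
    n = len(g_vals) - 1                   # number of sub-intervals
    w = np.full(n + 1, (b - a) / n)       # interior weights, as in (25)
    w[0] = w[-1] = (b - a) / (2 * n)      # halved endpoint weights
    return float(np.sqrt(np.sum(w * np.asarray(g_vals) ** 2)))
```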
### 4.2 Two lemmas
**Lemma 4.1**: _Suppose that \(g\) is monotonic on every sub-interval \([x_{i},x_{i+1}]\). Then,_
\[\left\|g\right\|_{1}\leq\sqrt{2}\left\|g\right\|_{0}. \tag{28}\]
**Proof:** The monotonicity of \(g\) on every sub-interval implies
\[\left|g(y_{\tau i+k})\right|\leq\max\left\{\left|g(x_{i})\right|,\left|g(x_{i+1})\right|\right\},\quad 1\leq k\leq\tau-1,\ 0\leq i\leq N-1, \tag{29}\]
with the result
\[\left\|g\right\|_{1}^{2} = \frac{\Delta y}{2}\sum_{j=0}^{M-1}\left[g(y_{j})^{2}+g(y_{j+1})^{2}\right]\] \[= \frac{\Delta y}{2}\sum_{j=0}^{N-1}\left[g(x_{j})^{2}+g(x_{j+1})^{2}\right]+\frac{\Delta y}{2}\sum_{i=0}^{N-1}\sum_{k=1}^{\tau-1}g(y_{\tau i+k})^{2}\] \[\leq \frac{\Delta y}{2}\sum_{j=0}^{N-1}\left[g(x_{j})^{2}+g(x_{j+1})^{2}\right]+(\tau-1)\frac{\Delta y}{2}\sum_{i=0}^{N-1}\max\left\{g(x_{i})^{2},g(x_{i+1})^{2}\right\}\] \[\leq \frac{1}{\tau}\left\|g\right\|_{0}^{2}+\frac{2(\tau-1)}{\tau}\left\|g\right\|_{0}^{2}\] \[\leq 2\left\|g\right\|_{0}^{2},\]
and hence \(\left\|g\right\|_{1}\leq\sqrt{2}\left\|g\right\|_{0}\).
Combining this with (27), we have the norm equivalence
\[\tau^{-1/2}\left\|g\right\|_{0}\leq\left\|g\right\|_{1}\leq\sqrt{2}\left\|g \right\|_{0}. \tag{30}\]
**Lemma 4.2**: _Suppose that \(g\) is Lipschitz on \([a,b]\) with Lipschitz constant \(L_{g}\). Then,_
\[\left\|g\right\|_{1}^{2}\leq 4\left(\left\|g\right\|_{0}^{2}+\frac{2(b-a)^{2}L_{g }^{2}}{N^{2}}\right). \tag{31}\]
**Proof:** The Lipschitz property implies that
\[\left|g(y_{\tau j+k})\right|\leq\max\left\{\left|g(x_{j})\right|+kL_{g}\Delta y,\left|g(x_{j+1})\right|+(\tau-k)L_{g}\Delta y\right\}\]
for \(0\leq k\leq\tau\). It follows that
\[g(y_{\tau j+k})^{2} \leq 2\left[g(x_{j})^{2}+k^{2}L_{g}^{2}(\Delta y)^{2}+g(x_{j+1})^{2} +(\tau-k)^{2}L_{g}^{2}(\Delta y)^{2}\right]\] \[\leq 2\left[g(x_{j})^{2}+g(x_{j+1})^{2}+2L_{g}^{2}(\Delta x)^{2} \right],\]
so for \(0\leq l\leq\tau-1\), we have
\[g(y_{\tau j+l})^{2}+g(y_{\tau j+l+1})^{2}\leq 4\left[g(x_{j})^{2}+g(x_{j+1})^{2 }+2L_{g}^{2}(\Delta x)^{2}\right].\]
As a result,
\[\sum_{l=0}^{\tau-1}g(y_{\tau j+l})^{2}+g(y_{\tau j+l+1})^{2} \leq 4\tau\left[g(x_{j})^{2}+g(x_{j+1})^{2}+2L_{g}^{2}(\Delta x)^{2}\right]\] \[\Rightarrow\sum_{j=0}^{N-1}\sum_{l=0}^{\tau-1}g(y_{\tau j+l})^{2} +g(y_{\tau j+l+1})^{2} \leq 4\tau\sum_{j=0}^{N-1}\left[g(x_{j})^{2}+g(x_{j+1})^{2}+2L_{g}^{2 }(\Delta x)^{2}\right],\]
and hence
\[M\left\|g\right\|_{1}^{2}\leq 4N\tau\left[\left\|g\right\|_{0}^{2}+2L_{g}^{2}( \Delta x)^{2}\right]\Rightarrow\left\|g\right\|_{1}^{2}\leq 4\left(\left\|g \right\|_{0}^{2}+\frac{2(b-a)^{2}L_{g}^{2}}{N^{2}}\right).\]
### 4.3 Approximation properties
For a smooth target function \(f:[a,b]\rightarrow\mathbb{R}\), let \(f_{\text{CK}}\) and \(f_{\text{NTK}}\) be the CK and NTK approximations to \(f\) with respect to the training nodes. Recall that (8) can be viewed as an orthogonal projection \(\mathcal{P}_{\text{ker}}\) on \(\mathcal{S}_{\text{ker}}\), the span of the feature map components corresponding to the kernel. Since \(\mathcal{S}_{\text{CK}}\subset\mathcal{S}_{\text{NTK}}\), we have
\[\left\|f-f_{\text{NTK}}\right\|_{0}\leq\left\|f-f_{\text{CK}}\right\|_{0}. \tag{32}\]
Next, we establish that the error of the CK approximation may also be controlled by that of the NTK approximation for most target functions. More precisely, for any \(\beta\leq 1\), we define
\[\mathcal{F}_{\beta}=\left\{f:[a,b]\rightarrow\mathbb{R}\text{ is smooth with }\left\|\mathcal{P}_{\text{NTK}}f\right\|_{0}\leq\beta\left\|f\right\|_{0}\right\}. \tag{33}\]
We note that since \(\mathcal{P}_{\text{NTK}}\) is an orthogonal projection, its norm is one and so \(\mathcal{F}_{1}\) contains all smooth real-valued functions. However, most functions of interest would not lie in \(\mathcal{S}_{\text{NTK}}\) and hence would belong to some \(\mathcal{F}_{\beta}\) with \(\beta<1\).
**Lemma 4.3**: _Let \(f\in\mathcal{F}_{\beta}\). Then,_
\[\left\|\mathcal{P}_{\text{NTK}}(f-f_{\text{CK}})\right\|_{0}\leq\beta\left\|f -f_{\text{CK}}\right\|_{0}. \tag{34}\]
**Proof:** We have
\[\left\|f\right\|_{0}^{2}=\left\|\mathcal{P}_{\rm NTK}f\right\|_{0}^{2}+\left\|(I- \mathcal{P}_{\rm NTK})f\right\|_{0}^{2}=\left\|f_{\rm NTK}\right\|_{0}^{2}+ \left\|f-f_{\rm NTK}\right\|_{0}^{2} \tag{35}\]
and
\[\left\|f-f_{\rm CK}\right\|_{0}^{2} = \left\|\mathcal{P}_{\rm NTK}(f-f_{\rm CK})\right\|_{0}^{2}+\left\| (I-\mathcal{P}_{\rm NTK})(f-f_{\rm CK})\right\|_{0}^{2} \tag{36}\] \[= \left\|f_{\rm NTK}-f_{\rm CK}\right\|_{0}^{2}+\left\|f-f_{\rm NTK} \right\|_{0}^{2}\]
since \((I-\mathcal{P}_{\rm NTK})f_{\rm CK}=0\) due to \(\mathcal{P}_{\rm NTK}\mathcal{P}_{\rm CK}=\mathcal{P}_{\rm CK}\). Combining (35) and (36) yields
\[\frac{\left\|f-f_{\rm CK}\right\|_{0}^{2}}{\left\|f\right\|_{0}^ {2}} = \frac{\left\|f_{\rm NTK}-f_{\rm CK}\right\|_{0}^{2}+\left\|f-f_{ \rm NTK}\right\|_{0}^{2}}{\left\|f_{\rm NTK}\right\|_{0}^{2}+\left\|f-f_{\rm NTK }\right\|_{0}^{2}}\] \[\geq \frac{\left\|f_{\rm NTK}-f_{\rm CK}\right\|_{0}^{2}}{\left\|f_{ \rm NTK}\right\|_{0}^{2}}\]
where we used the elementary inequality
\[\frac{\epsilon+\zeta}{\rho+\zeta}\geq\frac{\epsilon}{\rho}, \tag{37}\]
for \(0<\epsilon\leq\rho\) and \(\zeta\geq 0\). As a result,
\[\left\|f_{\rm NTK}-f_{\rm CK}\right\|_{0}^{2} \leq \left(\frac{\left\|f_{\rm NTK}\right\|_{0}^{2}}{\left\|f\right\|_ {0}^{2}}\right)\left\|f-f_{\rm CK}\right\|_{0}^{2}\] \[\leq \beta^{2}\left\|f-f_{\rm CK}\right\|_{0}^{2}\] \[\Leftrightarrow\left\|\mathcal{P}_{\rm NTK}(f-f_{\rm CK})\right\|_ {0} \leq \beta\left\|f-f_{\rm CK}\right\|_{0}\qquad(\because\mathcal{P}_{ \rm NTK}\mathcal{P}_{\rm CK}=\mathcal{P}_{\rm CK}).\]
**Lemma 4.4**: _For any function \(f\in\mathcal{F}_{\beta}\) with \(\beta<1\),_
\[\left\|f-f_{\rm CK}\right\|_{0} \leq \frac{1}{1-\beta}\left\|f-f_{\rm NTK}\right\|_{0}.\]
**Proof:** We have
\[\left\|f-f_{\rm CK}\right\|_{0} \leq \left\|f-f_{\rm NTK}\right\|_{0}+\left\|f_{\rm NTK}-f_{\rm CK} \right\|_{0} \tag{38}\] \[= \left\|f-f_{\rm NTK}\right\|_{0}+\left\|\mathcal{P}_{\rm NTK}(f-f _{\rm CK})\right\|_{0}\qquad(\because\mathcal{P}_{\rm NTK}\mathcal{P}_{\rm CK }=\mathcal{P}_{\rm CK})\] \[\leq \left\|f-f_{\rm NTK}\right\|_{0}+\beta\left\|f-f_{\rm CK}\right\|_ {0}\qquad(\because\text{Lemma \ref{lem:20}})\] \[\Rightarrow\left\|f-f_{\rm CK}\right\|_{0} \leq \frac{1}{1-\beta}\left\|f-f_{\rm NTK}\right\|_{0}.\]
For such functions, we have the following approximation results.
**Theorem 4.1**: _Let \(f\in\mathcal{F}_{\beta}\) for some \(\beta<1\) and suppose that both \((f-f_{\rm CK})\) and \((f-f_{\rm NTK})\) are monotonic on every sub-interval \([x_{i},x_{i+1}]\) for \(0\leq i\leq N-1\). Then, there exist constants \(C_{1},C_{2}\geq 1\) such that_
\[C_{1}^{-1}\left\|f-f_{\rm NTK}\right\|_{1}\leq\left\|f-f_{\rm CK}\right\|_{1} \leq C_{2}\left\|f-f_{\rm NTK}\right\|_{1}. \tag{39}\]
**Proof:** We have
\[\left\|f-f_{\rm NTK}\right\|_{1} \leq \sqrt{2}\left\|f-f_{\rm NTK}\right\|_{0}\qquad(\because\text{Lemma 4.1}) \tag{40}\] \[\leq \sqrt{2}\left\|f-f_{\rm CK}\right\|_{0}\qquad(\because(32))\] \[\leq \sqrt{2\tau}\left\|f-f_{\rm CK}\right\|_{1}\qquad(\because(27)).\]
Similarly,
\[\left\|f-f_{\text{CK}}\right\|_{1} \leq \sqrt{2}\left\|f-f_{\text{CK}}\right\|_{0}\qquad(\because\text{Lemma 4.1}) \tag{41}\] \[\leq \frac{\sqrt{2}}{1-\beta}\left\|f-f_{\text{NTK}}\right\|_{0}\qquad(\because\text{Lemma 4.4})\] \[\leq \frac{\sqrt{2\tau}}{1-\beta}\left\|f-f_{\text{NTK}}\right\|_{1}\qquad(\because(27)).\]
Setting \(C_{1}=\sqrt{2\tau}\) and \(C_{2}=\frac{\sqrt{2\tau}}{1-\beta}\) and combining (40) and (41) yields the desired (39).
Note that Theorem 4.1 requires the residuals to be monotonic, which may be too stringent. The following result rests on a more relaxed assumption.
**Theorem 4.2**: _Let \(f\in\mathcal{F}_{\beta}\) for some \(\beta<1\) and suppose that both \((f-f_{\text{CK}})\) and \((f-f_{\text{NTK}})\) are Lipschitz, with Lipschitz constants \(L_{\text{CK}}\) and \(L_{\text{NTK}}\). Then, there exist constants \(D_{1},D_{2}\geq 1\) such that_
\[\left\|f-f_{\text{NTK}}\right\|_{1}^{2}\leq D_{1}\left\|f-f_{\text{CK}} \right\|_{1}^{2}+\frac{8(b-a)^{2}L_{\text{NTK}}^{2}}{N^{2}} \tag{42}\]
_and_
\[\left\|f-f_{\text{CK}}\right\|_{1}^{2}\leq D_{2}\left\|f-f_{\text{NTK}} \right\|_{1}^{2}+\frac{8(b-a)^{2}L_{\text{CK}}^{2}}{N^{2}}. \tag{43}\]
**Proof:** We have
\[\left\|f-f_{\text{NTK}}\right\|_{1}^{2} \leq 4\left(\left\|f-f_{\text{NTK}}\right\|_{0}^{2}+\frac{2(b-a)^{2}L_{\text{NTK}}^{2}}{N^{2}}\right)\qquad(\because\text{Lemma 4.2}) \tag{44}\] \[\leq 4\left(\left\|f-f_{\text{CK}}\right\|_{0}^{2}+\frac{2(b-a)^{2}L_{\text{NTK}}^{2}}{N^{2}}\right)\qquad(\because(32))\] \[\leq 4\left(\tau\left\|f-f_{\text{CK}}\right\|_{1}^{2}+\frac{2(b-a)^{2}L_{\text{NTK}}^{2}}{N^{2}}\right)\qquad(\because(27)).\]
In addition,
\[\left\|f-f_{\text{CK}}\right\|_{1}^{2} \leq 4\left(\left\|f-f_{\text{CK}}\right\|_{0}^{2}+\frac{2(b-a)^{2}L_{\text{CK}}^{2}}{N^{2}}\right)\qquad(\because\text{Lemma 4.2}) \tag{45}\] \[\leq 4\left(\frac{1}{(1-\beta)^{2}}\left\|f-f_{\text{NTK}}\right\|_{0}^{2}+\frac{2(b-a)^{2}L_{\text{CK}}^{2}}{N^{2}}\right)\qquad(\because\text{Lemma 4.4})\] \[\leq 4\left(\frac{\tau}{(1-\beta)^{2}}\left\|f-f_{\text{NTK}}\right\|_{1}^{2}+\frac{2(b-a)^{2}L_{\text{CK}}^{2}}{N^{2}}\right)\qquad(\because(27)).\]
Setting \(D_{1}=4\tau\) and \(D_{2}=\frac{4\tau}{(1-\beta)^{2}}\) yields the desired (42) and (43).
## 5 Logistic regression
### 5.1 Preliminaries
In this section, we compare the performance of the two kernels under consideration for logistic regression. For brevity of exposition, we consider the problem in two dimensions. Let \(\{\mathbf{x}_{i,j}\}_{0\leq i\leq N_{1},0\leq j\leq N_{2}}\subset Z=[a_{1},b_{1}] \times[a_{2},b_{2}]\) be the training points given by
\[\mathbf{x}_{i,j}=(a_{1}+i\Delta x_{1},a_{2}+j\Delta x_{2}),\]
where \(\Delta x_{1}=(b_{1}-a_{1})/N_{1}\) and \(\Delta x_{2}=(b_{2}-a_{2})/N_{2}\). We further set \(h=\left(\Delta x_{1}^{2}+\Delta x_{2}^{2}\right)^{1/2}\). The test nodes \(\{\mathbf{y}_{i,j}\}_{0\leq i\leq M_{1},0\leq j\leq M_{2}}\) are similarly chosen as
\[\mathbf{y}_{i,j}=(a_{1}+i\Delta y_{1},a_{2}+j\Delta y_{2}),\]
where \(\Delta y_{1}=(b_{1}-a_{1})/M_{1}\) and \(\Delta y_{2}=(b_{2}-a_{2})/M_{2}\). We also assume that \(\tau_{k}=M_{k}/N_{k}\) is an integer greater than one for \(k=1,2\).
Following Section 2, each training and test point is assigned a class labelled \(0\) or \(1\), and denoted respectively by \(\chi_{ij}\) for every \(0\leq i\leq N_{1},0\leq j\leq N_{2}\) and \(\mu_{ij}\) for each \(0\leq i\leq M_{1},0\leq j\leq M_{2}\). Given a kernel function \(\ker:Z\times Z\to\mathbb{R}\), we can define the kernel matrix
\[H_{\ker}=(\ker(\mathbf{x}_{i,j},\mathbf{x}_{k,l}))_{0\leq i,k\leq N_{1},0\leq j,l\leq N_{2}}\in\mathbb{R}^{(N_{1}+1)(N_{2}+1)\times(N_{1}+1)(N_{2}+1)},\]
with the indices arranged suitably.
We note first that the NTK must necessarily outperform the CK on the training points, i.e.,
\[\ell_{0,\mathrm{NTK}}\left(\mathbf{\alpha}^{\star,\mathrm{NTK}}\right)\leq\ell_{ 0,\mathrm{CK}}\left(\mathbf{\alpha}^{\star,\mathrm{CK}}\right). \tag{46}\]
This is most easily seen by rewriting (46) in terms of the feature maps: since
\[\ell_{0,\mathrm{CK}}\left(\mathbf{\alpha}^{\star,\mathrm{CK}}\right)=\widetilde{ \ell}_{0,\mathrm{CK}}\left(\mathbf{\beta}^{\star,\mathrm{CK}}\right),\]
from (18), and \(\Phi_{\mathrm{NTK}}=\left(\Phi_{\mathrm{CK}}\ \ \Phi_{\mathrm{E}}\right)^{\top}\), setting \(\mathbf{\gamma}=\left(\mathbf{\beta}^{\star,\mathrm{CK}}\ \ \mathbf{0}\right)^{\top}\in \mathbb{R}^{|\theta|}\) yields
\[\ell_{0,\mathrm{CK}}\left(\mathbf{\alpha}^{\star,\mathrm{CK}}\right)=\widetilde{ \ell}_{0,\mathrm{CK}}\left(\mathbf{\beta}^{\star,\mathrm{CK}}\right)=\widetilde{ \ell}_{0,\mathrm{NTK}}\left(\mathbf{\gamma}\right)\geq\widetilde{\ell}_{0,\mathrm{ NTK}}\left(\mathbf{\beta}^{\star,\mathrm{NTK}}\right)=\ell_{0,\mathrm{NTK}}\left(\mathbf{ \alpha}^{\star,\mathrm{NTK}}\right). \tag{47}\]
In order to show that an equivalence result in the manner of Theorem 4.1 holds for the test errors, we follow a similar strategy to that for the function regression problem in Section 4:
* (a) identify two cases that permit a transfer of information from training to test nodes;
* (b) extend (46) to an equivalence result on the training nodes for the two kernels.
These ingredients can then be combined to yield the desired conclusion.
### 5.2 Another pair of lemmas
We begin by noting that
\[\ell_{0,\ker}(\mathbf{\alpha}^{\star,\ker})\leq\ell_{1,\ker}(\mathbf{\alpha}^{\star, \ker}) \tag{48}\]
simply by virtue of the fact that every training point is also a test point. To complete (a), we need to bound the error on the finer grid by that on the coarser grid. The quantity of interest is the linear combination of the feature map components
\[\psi_{\ker}(\mathbf{z})=\sum_{m=1}^{|\theta|}\Phi_{\ker,m}(\mathbf{z})\mathbf{\beta}_{m}^{ \star,\ker} \tag{49}\]
as it determines the size of the gap between \(\mathbf{z}\in Z\) and the separating hyperplane in the feature space. However, we also need to keep track of the class label that should be associated with \(\mathbf{z}\) as that determines how it shows up in the loss function (17). Let \(\eta:Z\to\{0,1\}\) be the true class label for every point (so that, e.g., \(\chi_{ij}=\eta(\mathbf{x}_{i,j})\) and \(\mu_{ij}=\eta(\mathbf{y}_{i,j})\)), and set
\[\widehat{\psi}_{\ker}(\mathbf{z})=\left(1-2\eta(\mathbf{z})\right)\psi_{\ker}(\mathbf{z}). \tag{50}\]
Using this function, we can write the minimized training and test losses as simply
\[\widetilde{\ell}_{0,\ker}=\sum_{i=0}^{N_{1}}\sum_{j=0}^{N_{2}}\ln \left[1+\exp\left(\widehat{\psi}_{\ker}(\mathbf{x}_{i,j})\right)\right] \tag{51}\]
and
\[\widetilde{\ell}_{1,\ker}=\sum_{i=0}^{M_{1}}\sum_{j=0}^{M_{2}}\ln \left[1+\exp\left(\widehat{\psi}_{\ker}(\mathbf{y}_{i,j})\right)\right], \tag{52}\]
where we have suppressed the dependence on \(\mathbf{\beta}^{*,\ker}\) for clarity of exposition. For any \(0\leq i\leq N_{1}-1\) and \(0\leq j\leq N_{2}-1\), we also define
\[\Lambda_{ij}=\{\mathbf{x}_{i,j},\mathbf{x}_{i+1,j},\mathbf{x}_{i,j+1},\mathbf{x}_ {i+1,j+1}\}\]
to make it easier to navigate the training grid.
The two regimes that interest us are a corner maximum condition and Lipschitz continuity. We make these precise in the following lemmas.
**Lemma 5.1**: _Suppose that on every rectangle with corners \(\Lambda_{ij}\), the function \(\widehat{\psi}_{\ker}\) achieves its maximum value on \(\Lambda_{ij}\). Then,_
\[\widetilde{\ell}_{1,\ker}\leq\tau_{1}\tau_{2}\widetilde{\ell}_{0,\ker}. \tag{53}\]
**Proof:** Every test point \(\mathbf{y}_{k,l}\) belongs to some rectangle with corners \(\Lambda_{k^{\prime}l^{\prime}}\). The contribution from this point to the test loss is then dominated by the contributions from the corner points, i.e.,
\[\ln\left[1+\exp\left(\widehat{\psi}_{\ker}(\mathbf{y}_{k,l})\right)\right] \leq \max_{k^{\prime}\leq i\leq k^{\prime}+1,l^{\prime}\leq j\leq l^{ \prime}+1}\ln\left[1+\exp\left(\widehat{\psi}_{\ker}(\mathbf{x}_{i,j})\right) \right], \tag{54}\]
with the result that
\[\sum_{k=0}^{M_{1}}\sum_{l=0}^{M_{2}}\ln\left[1+\exp\left( \widehat{\psi}_{\ker}(\mathbf{y}_{k,l})\right)\right] \leq \left(\frac{M_{1}}{N_{1}}\right)\left(\frac{M_{2}}{N_{2}}\right) \sum_{i=0}^{N_{1}}\sum_{j=0}^{N_{2}}\ln\left[1+\exp\left(\widehat{\psi}_{\ker }(\mathbf{x}_{i,j})\right)\right],\]
and hence
\[\widetilde{\ell}_{1,\ker} \leq \tau_{1}\tau_{2}\widetilde{\ell}_{0,\ker}.\]
We say that the labelling function \(\eta\) possesses the _matching property_ with respect to the given training and test nodes if every test point \(\mathbf{y}_{k,l}\) is contained in an enclosing rectangle with corners \(\Lambda_{k^{\prime}l^{\prime}}\) such that \(\eta\) agrees with \(\eta(\mathbf{y}_{k,l})\) on at least one corner point. In other words, we do not have the matching property if and only if there exists some test point \(\mathbf{y}_{k,l}\) such that, for any rectangle with corners \(\Lambda_{k^{\prime}l^{\prime}}\) that may enclose it, we have \(\eta(\mathbf{y}_{k,l})\notin\eta\left(\Lambda_{k^{\prime}l^{\prime}}\right)\).
**Lemma 5.2**: _Suppose that \(\psi_{\ker}\) is Lipschitz continuous on \(Z\), with Lipschitz constant \(L_{\ker}\), and \(\eta\) possesses the matching property. Then,_
\[\widetilde{\ell}_{1,\ker} \leq \exp(hL_{ker})\tau_{1}\tau_{2}\widetilde{\ell}_{0,\ker}. \tag{55}\]
**Proof:** For any test point \(\mathbf{y}_{k,l}\), let \(\Lambda_{k^{\prime}l^{\prime}}\) contain the corners of the enclosing rectangle that respect the matching property. Without loss of generality, we can assume that \(\eta(\mathbf{y}_{k,l})=\eta(\mathbf{x}_{k^{\prime},l^{\prime}})=0\). We have
\[\psi_{\ker}\left(\mathbf{y}_{k,l}\right)\leq\psi_{\ker}\left(\mathbf{x}_ {k^{\prime},l^{\prime}}\right)+hL_{\ker},\]
and hence
\[\ln\left[1+\exp\left(\psi_{\rm ker}(\mathbf{y}_{k,l}) \right)\right] \leq \ln\left[1+\exp\left(\psi_{\rm ker}\left(\mathbf{x}_{k^{ \prime},l^{\prime}}\right)\right)\exp\left(hL_{\rm ker}\right)\right]\] \[\leq \exp\left(hL_{\rm ker}\right)\ln\left[1+\exp\left(\psi_{\rm ker} \left(\mathbf{x}_{k^{\prime},l^{\prime}}\right)\right)\right],\]
where we used the fact that
\[\ln\left(1+\rho x\right)\leq\rho\ln(1+x) \tag{56}\]
whenever \(\rho\geq 1\) and \(x\geq 0\). In the case \(\eta(\mathbf{y}_{k,l})=\eta(\mathbf{x}_{k^{\prime},l^{ \prime}})=1\), we similarly obtain
\[\ln\left[1+\exp\left(-\psi_{\rm ker}(\mathbf{y}_{k,l}) \right)\right] \leq \exp\left(hL_{\rm ker}\right)\ln\left[1+\exp\left(-\psi_{\rm ker} \left(\mathbf{x}_{k^{\prime},l^{\prime}}\right)\right)\right],\]
and, consequently,
\[\sum_{k=0}^{M_{1}}\sum_{l=0}^{M_{2}}\ln\left[1+\exp\left(\widehat {\psi}_{\rm ker}(\mathbf{y}_{k,l})\right)\right] \leq \exp\left(hL_{\rm ker}\right)\left(\frac{M_{1}}{N_{1}}\right) \left(\frac{M_{2}}{N_{2}}\right)\sum_{i=0}^{N_{1}}\sum_{j=0}^{N_{2}}\ln\left[1 +\exp\left(\widehat{\psi}_{\rm ker}(\mathbf{x}_{i,j})\right)\right]\]
so that
\[\widetilde{\ell}_{1,{\rm ker}} \leq \exp\left(hL_{\rm ker}\right)\tau_{1}\tau_{2}\widetilde{\ell}_{0,{\rm ker}}.\]
In combination with (48), Lemmas 5.1 and 5.2 allow us to move seamlessly between training and test losses. Note that the assumptions underlying these lemmas can only conceivably be satisfied by smooth \(\psi_{\rm ker}\), thus almost completely ruling out the suitability of ReLU activations. On the other hand, the Tanh activation function equips the kernels with Lipschitz continuity by default; the matching property is a technical assumption to rule out pathological cases where the label of a test point disagrees with those of all of its neighbouring training points.
### 5.3 Performance comparison
Next, we address task (b) outlined at the end of Subsection 5.1. Note that for any \(0\leq i\leq N_{1}\), \(0\leq j\leq N_{2}\), we have
\[\ln\left[1+\exp\left(\widehat{\psi}_{\rm CK}\left(\mbox{\boldmath $x$}_{i,j}\right)\right)\right] = \ln\left[1+\exp\left(\widehat{\psi}_{\rm CK}\left(\mbox{\boldmath $x$}_{i,j}\right)-\widehat{\psi}_{\rm NTK}\left(\mathbf{x}_{i,j} \right)\right)\exp\left(\widehat{\psi}_{\rm NTK}\left(\mathbf{x}_{i,j }\right)\right)\right]\] \[\leq \max\left\{1,\exp\left(\widehat{\psi}_{\rm CK}\left(\mathbf{x}_{i,j}\right)-\widehat{\psi}_{\rm NTK}\left(\mathbf{x}_{i,j }\right)\right)\right\}\ln\left[1+\exp\left(\widehat{\psi}_{\rm NTK}\left( \mathbf{x}_{i,j}\right)\right)\right]\]
where we used (37) again. Set
\[\omega = \max_{0\leq i\leq N_{1},0\leq j\leq N_{2}}\exp\left(\widehat{\psi }_{\rm CK}\left(\mathbf{x}_{i,j}\right)-\widehat{\psi}_{\rm NTK} \left(\mathbf{x}_{i,j}\right)\right); \tag{57}\]
it follows from (46) that \(\omega\geq 1\). We can therefore write
\[\widetilde{\ell}_{0,{\rm CK}}\leq\omega\widetilde{\ell}_{0,{\rm NTK}}. \tag{58}\]
Combining (46) and (58) with (48) and Lemmas 5.1 and 5.2 then yields the following two results.
**Theorem 5.1**: _Suppose that on every rectangle with corners \(\Lambda_{ij}\), the functions \(\widehat{\psi}_{\rm NTK}\) and \(\widehat{\psi}_{\rm CK}\) achieve their maximum values on \(\Lambda_{ij}\). Then, there exist constants \(C_{1},C_{2}\geq 1\) such that_
\[C_{1}^{-1}\ell_{1,{\rm NTK}}\leq\ell_{1,{\rm CK}}\leq C_{2}\ell_{1,{\rm NTK}} \tag{59}\]
**Proof:** We have
\[\ell_{1,{\rm NTK}} \leq (\tau_{1}\tau_{2})\ell_{0,{\rm NTK}}\qquad(\because\text{Lemma 5.1}) \tag{60}\] \[\leq (\tau_{1}\tau_{2})\ell_{0,{\rm CK}}\qquad(\because(46))\] \[\leq (\tau_{1}\tau_{2})\ell_{1,{\rm CK}}\qquad(\because(48)).\]
Similarly,
\[\ell_{1,{\rm CK}} \leq (\tau_{1}\tau_{2})\ell_{0,{\rm CK}}\qquad(\because\text{Lemma 5.1}) \tag{61}\] \[\leq (\tau_{1}\tau_{2}\omega)\ell_{0,{\rm NTK}}\qquad(\because(58))\] \[\leq (\tau_{1}\tau_{2}\omega)\ell_{1,{\rm NTK}}\qquad(\because(48)).\]
Setting \(C_{1}=\tau_{1}\tau_{2}\) and \(C_{2}=\tau_{1}\tau_{2}\omega\) and combining (60) and (61) yields (59).
**Theorem 5.2**: _Suppose that both \(\psi_{\rm NTK}\) and \(\psi_{\rm CK}\) are Lipschitz continuous on \(Z\) and \(\eta\) possesses the matching property. Then, there exist constants \(D_{1},D_{2}\geq 1\) such that_
\[D_{1}^{-1}\ell_{1,{\rm NTK}}\leq\ell_{1,{\rm CK}}\leq D_{2}\ell_{1,{\rm NTK}} \tag{62}\]
**Proof:** As before, let \(L_{\rm ker}\) denote the Lipschitz constant of \(\psi_{\rm ker}\) for each of the two kernels. We then have
\[\ell_{1,{\rm NTK}} \leq (\tau_{1}\tau_{2})e^{hL_{\rm NTK}}\ell_{0,{\rm NTK}}\qquad(\because\text{Lemma 5.2}) \tag{63}\] \[\leq (\tau_{1}\tau_{2})e^{hL_{\rm NTK}}\ell_{0,{\rm CK}}\qquad(\because(46))\] \[\leq (\tau_{1}\tau_{2})e^{hL_{\rm NTK}}\ell_{1,{\rm CK}}\qquad(\because(48)).\]
Similarly,
\[\ell_{1,{\rm CK}} \leq (\tau_{1}\tau_{2})e^{hL_{\rm CK}}\ell_{0,{\rm CK}}\qquad(\because\text{Lemma 5.2}) \tag{64}\] \[\leq (\tau_{1}\tau_{2}\omega)e^{hL_{\rm CK}}\ell_{0,{\rm NTK}}\qquad(\because(58))\] \[\leq (\tau_{1}\tau_{2}\omega)e^{hL_{\rm CK}}\ell_{1,{\rm NTK}}\qquad(\because(48)).\]
Setting \(D_{1}=\tau_{1}\tau_{2}e^{hL_{\rm NTK}}\) and \(D_{2}=\tau_{1}\tau_{2}\omega e^{hL_{\rm CK}}\) and combining (63) and (64) yields (62).
## 6 Numerical tests
In this section, we present numerical evidence to support the theoretical findings established in Sections 4 and 5. In addition, we identify several benefits of employing CK approximations over those coming from the NN and NTK, including better conditioning, inexpensive gains in accuracy, and improved robustness.
We first consider the approximation problem for smooth functions \(f:[-1,1]\to\mathbb{R}\). As examples, we use
\[f_{1}(x)=e^{\sin(2\pi x)},\qquad f_{2}(x)=e^{3x},\qquad f_{3}(x)=\cos\left(e^{ 3x}\right). \tag{65}\]
We note that \(f_{3}\) oscillates rapidly close to \(x=1\) so it poses a particularly stiff challenge to NN and kernel approximators alike.
The sizes of the training and testing grids, as defined in Subsection 4.1, are fixed at \(N=200\) and \(M=600\) respectively, so \(\tau=3\). We train NNs of the form (1) with \(L=3\) and \(d_{l}=d=128\) for \(1\leq l\leq 3\) for \(2400\) epochs using the loss function defined in (2) and the weights given in (25). We employ the ADAM optimizer with the learning rate set at \(10^{-3}\) and primarily use Tanh as the activation function.
After training an NN, we assemble the CK and NTK and compute the corresponding kernel approximations (8). The details of the algorithm used for extracting the NTK are given in Appendix A. The CK is easily obtained from the values of the last hidden layer and using (23). Since the feature space dimension for the CK is \((d+1)\), assembling and using the corresponding Jacobian is also inexpensive. The results from using the Jacobian are indicated by CKJ in the following plots; a recipe for using this approximation is detailed in Appendix B.
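Pulling these steps together, below is a condensed sketch of this pipeline for the target \(f_{1}\), reusing the hypothetical `FCN` and `last_hidden` helpers from the earlier sketches; the CKJ fit here is plain weighted least squares on the feature-map form (12), standing in for the more careful recipe of Appendix B:

```python
import math
import torch

N = 200
X = torch.linspace(-1.0, 1.0, N + 1).unsqueeze(1)   # equispaced training nodes
y = torch.exp(torch.sin(2 * math.pi * X))           # target f_1
w = torch.full((N + 1, 1), 2.0 / N)                 # trapezoidal weights (25)
w[0] = w[-1] = 1.0 / N

model = FCN(d0=1, d_hidden=128, L=3)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(2400):                               # minimize the weighted loss (2)
    opt.zero_grad()
    loss = (w * (y - model(X)) ** 2).sum()
    loss.backward()
    opt.step()

# CKJ: weighted least squares directly on Phi_CK(x) = (z^(L)(x), 1)
with torch.no_grad():
    Phi = torch.cat([last_hidden(model, X), torch.ones(N + 1, 1)], dim=1)
    sw = w.sqrt()
    gamma = torch.linalg.lstsq(sw * Phi, sw * y).solution
```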
Figure 1: The results of function regression for 100 different NNs, trained for 2400 epochs, and the corresponding approximations using the NTK, CK, and CKJ extracted from the NN at the end of the training.
Figure 2: The test errors for NNs over the course of being trained to approximate the given functions, and for the corresponding NTK, CK, and CKJ approximations. The errors are averaged over ten iterations to reduce the effects of random initialization.
In Figure 1, we show the test errors for the example target functions for 100 runs of the procedure described above. Owing to randomness in the initialization, the results vary over iterations; nevertheless, a clear separation is evident between the NN test errors and those of the two kernel approximations. More importantly, we note that the latter pair are fairly similar so that, while one may appear to be superior to the other for a particular example (e.g., the NTK for \(f_{1}\)), the difference is only marginal compared to the supremacy both enjoy over the NN.
Figure 1 also shows that the CK Jacobian yields test errors that improve on the kernel approximations by several orders of magnitude. This is a consequence of the much better conditioning possessed by the former: assembling the kernel matrix \(\widehat{H}_{\text{ker}}\) squares the singular values, and applying its pseudo-inverse in (8) inflates round-off errors more than using the Jacobian does, with the result that the test errors plateau earlier. We emphasize that the use of the Jacobian is only afforded by the CK since the dimension of the feature space corresponding to the NTK is prohibitively large.
These claims are further supported by the evolution of the errors over the course of the training, shown in Figure 2. The test errors are computed by averaging over ten iterations to reduce the effect of random initialization. While the NN errors can be seen to decay gradually, the kernel approximations do so more rapidly, particularly over the first few hundred epochs. The close resemblance between the two error profiles is noteworthy and supports our contention that the CK can serve as a proxy for the NTK. Finally, we note that, as expected, the CKJ errors decay even faster and level off at a threshold several orders of magnitude lower.
In Figure 3, we show that these conclusions are also valid when we repeat the experiments with the width of the hidden layers in the NN set to \(d=256\). In particular, this assuages any concerns that the beneficial properties enjoyed by the CK and CKJ approximations may be a consequence of solving an overdetermined least-squares problem: with the larger width, this is no longer the case, and the similarity in the corresponding
Figure 3: Function regression results for 100 NNs with the widths of the hidden layers set to 256, and the corresponding NTK, CK, and CKJ approximations.
Figure 4: The test errors for function regression for 100 trained NNs using the ReLU activation function, and the corresponding NTK and CK approximations.
Figure 5: Averaged test errors for function regression over the course of training ten NNs using ReLU activations, and the corresponding NTK and CK approximations.
error plots in Figures 1 and 3 establishes that our assertions are independent of NN width.
The same, however, cannot be said about the activation functions. Figure 4 shows that using ReLU activations instead of Tanh leads to the erasure of the error segregation seen in the earlier diagrams. In addition, the NTK approximators paradoxically appear to perform the worst, with the CK results only slightly bettering the NN ones. Figure 5 highlights that this is generally the case over the course of the NN training as well. These observations can be explained as a combined effect of the over-parameterization provided by the NTK and \(\mathcal{S}_{\text{NTK}}\) containing discontinuous functions (due to the derivatives of the ReLU activations; see Appendix A, and (78) in particular, for more details). The resulting approximator manages to fit the training data very well but struggles to generalize to the test data-points. In contrast, \(\mathcal{S}_{\text{CK}}\) consists only of continuous (albeit non-differentiable) functions, with the result that, while the approximation accuracy is lower than we observed with Tanh, it does not suffer from over-fitting. This can be regarded as indicative of the dangers associated with over-parameterization, the care that must be taken with ReLU-based approximators on unseen data, and the robustness possessed by the CK.
Next, we present the results for binary logistic regression on \(Z=[-1,1]^{2}\). We set \(N_{1}=11\), \(N_{2}=7\), \(M_{1}=22\) and \(M_{2}=21\) (so \(\tau_{1}=2\) and \(\tau_{2}=3\)) to specify the uniform training and testing grids (see Subsection 5.1 for the details). The labels are generated with respect to a known separating boundary \(F(x_{1},x_{2})=0\) for some \(F:Z\rightarrow\mathbb{R}\), so the labelling function is \(\eta(x_{1},x_{2})=\left(\text{sign}\left(F(x_{1},x_{2})\right)+1\right)/2\); we consider two examples:
\[F_{1}(x_{1},x_{2})=4x_{1}^{2}-3x_{1}+5x_{2}-1,\quad F_{2}(x_{1},x_{2})=2x_{1}^{ 3}-0.6x_{1}^{2}-1.94x_{1}+x_{2}+0.2. \tag{66}\]
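A short NumPy sketch of this grid-and-label construction (grid sizes per the text; the labelling function \(\eta\) follows the sign convention above):

```python
import numpy as np

def make_grid_labels(F, n1, n2):
    """Uniform grid on [-1, 1]^2 with labels eta = (sign(F) + 1) / 2."""
    g1 = np.linspace(-1.0, 1.0, n1 + 1)
    g2 = np.linspace(-1.0, 1.0, n2 + 1)
    x1, x2 = np.meshgrid(g1, g2, indexing="ij")
    X = np.stack([x1.ravel(), x2.ravel()], axis=1)
    return X, (np.sign(F(X[:, 0], X[:, 1])) + 1) / 2

F1 = lambda x1, x2: 4 * x1**2 - 3 * x1 + 5 * x2 - 1
X_train, chi = make_grid_labels(F1, 11, 7)    # N1 = 11, N2 = 7
X_test, mu = make_grid_labels(F1, 22, 21)     # M1 = 22, M2 = 21
```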
We train NNs of the form (1), with \(L=2\) and \(d_{l}=d=128\) for \(l=1,2\), until the training accuracy (i.e., the proportion of correctly identified points) crosses \(0.85\) or \(4000\) epochs elapse.
Figure 6: The test cross-entropies and accuracies for logistic regression performed with 100 NNs, and the corresponding NTK and CK results.
Figure 7: The cross-entropies and accuracies for the test dataset for logistic regression performed with 100 NNs with hidden layer widths set to 64, and the corresponding NTK and CK results.
Figure 8: Test results for 100 NNs using the ReLU activation function for logistic regression, and the corresponding NTK and CK metrics.
Figure 9: Evolution of the test metrics, averaged over ten iterations, of NNs used to solve the logistic regression problem, and the corresponding NTK and CK results.
We employ the ADAM optimizer, with learning rate \(10^{-5}\), and use Tanh as the primary activation function. The NTK and CK are extracted as detailed before and the respective training cross-entropies, defined in (9), are minimized by using Newton's method to solve \(\nabla_{\boldsymbol{\alpha}}\ell_{0,\ker}(\boldsymbol{\alpha})=\boldsymbol{0}\). We consider both cross-entropy loss values and classification accuracies as assessment metrics on the test points.
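For reference, here is a minimal Newton (IRLS) sketch for solving \(\nabla_{\boldsymbol{\alpha}}\ell_{0,\ker}(\boldsymbol{\alpha})=\boldsymbol{0}\); the gradient \(H^{\top}(p-\chi)\) and Hessian \(H^{\top}\mathrm{diag}(p(1-p))H\) follow from standard logistic-regression algebra, and the small ridge term is our own numerical safeguard rather than part of the text:

```python
import numpy as np

def kernel_logistic_newton(H, chi, iters=25, ridge=1e-8):
    """Minimize the kernel logistic loss (9) by Newton's method."""
    alpha = np.zeros(H.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(H @ alpha)))     # per-point likelihoods (10)
        grad = H.T @ (p - chi)                     # gradient of (9)
        hess = H.T @ ((p * (1 - p))[:, None] * H)  # Hessian of (9)
        alpha -= np.linalg.solve(hess + ridge * np.eye(len(alpha)), grad)
    return alpha
```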
The plots in Figure 6 show the results of 100 iterations of this procedure. A clear partition is visible in the performance metrics of the NN and the two kernels on the test datasets. At the same time, while the CK is generally outperformed by the NTK, the degree of improvement is dwarfed by the separation from the NN results.
Figure 7 shows that this is still the case, even when the hidden widths of the NN are halved to 64, thus demonstrating that the CK performance is not simply a consequence of over-parameterization. However, Figure 8 shows that the clear gap in performance vanishes when the activation function employed is ReLU. As in the case of function regression, this is explainable by over-fitting as a result of the volatile mix of over-parameterization and discontinuous feature map components.
Finally, Figure 9 shows the evolution of the performance metrics over the course of the NN training. We again show the average of ten iterations to reduce the effect of random initialization. The separation alluded to earlier is evident throughout, as is the relative proximity of the loss and accuracy values of the two kernels. Moreover, we note that the gaps shrink with training, so much so that the CK metrics are barely distinguishable from the NTK ones; the NN performance improves but a sizeable gulf nevertheless persists. In particular, this highlights a crucial point: even when the two kernels have been extracted from a minimally trained NN, they yield better results than the NN does at the end of training and when its metrics have plateaued. This underscores that using the CK (i.e., explicitly retraining only the last layer of the NN) is a low-cost recipe for significantly improving the accuracy of an NN.
## 7 Application to Foundation Models
In this section, we demonstrate the effectiveness of our prescribed training strategy on a foundation model. Specifically, we make use of the pre-trained GPT-2 [26] for sentiment classification on a human-annotated subset of the Sentiment140 dataset consisting of 359 tweets, each labelled positive or negative according to the expressed sentiment [27]. We use this subset, rather than the entire Sentiment140 dataset, because of a distribution shift between the train and test splits, as reported in [28] and seen in Figure 10. We evaluate two approaches:
* We fine-tune (FT) by starting from the pre-trained weights, attaching a randomly initialized classification head, training for ten epochs, and allowing _all_ the weights to update.
* We perform linear probing (LP) by extracting a feature vector at the final hidden layer and training a logistic regression classifier for ten epochs.
Figure 10: Kernel density estimate of the largest principal component of GPT-2’s pre-trained final embedding representation of Sentiment140 \(P\) (the processed training set) and Sentiment140 \(M\) (the manually curated test set).
We note that these procedures can be seen as analogues of the NN and CK training respectively studied in the earlier sections. In particular, both NN and FT entail updates to all the parameters in the architecture, while the CK and LP approaches employ the feature map from the last hidden layer of a partially-trained architecture for logistic regression. Consequently, while the latter approach is significantly less expensive than the former, it can, as shown in the earlier sections, yield a marked improvement in accuracy.
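A sketch of the LP variant using Hugging Face `transformers` and scikit-learn is given below; pooling via the final hidden state of the last token and the `max_iter` setting are illustrative assumptions of ours, and `train_texts`/`train_labels`/`test_texts`/`test_labels` are hypothetical placeholders for the curated Sentiment140 subset:

```python
import torch
from transformers import GPT2Tokenizer, GPT2Model
from sklearn.linear_model import LogisticRegression

tok = GPT2Tokenizer.from_pretrained("gpt2")
gpt2 = GPT2Model.from_pretrained("gpt2").eval()

def features(texts):
    """Frozen feature map: final hidden state of each tweet's last token."""
    out = []
    with torch.no_grad():
        for t in texts:
            ids = tok(t, return_tensors="pt", truncation=True)
            h = gpt2(**ids).last_hidden_state   # shape (1, seq_len, 768)
            out.append(h[0, -1].numpy())
    return out

# the linear probe: logistic regression on frozen features (the CK analogue)
clf = LogisticRegression(max_iter=1000).fit(features(train_texts), train_labels)
print("test accuracy:", clf.score(features(test_texts), test_labels))
```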
We use pre-trained weights from Hugging Face for the transformer layers [29]. Furthermore, to establish a baseline, we also perform linear probing with GPT-2 initialized with random weights drawn from the truncated normal distribution \(\mathcal{N}(0,0.02)\) [26]. All experiments are performed 50 times with the training and testing data shuffled. The averaged results, along with the standard errors, are reported in Table 1 with further details about the hyperparameters provided in Appendix C, Table 2.
We first note that the clear superiority (on both training and testing metrics) of linear probing following pre-training over random features highlights the significant refinement the feature map undergoes after being exposed to a training corpus. Next, the extremely high training accuracy from fine-tuning suggests that the architecture is overfitting and hence prone to poor generalization. This is supported by the considerably lower test accuracy. On the other hand, linear probing performs better on the test dataset, and the significantly smaller disparity between its training and testing accuracies is indicative of its robustness against overfitting. Finally, we applied QLoRA [31] on GPT-2 to efficiently fine-tune the model on \(M\). The trade-off for using quantized, low-rank adapters is apparent compared to fine-tuning GPT-2 and the linear probing approach.
## 8 Discussion
Despite possessing the universal approximation property in theory [32, 33] and enjoying the expressiveness that accompanies increased depth [8], in practice neural networks are stymied by optimization errors in their efforts to approximate given functions to arbitrary accuracy. Finding the optimal parameters requires the navigation of highly complex and non-convex loss landscapes and demands a large number of training epochs, thus driving up the training cost. Furthermore, the somewhat cumbersome architectures do not admit closed-form characterizations of the learned functions, making it harder to determine their generalization properties, e.g., assess them on unseen data or predict their stability under the addition of seemingly negligible noise.
Since the NTK controls the evolution during training of the learned function, it provides one way of supplying the missing characterization. In particular, an optimally trained NN, in the infinite width limit, can be shown to equal a kernel machine with a closed form known _a priori_. From (21), it follows that, for finite width networks, a kernel machine relying on the empirical NTK would necessarily outperform the NN on the training data. The numerical tests in Section 6 confirm that this is also the case on test datasets across different problems, network widths, and training stages, provided the Tanh activation function is used. However, evaluating the resulting approximations (e.g., (8) and (10)) at a new point requires computing the NTK by pairing this point with every training point, which is a highly expensive undertaking for networks used in practical applications. Worse still, the NTK performance may be inferior to that of the NN when ReLU activations are employed (e.g., Figures 4 and 8) due to discontinuities in its feature map components. Furthermore, the resulting sensitivity to small perturbations in the inputs may also leave these approximations singularly susceptible to adversarial attacks.
In this paper, we studied the properties of approximations relying on the CK and found, both theoretically and empirically, that their performance is closely tied to the NTK results, provided certain regularity conditions on the approximations are met. These are satisfied by default for the Tanh activation function,
| Model | Training Accuracy | Test Accuracy |
| --- | --- | --- |
| Random feature GPT-2 + LP | \(85.7\pm 0.27\%\) | \(57.8\pm 1.05\%\) |
| Pre-trained GPT-2 + LP | \(95.3\pm 0.15\%\) | \(86.3\pm 0.55\%\) |
| Pre-trained GPT-2 + FT | \(99.9\pm 0.02\%\) | \(83.7\pm 0.81\%\) |
| Pre-trained GPT-2 + QLoRA + FT | \(70.0\pm 0.28\%\) | \(66.9\pm 0.77\%\) |

Table 1: Accuracy results for training and testing the LP and FT approaches on a human-curated subset of the Sentiment140 dataset. Mean values \(\pm\) the standard error of the mean are shown.
thus establishing that the gains in accuracy for the NTK over the NN are also inherited by the CK approximations. Importantly, the training cost of a CK machine is significantly lower than for an NN, yielding a closed form formula for function regression (8) and a convex optimization problem for logistic regression (9). In addition, its evaluation on new data-points is as inexpensive as evaluating the NN on the same points. As a result, the CK achieves a boost in accuracy over the NN comparable to that of the NTK while being strikingly cheaper to use than the latter and much easier to train than the former.
The CK also possesses features that, in some settings, make it even more favourable than the NTK. Due to the relatively low dimension of its feature space, the usage of its feature map form (12) for function regression is computationally viable. This approximation is much better conditioned than the corresponding kernel form, with the result that round-off errors are not magnified excessively and the approximation errors can achieve a much lower plateau, as shown in Figures 1, 2 and 3. In addition, the components of the feature map corresponding to the CK are continuous, even when using ReLU activations. As a consequence, the resulting approximations are less prone to overfitting and more robust to noise in the inputs.
We note that training a CK machine is equivalent to finding the optimal weights to attach to the neurons in the last hidden layer. Our work then suggests that a low-cost strategy for improving the accuracy of a trained NN is to simply retrain the last layer of parameters. This recipe does not alter the form of the neurons (i.e., the feature map components for the CK) but, as we have shown, can lead to significant enhancements in approximation accuracy. The regularity of the activation function is a key determinant in the success of this strategy. Observe that the choice of the activation function does not appear to affect the NN approximation errors too much, but the kernel performance is closely tied to it (e.g., compare Figures 1 and 4, and Figures 6 and 8). This indicates that the effects of the regularity of the activation function are realized only on the kernel machines, and that the NN is largely agnostic to it.
Deploying our strategy on a foundation model for classification illustrates that freezing the feature map and training only the last layer of parameters yields performance that compares favourably with full architecture training. In addition, this approach might even lend improved robustness to the predictive capabilities in that the limited degrees of freedom resist overfitting on the training data. Our results therefore hint at the possibility that early stopping for model fitting might be beneficial in more ways than is currently surmised and that there might be subtle procedures to modulate stopping times across various parts of the network. Our recommended approach offers a computational speed up without the large trade-off in accuracy as seen in other efficient fine-tuning methods.
Finally, our results can also offer an explanation as to why feed-forward regression networks perform so well. In essence, the information captured in the last hidden layer's feature representation uniquely determines the CK. While lossy, the CK still preserves enough information from the global feature representation encoded in the NTK. And so by construction, these neuro-architectures learn and then distill the relevant information by the final hidden layer for the regression tasks. Our use of approximation theory not only offers an elegant reason as to why these networks work but also a clever way to enhance them as well.
## Acknowledgements
The work of SQ is supported by the Department of Energy (DOE) Office of Advanced Scientific Computing Research (ASCR) through the Pacific Northwest National Laboratory Distinguished Computational Mathematics Fellowship (Project No. 71268). The work of AD, MV, AE, and TC were partially supported by the Mathematics for Artificial Reasoning in Science (MARS) initiative via the Laboratory Directed Research and Development (LDRD) Program at PNNL. The work of PS is partially supported by the U.S. Department of Energy, Office of Science, Advanced Scientific Computing Research program under the Scalable, Efficient and Accelerated Causal Reasoning Operators, Graphs and Spikes for Earth and Embedded Systems (SEACROGS) project (Project No. 80278). Pacific Northwest National Laboratory is a multi-program national laboratory operated for the U.S. Department of Energy by Battelle Memorial Institute under Contract No. DE-AC05-76RL01830. |
2305.06842 | Emotion Recognition for Challenged People Facial Appearance in Social
using Neural Network | Human communication is the vocal and non verbal signal to communicate with
others. Human expression is a significant biometric object in picture and
record databases of surveillance systems. Face appreciation has a serious role
in biometric methods and is good-looking for plentiful applications, including
visual scrutiny and security. Facial expressions are a form of nonverbal
communication; recognizing them helps improve the human machine interaction.
This paper proposes an idea for face and enlightenment invariant credit of
facial expressions by the images. In order on, the person's face can be
computed. Face expression is used in CNN classifier to categorize the acquired
picture into different emotion categories. It is a deep, feed-forward
artificial neural network. Outcome surpasses human presentation and shows poses
alternate performance. Varying lighting conditions can influence the fitting
process and reduce recognition precision. Results illustrate that dependable
facial appearance credited with changing lighting conditions for separating
reasonable facial terminology display emotions is an efficient representation
of clean and assorted moving expressions. This process can also manage the
proportions of dissimilar basic affecting expressions of those mixed jointly to
produce sensible emotional facial expressions. Our system contains a
pre-defined data set, which was residential by a statistics scientist and
includes all pure and varied expressions. On average, a data set has achieved
92.4% exact validation of the expressions synthesized by our technique. These
facial expressions are compared through the pre-defined data-position inside
our system. If it recognizes the person in an abnormal condition, an alert will
be passed to the nearby hospital/doctor seeing that a message. | P. Deivendran, P. Suresh Babu, G. Malathi, K. Anbazhagan, R. Senthil Kumar | 2023-05-11T14:38:27Z | http://arxiv.org/abs/2305.06842v1 | # Emotion Recognition for Challenged People Facial Appearance in Social using Neural Network
###### Abstract
Human communication comprises vocal and non-verbal signals. Facial expression is a significant biometric object in the image and video databases of surveillance systems. Face recognition plays a serious role in biometric methods and is attractive for numerous applications, including visual surveillance and security. Facial expressions are a form of nonverbal communication; recognizing them helps improve human-machine interaction. This paper proposes an approach for face- and illumination-invariant recognition of facial expressions from images. First, the person's face is located. The face expression is then passed to a CNN (Convolutional Neural Network) classifier to categorize the acquired picture into different emotion categories. A CNN is a deep, feed-forward artificial neural network. The outcome surpasses human performance and shows robustness across alternate poses. Varying lighting conditions can influence the fitting process and reduce recognition precision. Results illustrate that reliable facial expression recognition under changing lighting conditions, separating reasonable facial expressions that display emotions, is an efficient representation of clean and mixed moving expressions. This process can also manage the proportions of dissimilar basic emotional expressions that are mixed together to produce realistic emotional facial expressions. Our system contains a pre-defined data set, developed by a data scientist, which includes all pure and mixed expressions. On average, the data set has achieved 92.4% exact validation of the expressions synthesized by our technique. These facial expressions are compared with the pre-defined data set inside our system. If the system recognizes the person in an abnormal condition, an alert will be passed to a nearby hospital/doctor as a message.
is processed independently [22]. This helps us reduce noise in the synthesized face. The facial appearance prepares the XM and the subsequent Self-Organizing Map (SOM) learning method. Imagine that every expression can change in appearance starting from the look of a neutral face; it may be changed and saved as a model in the node [11]. An objective function chooses an exact model by joining the trained XM for a given emotion with a neutral target facial appearance image [5], in order to look for the merger of two essential emotions. Our procedure takes a partial grouping of the two methods represented through each node. Accordingly, the expressive face is composed end to end, and the grouping of patterns toward the target face image looks sensible while preserving enough detail to identify the image.
## 2 Literature Survey
Facial expression is an ordinary signal used by humans and varies depending on mood. There have been several attempts to build models for facial expression analysis [2], which have been applied in numerous fields like robotics, gaming, and medical assistive systems [10]. In the twentieth century, Ekman defined how different types of emotions can be characterized [11]: humans display expressions such as irritation, fear, happiness, sadness, dislike, disgust, and surprise. Existing work on facial expression recognition evaluates performance on established datasets [6]. In recent years, computers have gained increasingly powerful computing capability and huge data sets have become available [7, 8]. Machine learning algorithms have been compared to traditional methods [12]. A machine learning pipeline integrates two stages, feature extraction [5] and classification [13]. The process can automatically extract the internal facial features of the sample data [15], has powerful feature extraction capabilities, and is related to computer vision (CV). Computers can readily identify facial expressions [14] and determine identity, with applications in entertainment such as social media, content-based systems, justice, and healthcare. There are different approaches, such as wavelets and their coefficients [1]. Zhang showed that a lower resolution (64x64) is sufficient [18]. Every human emotion in an image can be separated into different classes such as joy, unhappiness, repulsion, irritation, fright, and shock [19].
Meanwhile, performance has been enhanced by combining image, voice, and textual data in a series of competitions. Mollahosseini explained [22, 23] that deep learning with CNNs can be applied to accessible databases. After extracting the face from the data set, each image was resized to 48x48 pixels [20]. Each architecture contains different layers, and this method adds two inception-style modules [17]. The convolution layers have dimensions 1x1, 3x3, and 5x5, and the capacity of the network varies from system to system. Increasing the image dimension while the network remains shallow yields low performance; such techniques can also reduce the over-fitting problem [24]. For data processing, various networks are used to evaluate performance on the face image classification task [25]. A new CNN algorithm was proposed to detect action units (AUs) in faces [9]: two convolution layers feed max-pooling into a connected layer, and the activations detect the parts of the image, as explained by Yolcu [12]. After passing the image through the CNN, the face region can be cropped and the key values found. The iconic expression can be obtained from every image by employing the CNN algorithm to perceive facial appearance.
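As a point of reference for the architectures surveyed above, the following is a minimal, hypothetical PyTorch sketch of a small CNN for 48x48 grayscale face images with seven emotion classes; the layer widths are illustrative assumptions, not the networks from the cited works.

```python
import torch.nn as nn

class EmotionCNN(nn.Module):
    """Small CNN for 48x48 grayscale faces, 7 emotion classes (illustrative)."""
    def __init__(self, n_classes=7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),    # 48 -> 24
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 24 -> 12
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 12 -> 6
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(128 * 6 * 6, 256), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(256, n_classes),
        )
    def forward(self, x):
        return self.classifier(self.features(x))
```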
## 3 Facial Expression Sigmoid Function
The block diagram in Fig. 1 represents the essential and mixed expressions synthesized through the frame separation approach. The synthesized output of CNN training is taken as an input for feature extraction of the target image. Several steps are carried out before an alert message is passed to the hospital. The CNN classifier accepts the training data and checks the different types of facial expressions by using the XM classifier.
The synthesized expressions can appear in wrinkles, furrows, and teeth, and look natural on the face shape of the subject. Here the output O = F(Z\({}_{2}\)) is the sigmoid of Z\({}_{2}\), bounded between 0 and 1 through the exponential term e\({}^{-Z_{2}}\). It appears in the computation graph in both forward propagation and backpropagation, and the result is simply the sigmoid of Z\({}_{2}\); thus \(\partial\)O/\(\partial\)Z\({}_{2}\) is efficiently derived from the sigmoid function.
Figure 1: Architecture flow diagram
\[F(x)=\operatorname{sigmoid}(x)=(1+e^{-x})^{-1}\]
\[F^{\prime}(x)=(1+e^{-x})^{-1}\left[1-(1+e^{-x})^{-1}\right]=\operatorname{sigmoid}(x)\,(1-\operatorname{sigmoid}(x))\]
\[\partial O/\partial Z_{2}=O\,(1-O)\]
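The identity \(F^{\prime}(x)=F(x)(1-F(x))\) can be checked numerically; the short NumPy sketch below compares the analytic derivative with a central finite difference.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x = np.linspace(-4, 4, 9)
analytic = sigmoid(x) * (1 - sigmoid(x))               # F'(x) = F(x)(1 - F(x))
h = 1e-6
numeric = (sigmoid(x + h) - sigmoid(x - h)) / (2 * h)  # central difference
print(np.max(np.abs(analytic - numeric)))              # ~1e-10: the identity holds
```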
In deep learning [13], a convolutional network belongs to the class of deep neural networks. Such networks are regularly used to evaluate and represent images [11]. They are shift-invariant (space-invariant) networks, owing to the shared-weights design and translation-equivariant characteristics, and they have applications in image classification problems analyzed with a well-defined network topology [9, 14]. The proposed algorithm requires only one neutral face picture of the subject. Related work was presented in the preceding section [6, 10]; Section III deals with the image partitioning method; the method is grouped and explained in Section IV; and Section V gives the output results [29]. Figure 2, shown below, represents the facial image classification model, compared with the existing mechanism in a similar part of the image classification analysis.
## 4 Classification Analysis & Probability
Individual stable space (ISS) is an approach to face recognition under uncontrolled conditions. There usually exist many variations within face images taken under uncontrolled conditions, such as changes in pose, illumination, and expression. Most previous work on face recognition focuses on particular variations and frequently assumes the absence of others. This paper directly deals with face recognition under unconstrained conditions [27]. The solution is the individual stable space, which only expresses personal characteristics. A neural network is designed to map a raw face image onto the ISS. Three ISS-based algorithms are then developed for FR under unconstrained conditions. There are no restrictions on the images fed into these algorithms, and unlike other methods, they do not need additional training before being tested [28]. These advantages make them practical to apply under unconstrained circumstances.
The algorithms were tested on three huge face databases with massive variation and achieve greater performance than existing FR techniques. This paper describes a face recognition process built on top of PCA (Principal Component Analysis) and LDA (Linear Discriminant Analysis). The technique consists of two steps: first, we project the face image from the original vector space to a face subspace via PCA; second, we use LDA to obtain the best linear classifier. The fundamental idea of combining PCA and LDA is to improve the generalization capacity of LDA when only a small number of samples per class are available. Using PCA, we construct a face subspace in which we apply LDA to perform classification. Using the FERET dataset, a significant enhancement can be demonstrated when principal components rather than original images are fed to the LDA classifier.
\[\Pr(Y=k\mid X=x)=\frac{\pi_{k}f_{k}(x)}{\sum_{l=1}^{K}\pi_{l}f_{l}(x)}\]
The above formula reduces the dimension of the data points for image classification: the probability that a data point belongs to a class is constructed using Bayes' theorem. Let X = (x\({}_{1}\), x\({}_{2}\), ..., x\({}_{p}\)) be drawn from a multivariate Gaussian distribution. Here K is the number of classes, Y is the response variable, and \(\pi_{k}\) is the prior probability that a random observation is associated with the k-th class. The class-conditional density f\({}_{k}\)(x) = Pr(X=x | Y=k) takes a large value if there is a high probability of observing a sample from the k-th class at X=x. The combined classifier with PCA and LDA provides a useful framework for other image recognition tasks.
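A minimal NumPy/SciPy sketch of this Bayes posterior, assuming Gaussian class-conditional densities \(f_{k}\); the `priors`, `means`, and `covs` arguments are illustrative placeholders for quantities estimated from training data.

```python
import numpy as np
from scipy.stats import multivariate_normal

def posterior(x, priors, means, covs):
    """Pr(Y=k | X=x) = pi_k f_k(x) / sum_l pi_l f_l(x) with Gaussian f_k."""
    dens = np.array([multivariate_normal.pdf(x, mean=m, cov=c)
                     for m, c in zip(means, covs)])
    unnorm = np.asarray(priors) * dens      # pi_k f_k(x), one entry per class
    return unnorm / unnorm.sum()            # normalize over all K classes
```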
### Personalizing Traditional Music Recommendation
Although fans of traditional music were found to be underrepresented on social media and music streaming platforms, they represent a significant target for music recommender systems. We therefore focus on this cluster of listeners and examine a large array of recommendation approaches and variants for the task of music artist recommendation. Within the group of traditional music listeners, the evaluation further categorizes users according to demographics and temporal music consumption behavior. We describe the outcome of the initial recommendation experiment and the insight gained for the listener group under consideration.
Figure 2: Functional diagram of a neural network model
### _Personalized Music Recommendation System Based on Hybrid Filtering_
Due to the range and fuzziness of tunes and the required accuracy of melody matching, a recommendation algorithm relying on peak accuracy alone cannot completely match the user's preferences. For this difficulty, this paper proposes a hybrid recommendation algorithm based on a collaborative filtering algorithm and music genes, and designs a personalized music recommendation system [27]. The scheme first computes recommendation results according to the collaborative filtering algorithm and estimates the potential benefit to the customer. Then every piece of music is weighted by preference on top of its composition genes. After selection, the songs with the highest preference are taken as suggestions [25]. Lastly, the two suggested outcomes are combined by weighting and filtering to make the recommendation. The experimental data indicate that the enhanced method can raise the accuracy of recommendations and meet users' demands at different levels.
## 5 Implementation
### _Input Video_
Live video captured from the camera is taken as the input.
### _Frame Separation_
Frame processing is the first step in the background subtraction algorithm. This step aims to condition the captured video frames by removing noise and unnecessary items in the frame, to increase the amount of information gained from the frame. Preprocessing is a collection of simple image processing tasks that convert the raw input video into a form the system can process. Preprocessing the video is essential to improve the detection of moving objects, for example by spatial and temporal smoothing of disturbances such as swaying plants on a tree.
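A minimal OpenCV sketch of this preprocessing stage, assuming the MOG2 background subtractor as the motion model; the `history` and `varThreshold` values are illustrative, not taken from the paper.

```python
import cv2

# MOG2 models the background per pixel; moving objects become foreground.
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)

def preprocess(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)   # spatial smoothing against noise
    mask = subtractor.apply(gray)              # foreground mask for moving objects
    return gray, mask
```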
### _Image pre-processing_
* Image representation mainly comprises the following steps:
* Import the image using acquisition tools;
* Analyze and test the image;
* Report the output based on the analysis of the image.
### _Feature Extraction_
Feature extraction is a type of dimensionality reduction that efficiently represents an image as a compact feature vector. This approach is useful when large image sizes must be reduced for tasks such as illustration, matching, and retrieval.
### _Database_
The database contains pre-defined face patterns obtained from feature extraction, with which the user's face is compared and the emotion is detected.
### _CNN Algorithm_
**Step-1:** Read a frame from the camera:
frame = camera.read()
**Step-2:** Resize the frame and convert it to grayscale:
frame = imutils.resize(frame, width=500)
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
**Step-3:** Detect faces:
faces = face_detection.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=12, minSize=(60, 60), flags=cv2.CASCADE_SCALE_IMAGE)
**Step-4:** Prepare a drawing canvas and a copy of the frame:
canvas = np.zeros((500, 700, 3), dtype="uint8")
frameClone = frame.copy()
**Step-5:** If at least one face was found, keep the largest one (sorted by bounding-box area):
if len(faces) > 0:
    faces = sorted(faces, reverse=True, key=lambda x: x[2] * x[3])[0]
**Step-6:** Unpack the bounding box of the selected face:
(fX, fY, fW, fH) = faces
**Step-7:** Extract the face region from the grayscale image and resize it to a fixed size of (28x28) pixels.
**Step-8:** Assign the ROI values for classification via the CNN:
roi = gray[fY:fY+fH, fX:fX+fW]
roi = cv2.resize(roi, (28, 28))
roi = img_to_array(roi)
**Step-9:** Normalize the pixel values:
roi = roi.astype("float") / 255.0
**Step-10:** Add a batch dimension and compute the class probabilities:
roi = np.expand_dims(roi, axis=0)
preds = emotion_classifier.predict(roi)[0]
**Step-11:** Map the prediction to an emotion label and update its counter:
EMOTIONS = ["angry", "disgust", "scared", "happy", "sad", "surprised", "neutral"]
label = EMOTIONS[preds.argmax()]
if label == 'happy':
    varHappy = varHappy + 1
**Step-12:** Check the type of emotion against its threshold:
if label == 'sad':
    varSad = varSad + 1; raise an alert if varSad > thresh
if label == 'angry':
    varAngry = varAngry + 1; raise an alert if varAngry > thresh
**Step-13:** Likewise for the remaining classifications:
if label == 'surprised':
    varSurprised = varSurprised + 1; raise an alert if varSurprised > thresh
if label == 'disgust':
    varDisgust = varDisgust + 1; raise an alert if varDisgust > thresh
### Classification
Artificial neural networks are used in various classification tasks, for example on images and audio. Different types of neural networks are employed, from recurrent networks for predicting series of images to regular feed-forward networks; in particular, an LSTM can be combined with a convolutional neural network for image classification. The proposed algorithm senses the emotions on the face and sends a mail to the consumer when irregular facial emotions are found.
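As an illustration of the alerting step, here is a hypothetical sketch using Python's standard `smtplib`; the host and addresses are placeholders, and the paper does not specify the actual delivery mechanism.

```python
import smtplib
from email.message import EmailMessage

def send_alert(label, host="smtp.example.org", to="clinic@example.org"):
    """Email a short alert when an abnormal emotion count exceeds its threshold."""
    msg = EmailMessage()
    msg["Subject"] = f"Patient alert: sustained '{label}' expression detected"
    msg["From"] = "monitor@example.org"
    msg["To"] = to
    msg.set_content(f"The monitoring system detected a sustained '{label}' state.")
    with smtplib.SMTP(host) as server:   # placeholder SMTP relay
        server.send_message(msg)
```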
## 6 Result and Analysis
Fig. 3 above shows a neutral-face result: angry = 0.82%, disgust = 0.15%, scared = 7.89%, happy = 22.18%, sad = 8.10%, surprised = 1.33%, neutral = 53.85%; the neutral value is higher than the other attributes.
The graph in Fig. 4 shows the performance comparison using the facial classifier technique: stressed = 2.6, sleepy = 2.68, tired = 3.08, walking = 2.36, wake up = 3.24, coordination = 2.224, and fall asleep = 2.24; the final output reports tired as the maximum percentage.
Figure 3: Neutral face |
2310.08429 | Revisiting Data Augmentation for Rotational Invariance in Convolutional
Neural Networks | Convolutional Neural Networks (CNN) offer state of the art performance in
various computer vision tasks. Many of those tasks require different subtypes
of affine invariances (scale, rotational, translational) to image
transformations. Convolutional layers are translation equivariant by design,
but in their basic form lack invariances. In this work we investigate how best
to include rotational invariance in a CNN for image classification. Our
experiments show that networks trained with data augmentation alone can
classify rotated images nearly as well as in the normal unrotated case; this
increase in representational power comes only at the cost of training time. We
also compare data augmentation versus two modified CNN models for achieving
rotational invariance or equivariance, Spatial Transformer Networks and Group
Equivariant CNNs, finding no significant accuracy increase with these
specialized methods. In the case of data augmented networks, we also analyze
which layers help the network to encode the rotational invariance, which is
important for understanding its limitations and how to best retrain a network
with data augmentation to achieve invariance to rotation. | Facundo Manuel Quiroga, Franco Ronchetti, Laura Lanzarini, Aurelio Fernandez-Bariviera | 2023-10-12T15:53:24Z | http://arxiv.org/abs/2310.08429v1 | # Revisiting Data Augmentation for Rotational Invariance in Convolutional Neural Networks
###### Abstract
Convolutional Neural Networks (CNN) offer state of the art performance in various computer vision tasks. Many of those tasks require different subtypes of affine invariances (scale, rotational, translational) to image transformations. Convolutional layers are translation equivariant by design, but in their basic form lack invariances. In this work we investigate how best to include rotational invariance in a CNN for image classification. Our experiments show that networks trained with data augmentation alone can classify rotated images nearly as well as in the normal unrotated case; this increase in representational power comes only at the cost of training time. We also compare data augmentation versus two modified CNN models for achieving rotational invariance or equivariance, Spatial Transformer Networks and Group Equivariant CNNs, finding no significant accuracy increase with these specialized methods. In the case of data augmented networks, we also analyze which layers help the network to encode the rotational invariance, which is important for understanding its limitations and how to best retrain a network with data augmentation to achieve invariance to rotation.
Keywords:Neural Networks, Convolutional Networks, Rotational Invariance, Data Augmentation, Spatial Transformer Networks, Group Equivariant Convolutional Networks, MNIST, CIFAR10
## 1 Introduction
Convolutional Neural Networks (CNNs) currently provide state of the art results for most computer vision applications [_Dieleman et al., 2016_]. Convolutional layers learn the parameters of a set of FIR filters. Each of these filters can be seen as a weight-tied version of a traditional feedforward layer. The weight-tying is performed in such a way that the resulting operation exactly matches the convolution operation.
While feedforward networks are very expressive and can approximate any smooth function given enough parameters, a consequence of the weight tying scheme is that convolutional layers do not have this property. In particular, traditional CNNs cannot deal with objects in domains where they appear naturally rotated in arbitrary orientations, such as texture recognition _[Marcos et al. 2016]_, handshape recognition _[Quiroga et al. 2017]_, or galaxy classification _[Dieleman et al. 2015]_.
Dealing with rotations, or other set of geometric transformations, requires the network to be invariant or equivariant to those transformations. A network \(\mathbf{f}\) is invariant to a transformation \(\mathbf{\varphi}\) if transforming the input to the network \(\mathbf{x}\) with \(\mathbf{\varphi}\) does not change the output of the network, that is, we have \(\mathbf{f}\left(\mathbf{\varphi(x)}\right)=\mathbf{f}\left(\mathbf{x}\right)\) for all \(\mathbf{x}\). A network is equivariant to a transformation \(\mathbf{\varphi}\) if its output changes _predictably_ when the input is transformed by \(\mathbf{\varphi}\). Formally, it is equivariant if there exists a smooth function \(\mathbf{\varphi}\)' such that for all \(\mathbf{x}\), we have \(\mathbf{f}\left(\mathbf{\varphi(x)}\right)=\mathbf{\varphi}\)'(\(\mathbf{f}\left(\mathbf{x}\right)\)) [Dieleman et al. 2016].
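A two-line numerical check of the invariance definition: the Euclidean norm \(f(x)=\|x\|\) satisfies \(f(\varphi(x))=f(x)\) for any rotation \(\varphi\).

```python
import numpy as np

def rot(theta):
    """2-D rotation matrix."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

f = np.linalg.norm                          # f(x) = ||x|| is rotation-invariant
x = np.array([1.0, 2.0])
print(np.isclose(f(rot(0.7) @ x), f(x)))    # True: f(phi(x)) == f(x)
```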
Depending on the application, invariance and/or equivariance may be required in different layers of the network.
While traditional CNNs are translation equivariant by design _[Dieleman et al., 2016]_, they are neither invariant nor equivariant to other types of transformations in usual training/usage scenarios. There are two basic schemes for providing rotation invariance to a network: augmenting the data or the model.
Data-augmentation is a very common method for achieving invariance to geometric transformations of the input and improving generalization accuracy. Invariance and equivariance to rotations via data augmentation has been studied for Deep Restricted Boltzmann Machines _[Larochelle et al., 2007]_ as well as HOGs and CNNs _[Lenc and Vedaldi, 2014]_. These results show evidence in favour of the hypothesis that traditional CNNs can learn automatically equivariant and invariant representations by applying transformations to their input. However, these networks require a bigger computational budget to train since the transformation space of the inputs must be explored by transforming them.
Other approaches modify the model or architecture instead of the input to achieve invariance. For example, some modified models employ rotation invariant filters, or pool multiple predictions made with rotated versions of the input images. However, it is unclear how and whether these modifications improve traditional CNNs trained with data augmentation, both in terms of efficiency and representational power. Furthermore, the mechanisms by which traditional CNNs achieve invariances to rotation is still poorly understood, and how as well as how best to augment data to achieve rotational invariance.
This paper compares modified CNNs models with data augmentation techniques for achieving rotational invariance for image classification. We perform experiments with various well understood datasets (MNIST, CIFAR10), and provide evidence for the fact that despite clever CNNs modifications, data augmentation is still necessary with the new models and paired with traditional CNNs can provide similar performance while remaining simpler to train and understand.
## 2 Review of Convolutional Neural Networks Models with Rotational Invariance
In this subsection we review the literature on modified CNN models for rotation invariance.
Many modifications for CNNs have been proposed to provide rotational invariance (or equivariance) [12, 13, 14, 15, 16, 17].
Some researchers claim that for classification it is usual to prefer that the lower layers of the network encode equivariances, so that multiple representations of the input can coexist, and the upper layers of the network encode invariances, so that they can collapse those multiple representations in a useful fashion [13]. In this way, we can make the network learn all the different orientations of the object as separate entities, and then map all those representations to a single class label.
Alternatively, we can add an explicit image reorientation scheme that is applied to the image before passing it as an input to the network. In this way, the network can ignore the rotation of the object and learn a representation in a unique, canonical orientation, which simplifies the network design. However, this requires an additional model that can predict the orientation of the object.
The first approach puts the invariance near the output; the second puts it near the input by making the input layer rotation invariant. Moreover, for some objects we desire not just whole-image rotation invariance, but also invariance for some of the object's parts. For example, the arms of a person may rotate around the shoulders. It is clear that for these types of problems making the input invariant to global rotations is insufficient.
The following subsection reviews modified CNN models that deal with rotation (in,equi)-variance. We divide them into two groups: those that attempt to deal with the rotation problem globally by **transforming the input image or feature map**, and those that propose to **modify the convolution layer** in some sense to produce equivariant features and, optionally, a way to turn those features into invariant ones.
### Transformation of the input image or feature map
_Spatial Transformer Networks (STN) [12]_ defines a new Spatial Transformer Layer (STL) (Figure ) that can learn to rotate input feature maps to a canonical orientation so that subsequent layers can focus on the canonical representation. Actually, STLs can also learn to correct arbitrary affine transformations by employing a sub-network that takes the feature maps as a parameter and outputs a 6-dimensional vector that encodes the affine transformation parameters. The transformation is applied via a differentiable bilinear interpolation operation. While typically the STLs are added as the first
layer of the network, the layers are modular and can be added at any point in the network's convolutional pipeline.
_Deep Symmetry Networks (DSN) [14]_ also transforms the image prior to convolution and max-pools the results, but adds an iterative optimization procedure over the 6-dimensional space of affine transformations to find a transformation that maximally activates the filter, mixing ideas from TIP and STN. In spirit, their approach is similar to the STN approach, but the optimization procedure is less elegant than the jointly trained localization network and could also be seen as a form of data augmentation. They compare against pure data augmentation on the MNIST and NORB datasets and find that while DSNs have better performance when training with fewer than 10000 samples, data augmentation achieves the same performance at that point.
_Transformation-Invariant Pooling (TIP) [10]_ proposes to define a set of transformations \(\mathbf{\Phi}\) = {\(\mathbf{\varphi[1]}\),..., \(\mathbf{\varphi[n]}\)} to which the network must have invariance, and then train a siamese network with a subnetwork \(\mathbf{N[i]}\) for each transformation \(\mathbf{\varphi[i]}\). The input to the i-th subnetwork is \(\mathbf{\varphi[i](x)}\), that is, subnetworks share parameters but each is fed with an input transformed in a different way. A max-pooling operation is performed on the vector of outputs of the siamese network for each class. In this way the output of the network is still a vector of probabilities, one for each class. This pooling operation is crucial since it provides the invariance needed; before that operation, the representation would be (at most) equivariant.
The set of transformations \(\mathbf{\Phi}\) is user-defined and can include a set of fixed rotations; the authors show that whenever \(\mathbf{\Phi}\) forms a group then the siamese network is guaranteed to be invariant to \(\mathbf{\Phi}\) (assuming the input images are only affected by that set of transformations, else the network will be _approximately_ invariant). TIP can be viewed as a form of test-time data augmentation that prepares the network by data-augmenting the training of the feature extraction part of the network.
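A minimal PyTorch sketch of TIP-style pooling, assuming \(\mathbf{\Phi}\) is the group of 90-degree rotations (exact on the pixel grid) and `net` returns per-class scores; this is an illustration of the pooling idea, not the authors' implementation.

```python
import torch

def tip_forward(net, x, n_rot=4):
    """Transformation-Invariant Pooling: max over class scores of rotated inputs."""
    # torch.rot90 gives exact k*90-degree rotations of the image batch (N, C, H, W).
    outs = [net(torch.rot90(x, k, dims=(2, 3))) for k in range(n_rot)]
    return torch.stack(outs, dim=0).max(dim=0).values   # per-class max-pool
```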
Figure : Architecture of a Spatial Transformer Layer from [10]. The layer transform feature maps \(\mathbf{U}\) to feature maps \(\mathbf{V}\) by applying an affine transform \(\mathbf{T}\). The parameters of the affine transform \(\mathbf{\theta}\) are predicted by a localization network.
### 2.2 Modifications of the convolution layer
_Flip-Rotate-Pooling Convolutions (FRPC) [Wu et al., 2015]_ extends the convolution layer by rotating the filter instead of the image. In this way, an oriented convolution generates additional feature maps by rotating traditional convolutional filters in **n*r** fixed orientations (**n*r** is a parameter). Then, max-pooling along the channel dimension is applied to the responses of the **n*r** orientations so that a single feature map results. No comparison to data augmentation approaches was performed. While the number of parameters of the layer is not increased, the runtime memory and computational requirements of the layer are multiplied by **n*r**, although the number of parameters for the full network can actually be reduced since the filters are more expressive.
The same approach is used in [Marcos et al., 2016] for texture classification; they do perform comparisons with data augmentation but only for 20 samples. Given that the rotation prior is very strong in texture datasets, this is an unfair comparison.
_Exploiting Cyclic Symmetry in CNNs [Dieleman et al., 2016]_ presents a method similar to FPRC alongside variants that provide equivariance as well as invariance. _Oriented response networks (ORN) [Zhou et al., 2017]_ are related to FPRC, but also introduce an ORAlign layer that instead of pooling the set of features maps generated by a rotating filter reorders them in a SIFT-inspired fashion which also provides invariance.
_Dynamic Routing Between Capsules [Sabour et al., 2017]_ presents a model whose units, capsules, are groups of neurons designed to mimic the action of cortical columns. Capsules are designed to be invariant to complicated transformations of the input. Their outputs are merged at the deepest layer, and so are only invariant to global transformations.
_Group Equivariant Convolutional Networks_ (GCNN) and _Steerable CNNs [Cohen and Welling, 2016a,Cohen and Welling, 2016b]_ also use the same basic methods but provide a more formal theory for guaranteeing the equivariance of the intermediate representations by viewing the set of transformations of the filters as a group. _Learning Steerable Filters for Rotation Equivariant CNNs [Weiler et al, 2018]_ also employs the same approach. _Spherical CNNs [Cohen et al, 2018]_ extend this approach to the 3D case.
In particular, Group CNNs _[Cohen and Welling, 2016a]_ add an additional _rotation_ dimension to the convolutional layer. This dimension allows computing rotated versions of the feature maps in \(0^{\circ}\), \(90^{\circ}\), \(180^{\circ}\) and \(270^{\circ}\) orientations, as well as their corresponding horizontally flipped versions. The first convolution _lifts_ the image channels by adding this dimension; further convolutions compute the convolution across all channels and rotations, so that the filter parameters are of size **Channels*Rotations*H*W**, where H and W are the height and width of the filter. The bias term is shared across all rotation dimensions. To _return_ to the normal representation of feature maps, the rotation dimension is reduced via a max operation, obtaining invariance to rotation.
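A minimal PyTorch sketch of the lifting convolution for the four 90-degree rotations; invariance can later be obtained by a max over the rotation dimension (`.max(dim=2)`). This illustrates the idea under these assumptions and is not the reference GCNN implementation.

```python
import torch
import torch.nn.functional as F

def lifted_conv(x, weight, bias=None):
    """Lifting layer of a p4 Group CNN: the same filters in four orientations.

    x: (N, C_in, H, W); weight: (C_out, C_in, 3, 3).
    Returns (N, C_out, 4, H, W); max over dim=2 yields rotation invariance.
    """
    outs = []
    for k in range(4):                               # 0, 90, 180, 270 degrees
        w_k = torch.rot90(weight, k, dims=(2, 3))    # rotate every filter
        outs.append(F.conv2d(x, w_k, bias, padding=1))
    return torch.stack(outs, dim=2)
```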
Figure: Transformations of a filter made by a Group CNN, from _[Cohen and Welling, 2016a]_. Filters are rotated and flipped and then applied to the feature maps. The rotation and flip operations form an algebraic group.
We note that all previous models are variations of the same strategy: augment convolutions with equivariance to a finite set of transformations by rotating or adapting the filters, then provide a method to collapse the equivariance into an invariance before the final classification.
_Deformable Convolutional Networks [Dai et al., 2017]_ learns filters of arbitrary shape. Each position of the filter can be arbitrarily spatially translated, via a mapping that is learned. Similarly to STN, they employ a differentiable bilinear filter strategy to sample from the input feature map. While not restricted to rotation, this approach is more general than even STN since the transformation to be learned is not limited by affine or other priors; hence we do not consider it for the experiments.
## 3 Experiments
We performed two types of experiments to understand data augmentation for rotational invariance and to compare it with other methods. We used the MNIST and CIFAR10 datasets (Figure 1) in our experiments [LeCun et al. 1998, Krizhevsky et al. 2009] because they are well known, the behavior of common networks such as those tested here is better understood than for other datasets, and they cover both synthetic grayscale images and RGB natural images.
The data augmentation we employed consists of rotating the input image by a random angle in the range [0\({}^{\circ}\),360\({}^{\circ}\)]. For the MNIST dataset, previous works employ the MNISTrot version in which images are rotated by 8 fixed angles. While using a discrete set of rotations allows some models to learn a fixed set of filters for guaranteed equivariance, we chose to use a continuous set of rotations because it better reflects real-world usage of the methods. We tested only global rotations; i.e., rotations of the whole image around the center.
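A torchvision sketch of this augmentation, assuming PIL image inputs; with the default `expand=False` the image size is preserved, which produces the slight corner cropping noted in Figure 1.

```python
import torchvision.transforms as T

# Continuous rotations in [0, 360) applied on the fly at training time.
augment = T.Compose([
    T.RandomRotation(degrees=(0, 360)),   # random angle per sample
    T.ToTensor(),
])
```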
In all experiments networks were trained until convergence by monitoring the test accuracy, using the ADAM optimization algorithm with a learning rate of 0.0001 and 1E-9 weight decay for all non-bias terms.
### Network models
We employed a simple convolutional network we will call **SimpleConv** that, while it clearly does not provide state-of-the-art results, is easy to understand and performs well. The network is defined as: Conv1(F) - Conv2(F) - MaxPool1(2x2) - Conv3(F*2) - Conv4(F*2) - MaxPool2(2x2) - Conv5(F*4) - FC1(D) - Relu - BatchNorm - FC(10), where all convolutional filters are 3x3, and there is a ReLU activation function after each convolutional and fully connected layer. F is the number of feature maps of the convolutional layers and D the number of hidden neurons of the fully connected layer. For MNIST, we set F=32 and D=64, while for CIFAR10 F=64 and D=128, matching the original Group CNN implementation (see below).
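A PyTorch sketch of SimpleConv as specified above; `padding=1` ('same' padding for 3x3 filters) and the input sizes are assumptions, since the text does not state the padding scheme.

```python
import torch.nn as nn

def simple_conv(F=32, D=64, in_ch=1, n_classes=10, hw=28):
    """SimpleConv as specified above (MNIST defaults F=32, D=64)."""
    return nn.Sequential(
        nn.Conv2d(in_ch, F, 3, padding=1), nn.ReLU(),
        nn.Conv2d(F, F, 3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(F, 2 * F, 3, padding=1), nn.ReLU(),
        nn.Conv2d(2 * F, 2 * F, 3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(2 * F, 4 * F, 3, padding=1), nn.ReLU(),
        nn.Flatten(),
        nn.Linear(4 * F * (hw // 4) * (hw // 4), D), nn.ReLU(),
        nn.BatchNorm1d(D),
        nn.Linear(D, n_classes),
    )
```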
To test the importance of the Dense layers in providing invariance to rotated samples, we also experimented with an **AllConvolutional** network. This model uses just convolutions and pooling as building blocks, which is of interest to our analysis.
Figure 1: **Rows 1,2) MNIST and rotated MNIST images. 3,4) CIFAR10 and rotated CIFAR10 images. Note that for CIFAR10 some images are slightly cropped by the rotation procedure, while for MNIST the cropping is negligible due to the black border.**
The AllConv network is defined as: Conv1(F) - Conv2(F) - Conv3(F,stride=2x2) - Conv4(F*2) - Conv5(F*2) - Conv6(F*2,stride=2x2) - Conv7(F*2) - Conv8(F*2, 1x1) - ConvClass(10, 1x1) - GlobalAveragePooling(10). Again, all convolutional filters are 3x3, and we place ReLUs after convolutions. For MNIST, F=16 while for CIFAR10, F=96, again matching the original Group CNN implementation.
Then, we chose a model from the two groups described in section 2 - transformation of the input image and transformation of the filters - that also correspond to the alternative strategies of putting the invariance near the input or near the output.
As a representative of the first group we added a **Spatial Transformer Layer** [15] to the convolutional network to reorient the image before the network classifies it; the resulting networks are named **SimpleConvSTN** and **AllConvolutionalSTN**. To keep comparisons fair, we modified the localization network and affine matrix to restrict the transformations to rotations. The localization network consists of a simple CNN with layers: Conv1(16,7x7) - MaxPool1(2x2) - ReLU() - Conv2(16,5x5) - MaxPool2(2x2) - ReLU() - FC(32).
From the modified convolutional methods, we chose **Group CNNs** _[Cohen and Welling, 2016a]_. The resulting networks are named **SimpleGConv** and **AllGConvolutional**, where we simply replaced normal convolutions with group convolutions and added a pooling operation before the classification layers to provide the required invariance to rotation. A Group CNN has 4 times as many feature maps as a regular CNN; to compensate, and as a compromise, we reduced the number of parameters by half.
### Data augmentation with traditional networks
First we measured the performance of a SimpleConv and AllConvolutional with and without data augmentation to obtain a baseline for both base methods.
We trained two instances of each model; one with the normal dataset, and the other with a data-augmented version. We then tested each instance of each model with the test set of the normal and data-augmented variant. Figure 2 shows the results of the experiments for each model/dataset combination.
On MNIST, we can observe that while networks trained with a rotated dataset see a drop in their accuracies of 1%-2% on the unrotated test set, networks trained with the unrotated dataset and tested on the rotated dataset fare much worse (a \(\sim\)55% drop in accuracy). It is surprising that the drop in the first case is so low, especially given that the number of parameters is the same for both networks. This may indicate a redundancy in the filters of the unrotated model. Also, it would seem that the network trained on the unrotated dataset still performs at a \(\sim\)40% accuracy level, four times more than expected by chance (10%). This is partly because some of the samples are naturally invariant to rotations, such as the images for the number 0, or invariant to some rotations, such as the numbers 1 or 8, and because some of the learned features must be naturally invariant to rotation as well.
In the case of CIFAR10 the results show a similar situation, although the drop in accuracy from unrotated to rotated is larger, possibly because the dataset possesses fewer natural invariances than MNIST. Note that to reduce the computational burden, the number of training epochs on CIFAR10 with AllConvolutional was reduced, achieving \(\sim\)80% accuracy instead of the 91% of the authors' original experiments [3].
Still, it is surprising that the AllConvolutional networks can learn the rotated MNIST dataset so well, since convolutions are neither invariant nor equivariant to rotation. This points to the fact that the set of filters learned by the network possibly self-organize during learning to obtain a set of filters that can represent all the rotated variations of the object.
### Comparison with STN and Group CNN
We ran the same experiment as before but using the modified versions of the network. Figure 3 shows the results of the STL versions of the networks.
Figure 2: **Rows 1,2) SimpleConv and AllConvolutional with MNIST. 3,4) SimpleConv and AllConvolutional with CIFAR10. Left Accuracy and loss for each training epoch, on training and test set. Middle Same as left, but with a rotated training set. Right Final test set accuracies for the two models trained and the two variations of the dataset (unrotated, rotated).**
We can see that in all cases the performance of the unrotated model on the rotated dataset is much lower than the original; this is expected since the STL needs data augmentation during training. However, the STL model did not perform noticeably better than the normal data-augmented models (Figure 2).
Figure 4 also shows the results on MNIST and CIFAR10 of the Group CNN models. Similarly to the STN case, the performance is not increased with Group CNNs; however, in the case of the AllGConvolutional network we see that the
Figure 3: **Rows 1,2) SimpleConvSTN and AllConvolutionalSTN with MNIST**. 3,4) SimpleConvSTN and AllConvolutionalSTN with CIFAR10. **Left** Accuracy and loss for each training epoch, on training and test set. **Middle** Same as left, but with a rotated training set. **Right** Final test set accuracies for the two models trained and the two variations of the dataset (unrotated, rotated)
performance of the model trained with the unrotated dataset on the rotated dataset is much greater (+0.1) than for other models, while the same does not happen with the SimpleGConv network. This is possibly due to the fact that the superior representation capacity of the fully connected layers can compensate for the absence of good filters, while in the AllGConvolutional case there is more pressure on the convolutional layers to learn good representations.
Figure 4: **Rows 1,2) SimpleGConv and AllGConvolutional with MNIST. 3,4) SimpleGConv and AllGConvolutional with CIFAR10. Left Accuracy and loss for each training epoch, on training and test set. Middle Same as left, but with a rotated training set. Right Final test set accuracies for the two models trained and the two variations of the dataset (unrotated, rotated)**
While section 3.2 seems to point to the fact that there is a small difference in accuracy when using specialized models versus data augmentation, it could be argued that specialized models can be more efficient during training. Alternatively, if training time is a limiting factor, we could use a pretrained network and retrain some of its layers to achieve invariance. However, a priori it is not clear whether the whole network can/needs to be retrained, or if only some parts of it need to adapt to the rotated examples.
To analyze which layers are amenable to retraining for rotation invariance purposes, we train a base model with an unrotated dataset, then make a copy of the network and retrain parts of it (or all of it) to assess which layers can be retrained.
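A minimal PyTorch sketch of this retraining protocol: freeze all parameters, then unfreeze only the named sub-modules and optimize them with the same hyperparameters as above. The attribute names (e.g., "conv5", "fc1") are assumptions about the model's structure.

```python
import torch

def retrain_layers(model, layer_names, lr=1e-4):
    """Freeze the whole model, then unfreeze only the named sub-modules."""
    for p in model.parameters():
        p.requires_grad = False
    params = []
    for name in layer_names:                    # e.g. ["conv5", "fc1"] (assumed names)
        for p in getattr(model, name).parameters():
            p.requires_grad = True
            params.append(p)
    # Same optimizer settings as the training setup described above.
    return torch.optim.Adam(params, lr=lr, weight_decay=1e-9)
```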
Figure 5: Retraining experiments for SimpleConv and AllConvolutional on MNIST. Accuracy on the test sets after retraining subsets of layers of the network. The labels "conv"/"all_conv" and "fc" mean that all convolutional or fully connected layers were retrained.
As Figure 5 shows, retraining for rotation invariance shows a similar trend to retraining for transfer learning: higher layers with more high-level features are a better target for retraining individually, since they are closer to the output layers and can affect them more. The fact that retraining the penultimate layers can bring performance back means that there is a redundancy of information in previous layers, since rotated versions of the images can be reconstructed from the output of the first layers. Since the original networks were trained with unrotated examples, this means that either the network naturally learns equivariant filters, or equivariance in filters is not so important for classifying rotated objects.
However, it is surprising that retraining the final layer in both cases leads to a reversal of this situation: the performance obtained is lower than when retraining other layers. In the case of the SimpleConv network, this is possibly due to the action of the previous fc layer collapsing the equivariances before the final layer can translate them into a decision; that is, the fc1 layer must be losing _some_ information. In the case of the AllConvolutional network, the class_conv layer performs a simple 1x1 convolution to collapse all feature maps to 10, and so probably cannot recapture the invariances, while retraining the previous layer with many more 3x3 convolutions can.
Figure 6 shows the results of the same experiment on CIFAR10. The results are similar to those of MNIST, except for the lower general accuracy given the difficulty of CIFAR10.
## 4 Conclusions
Rotational invariance is a desired property for many applications in image classification. Data augmentation is a simple way of training CNNs models, which currently hold the state of the art, to achieve invariance. There are many modified CNNs models that attempt to make this task easier.
We compared data augmentation with modified models, maintaining the same number of parameters in each case. While data augmentation requires more training, it can reach similar accuracies to the other methods. Furthermore, the test time is not affected by additional localization networks or convolutions.
We also performed retraining experiments with data augmentation to shed some light on how and where a network can learn rotational invariance. By retraining layers separately, we found that some invariance can be added in every layer, although the layers nearest the end of the network, whether fully connected or convolutional, are much better at gaining invariance. This finding reinforces the
Figure 6: Retraining experiments for SimpleConv and AllConvolutional on CIFAR10. Accuracy on the test sets after retraining subsets of layers of the network. The labels "conv"/"all_conv" and "fc" mean that all convolutional or fully connected layers were retrained.
notion that lower layers of networks learn redundant filters and so can be capitalized for other tasks (rotation invariant tasks in this case) and also the notion that invariance should be added at the end of the network if possible.
We believe more can be learned about CNNs by studying their learned invariances. Possible extensions of this work include deliberately introducing invariances early or late in the network to see how they affect the capacity of the network to learn rotations. It would be useful as well to compare the convolutional filters learned to identify equivariances between their outputs, as well as see via their activations which are rotationally invariant. Systematic experimentation on datasets where samples are rotated naturally, as well as naturally rotation invariant, and comparison to datasets where the rotation is synthetic such as those in this work are needed. To reduce the experimentation burden, we have chosen to keep the number of parameters constant in the experiments, but it would also be desirable to see the impact of data augmentation when the number of parameters is constrained.
|
2310.04213 | Topology-Aware Neural Networks for Fast Contingency Analysis of Power
Systems | Training Neural Networks able to capture the topology changes of the power
grid is one of the significant challenges towards the adoption of machine
learning techniques for N-k security computations and a wide range of other
operations that involve grid reconfiguration. As the number of N-k scenarios
increases exponentially with increasing system size this renders such problems
extremely time-consuming to solve with traditional solvers. In this paper, we
combine Physics-Informed Neural Networks with both a Guided-Dropout (GD) Neural
Network (which associates dedicated neurons with specific line
connections/disconnections) and an edge-varrying Graph Neural Neural Network
(GNN) architecture to learn the setpoints for a grid that considers all
probable single-line reconfigurations (all critical N-1 scenarios) and
subsequently apply the trained models to N-k scenarios.We demonstrate how
incorporating the underlying physical equations for the network equations
within the training procedure of the GD and the GNN architectures, performs
with N-1, N-2, and N-3 case studies. Using the AC Power Flow as a guiding
application, we test our methods on the 14-bus, 30-bus, 57-bus, and 118-bus
systems. We find that these topology-aware NNs not only achieve the task of
contingency screening with satisfactory accuracy but do this at up to 1000
times faster than the Newton Raphson power flow solver. Moreover, our results
provide a comparison of the GD and GNN models in terms of accuracy and
computational speed and provide recommendations on their adoption for
contingency analysis of power systems. | Agnes M. Nakiganda, Catherine Cheylan, Spyros Chatzivasileiadis | 2023-10-06T13:00:36Z | http://arxiv.org/abs/2310.04213v2 | # Topology-Aware Neural Networks for Fast Contingency Analysis of Power Systems
###### Abstract
Training Neural Networks able to capture the topology changes of the power grid is one of the significant challenges towards the adoption of machine learning techniques for N-\(k\) security computations and a wide range of other operations that involve grid reconfiguration. As the number of N-\(k\) scenarios increases exponentially with system size, such problems become extremely time-consuming to solve with traditional solvers. In this paper, we combine Physics-Informed Neural Networks with both a Guided-Dropout (GD) neural network (which associates dedicated neurons with specific line connections/disconnections) and an edge-varying Graph Neural Network (GNN) architecture to learn the setpoints for a grid that considers all probable single-line reconfigurations (all critical N\(-1\) scenarios) and subsequently apply the trained models to N-\(k\) scenarios. We demonstrate how incorporating the underlying physical network equations within the training procedure of the GD and GNN architectures performs on N\(-1\), N\(-2\), and N\(-3\) case studies. Using the AC Power Flow as a guiding application, we test our methods on the 14-bus, 30-bus, 57-bus, and 118-bus systems. We find that these topology-aware NNs not only achieve the task of contingency screening with satisfactory accuracy but do so 100 to 1000 times faster than the Newton-Raphson power flow solver. Moreover, our results provide a comparison of the GD and GNN models in terms of accuracy and computational speed and provide recommendations on their adoption for contingency analysis of power systems.
AC Power Flow, Graph Neural Network, Guided Dropout, Network Topology, Physics Informed Neural Network
## I Introduction
The power grid is rapidly transforming and incorporating numerous devices that operate together to maintain a balance of supply and demand. Now more than ever, it is vital that system operators can ascertain that potentially critical contingency scenarios are promptly screened and analyzed, and mitigation measures devised. Power systems today are designed with inherent N\(-1\) operational reliability; however, as networks grow larger and incorporate more devices, N-\(k\) system security must be adequately managed such that a reliable and resilient grid can be maintained [1]. Moreover, if not sufficiently handled, such multiple contingencies can result in voltage collapse and cascading failures [2, 3]. Traditionally, numerical methods such as Newton-Raphson have served as a means to solve the power flow problem for critical contingency screening analysis; however, the computing time needed to handle the combinatorial explosion of N-\(k\) scenarios becomes prohibitive with such techniques.
Machine Learning (ML) models including Decision Trees, Support Vector Machines, Random Forests, and Neural Networks, to mention but a few, have been shown to handle complex power systems problems tractably and efficiently [4, 5, 6]. These methods eliminate the computationally intensive iterative procedures of traditional power flow solvers and scale well with increasing grid sizes. However, the downside of many ML algorithms is that they are often unable to consider grid topologies beyond the one topology on which they have been trained, i.e., they do not incorporate variables that relate to the connection/disconnection of power lines or the reconfiguration of buses. Moreover, training a single model for each potential N-\(1\) topology would be unrealistic and impractical. This affects their ability to generalize to varying grid configurations, which is an inherent aspect of the contingency assessment problem in power systems, and therefore hinders their adoption in real systems.
In order to leverage the enormous computational efficiency of ML methods for application to the N-\(k\) power flow problem, various ML-based architectures have been proposed. In [7], a one-hot encoding that adds extra binary variables to represent the connection/disconnection of components was presented. However, results therein show this method may not scale well to larger systems with hundreds of components. In [7] and [8], the authors introduce the so-called "Guided Dropout" method to address the topology change problem. "Guided Dropout" _sparsifies_ the neural network model by introducing |
2301.09710 | Training End-to-End Unrolled Iterative Neural Networks for SPECT Image
Reconstruction | Training end-to-end unrolled iterative neural networks for SPECT image
reconstruction requires a memory-efficient forward-backward projector for
efficient backpropagation. This paper describes an open-source, high
performance Julia implementation of a SPECT forward-backward projector that
supports memory-efficient backpropagation with an exact adjoint. Our Julia
projector uses only ~5% of the memory of an existing Matlab-based projector. We
compare unrolling a CNN-regularized expectation-maximization (EM) algorithm
with end-to-end training using our Julia projector with other training methods
such as gradient truncation (ignoring gradients involving the projector) and
sequential training, using XCAT phantoms and virtual patient (VP) phantoms
generated from SIMIND Monte Carlo (MC) simulations. Simulation results with two
different radionuclides (90Y and 177Lu) show that: 1) For 177Lu XCAT phantoms
and 90Y VP phantoms, training unrolled EM algorithm in end-to-end fashion with
our Julia projector yields the best reconstruction quality compared to other
training methods and OSEM, both qualitatively and quantitatively. For VP
phantoms with 177Lu radionuclide, the reconstructed images using end-to-end
training are in higher quality than using sequential training and OSEM, but are
comparable with using gradient truncation. We also find there exists a
trade-off between computational cost and reconstruction accuracy for different
training methods. End-to-end training has the highest accuracy because the
correct gradient is used in backpropagation; sequential training yields worse
reconstruction accuracy, but is significantly faster and uses much less memory. | Zongyu Li, Yuni K. Dewaraja, Jeffrey A. Fessler | 2023-01-23T20:33:09Z | http://arxiv.org/abs/2301.09710v1 | # Training End-to-End Unrolled Iterative Neural Networks for SPECT Image Reconstruction
###### Abstract
Training end-to-end unrolled iterative neural networks for SPECT image reconstruction requires a memory-efficient forward-backward projector for efficient backpropagation. This paper describes an open-source, high performance Julia implementation of a SPECT forward-backward projector that supports memory-efficient backpropagation with an exact adjoint. Our Julia projector uses only \(\sim\)5% of the memory of an existing Matlab-based projector. We compare unrolling a CNN-regularized expectation-maximization (EM) algorithm with end-to-end training using our Julia projector with other training methods such as gradient truncation (ignoring gradients involving the projector) and sequential training, using XCAT phantoms and virtual patient (VP) phantoms generated from SIMIND Monte Carlo (MC) simulations. Simulation results with two different radionuclides (\({}^{90}\)Y and \({}^{177}\)Lu) show that: 1) For \({}^{177}\)Lu XCAT phantoms and \({}^{90}\)Y VP phantoms, training unrolled EM algorithm in end-to-end fashion with our Julia projector yields the best reconstruction quality compared to other training methods and OSEM, both qualitatively and quantitatively. For VP phantoms with \({}^{177}\)Lu radionuclide, the reconstructed images using end-to-end training are in higher quality than using sequential training and OSEM, but are comparable with using gradient truncation. We also find there exists a trade-off between computational cost and reconstruction accuracy for different training methods. End-to-end training has the highest accuracy because the correct gradient is used in backpropagation; sequential training yields worse reconstruction accuracy, but is significantly faster and uses much less memory.
End-to-end learning, regularized model-based image reconstruction, backpropagatable forward-backward projector, quantitative SPECT.
## I Introduction
Single photon emission computerized tomography (SPECT) is a nuclear medicine technique that images spatial distributions of radioisotopes and plays a pivotal role in clinical diagnosis and in estimating radiation-absorbed doses in nuclear medicine therapies [1, 2]. For example, quantitative SPECT imaging with Lutetium-177 (\({}^{177}\)Lu) in targeted radionuclide therapy (such as \({}^{177}\)Lu DOTATATE) is important in determining dose-response relationships in tumors and holds great potential for dosimetry-based individualized treatment. Additionally, quantitative Yttrium-90 (\({}^{90}\)Y) bremsstrahlung SPECT imaging is valuable for safety assessment and absorbed dose verification after \({}^{90}\)Y radioembolization in liver malignancies. However, SPECT imaging suffers from noise and limited spatial resolution due to the collimator response; the resulting reconstruction problem is hence ill-posed and challenging to solve.
Numerous reconstruction algorithms have been proposed for SPECT reconstruction, of which the most popular ones are model-based image reconstruction algorithms such as maximum likelihood expectation maximization (MLEM) [3] and ordered-subset EM (OSEM) [4]. These methods first construct a mathematical model for the SPECT imaging system, then maximize the (log-)likelihood for a Poisson noise model. Although MLEM and OSEM have achieved great success in clinical use, they have a trade-off between recovery and noise. To address that trade-off, researchers have proposed alternatives such as regularization-based (or maximum a posteriori in Bayesian setting) reconstruction methods [5, 6, 7]. For example, Panin _et al._[5] proposed total variation (TV) regularization for SPECT reconstruction. However, TV regularization may lead to "blocky" images and over-smoothing the edges. One way to overcome blurring edges is to incorporate anatomical boundary side information from CT images [8], but that method requires accurate organ segmentation. Chun _et al._[9] used non-local means (NLM) filters that exploit the self-similarity of patches in images for regularization, yet that method is computationally expensive and hence less practical. In general, choosing an appropriate regularizer can be challenging; moreover, these traditional regularized algorithms may lack generalizability to images that do not follow assumptions made by the prior.
With the recent success of deep learning (DL) and especially convolutional neural networks (CNN), DL methods have been reported to outperform conventional algorithms in many medical imaging applications such as in MRI [10, 11, 12], CT [13, 14] and PET reconstruction [15, 16, 17]. However, fewer DL approaches to SPECT reconstruction appear in the literature. Reference [18] proposed "SPECTnet" with a two-step training strategy that learns the transformation from projection space to image space as an alternative to the traditional OSEM algorithm. Reference [19] also proposed a DL method that
can directly reconstruct the activity image from the SPECT projection data, even with reduced view angles. Reference [20] trained a neural network that maps non-attenuation-corrected SPECT images to those corrected by CT images as a post-processing procedure to enhance the reconstructed image quality.
Though promising results were reported with these methods, most of them worked in 2D whereas 3D is used in practice [18, 19]. Furthermore, there has yet to be an investigation of end-to-end training of CNN regularizers that are embedded in unrolled SPECT iterative statistical algorithms such as CNN-regularized EM. End-to-end training is popular in machine learning and other medical imaging fields such as MRI image reconstruction [21], and is reported to meet data-driven regularization for inverse problems [22]. But for SPECT image reconstruction, end-to-end training is nontrivial to implement due to its complicated system matrix. Alternative training methods have been proposed, such as sequential training [23, 24, 25, 26] and gradient truncation [27]; these methods were shown to be effective, though they could yield sub-optimal reconstruction results due to approximations to the training loss gradient. Another approach is to construct a neural network that also models the SPECT system matrix, like in "SPECTnet" [18], but this approach lacks interpretability compared to algorithms like unrolled CNN-regularized EM, i.e., if one sets the regularization parameter to zero, then the latter becomes identical to the traditional EM.
As an end-to-end training approach has not yet been investigated for SPECT image reconstruction, this paper first describes a SPECT forward-backward projector written in the open-source and high performance Julia language that enables efficient auto-differentiation. Then we compare the end-to-end training approach with other non-end-to-end training methods.
The structure of this article is as follows. Section II describes the implementation of our Julia projector and discusses end-to-end training and other training methods for the unrolled EM algorithm. Section III compares the accuracy, speed and memory use of our Julia projector with Monte Carlo (MC) and a Matlab-based projector, and then compares reconstructed images with end-to-end training versus sequential training and gradient truncation on different datasets (XCAT and VP phantoms), using qualitative and quantitative evaluation metrics. Section IV and V conclude this paper and discuss future works.
_Notation:_ Bold upper/lower case letters (e.g., \(\mathbf{A}\), \(\mathbf{x}\), \(\mathbf{y}\), \(\mathbf{b}\)) denote matrices and column vectors, respectively. Italics (e.g., \(\mu,y,b\)) denote scalars. \(y_{i}\) and \(b_{i}\) denote the \(i\)th element in vector \(\mathbf{y}\) and \(\mathbf{b}\), respectively. \(\mathbb{R}^{N}\) and \(\mathbb{C}^{N}\) denote \(N\)-dimensional real/complex normed vector space, respectively. \((\cdot)^{*}\) denotes the complex conjugate and \((\cdot)^{\prime}\) denotes Hermitian transpose.
## II Methods
This section summarizes the Julia SPECT projector, a DL-based image reconstruction method as well as the dataset used in experiments and other experiment setups.
### _Implementation of Julia SPECT projector_
Our Julia implementation of SPECT projector is based on [28], modeling parallel-beam collimator geometries. Our projector also accounts for attenuation and depth-dependent collimator response. We did not model the scattering events like Compton scatter and coherent scatter of high energy gamma rays within the object. Fig. 1 illustrates the SPECT imaging system modeled in this paper.
For the forward projector, at each rotation angle, we first rotate the 3D image matrix \(\mathbf{x}\in\mathbb{R}^{n_{x}\times n_{y}\times n_{z}}\) according to the third dimension by its projection angle \(\theta_{l}\) (typically \(2\pi(l-1)/n_{\text{view}}\)); \(l\) denotes the view index, which ranges from 1 to \(n_{\text{view}}\) and \(n_{\text{view}}\) denotes the total number of projection views. We implemented and compared (results shown in Section III) both bilinear interpolation and 3-pass 1D linear interpolation [29] with zero padding boundary condition for image rotation. For attenuation correction, we first rotated the 3D attenuation map \(\mathbf{\mu}\in\mathbb{R}^{n_{x}\times n_{y}\times n_{z}}\) (obtained from transmission tomography) also by \(\theta_{l}\), yielding a rotated 3D array \(\tilde{\mu}(i,j,k;l)\), where \(i,j,k\) denotes the 3D voxel coordinate. Assuming \(n_{y}\) is the index corresponding to the closest plane of \(\mathbf{x}\) to the detector, then we model the accumulated attenuation factor \(\bar{\mu}\) for each view angle as
\[\bar{\mu}(i,j,k;l)=\mathrm{e}^{-\Delta_{y}\left(\frac{1}{2}\tilde{\mu}(i,j,k;l)+\sum_{s=j+1}^{n_{y}}\tilde{\mu}(i,s,k;l)\right)}, \tag{1}\]
where \(\Delta_{y}\) denotes the voxel size for the (first and) second coordinate. Next, for each \(y\) slice (an \((x,z)\) plane for a given \(j\) index) of the rotated and attenuated image, we convolve with the appropriate slice of the depth-dependent point spread function \(\mathbf{p}\in\mathbb{R}^{p_{x}\times p_{z}\times n_{y}\times n_{\text{view}}}\) using a 2D fast Fourier transform (FFT). Here we use replicate padding for both the
Fig. 1: SPECT imaging model for parallel-beam collimators, with attenuation and depth-dependent collimator point spread response.
\(i\) and \(k\) coordinates. The view-dependent PSF accommodates non-circular orbits. Finally, the forward projection operation simply sums the rotated, blurred and attenuated activity image \(\mathbf{x}\) along the second coordinate \(j\). Algorithm 1 summarizes the forward projector, where \(\circledast\) denotes a 2D convolution operation.
```
Input: 3D image \(\mathbf{x}\in\mathbb{R}^{n_{x}\times n_{y}\times n_{z}}\), 3D attenuation map \(\mathbf{\mu}\in\mathbb{R}^{n_{x}\times n_{y}\times n_{z}}\), 4D point spread function \(\mathbf{p}\in\mathbb{R}^{p_{x}\times p_{z}\times n_{y}\times n_{\rm view}}\), voxel size \(\Delta_{y}\).
Initialize: \(\mathbf{v}\in\mathbb{R}^{n_{x}\times n_{z}\times n_{\rm view}}\) as all zeros.
for \(l=1,\ldots,n_{\rm view}\) do
  \(\tilde{\mathbf{x}}\leftarrow\) rotate \(\mathbf{x}\) by \(\theta_{l}\)
  \(\tilde{\mathbf{\mu}}\leftarrow\) rotate \(\mathbf{\mu}\) by \(\theta_{l}\)
  for \(j=1,\ldots,n_{y}\) do
    \(\bar{\mu}\leftarrow\) calculate by (1) using \(\tilde{\mathbf{\mu}}\)
    \(\tilde{x}(i,j,k)\leftarrow\tilde{x}(i,j,k)\cdot\bar{\mu}(i,j,k;l)\)
    \(v(i,k,l)\mathrel{+}=\tilde{x}(i,j,k)\circledast p(i,k;j,l)\)
  end for
end for
Output: projection views \(\mathbf{v}\in\mathbb{R}^{n_{x}\times n_{z}\times n_{\rm view}}\)
```
**Algorithm 1** SPECT forward projector
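To make the data flow of Algorithm 1 concrete, the sketch below implements the rotate-attenuate-blur-sum loop in NumPy/SciPy. The function name `forward_project`, the array layout, and the use of `scipy.ndimage.rotate` and `scipy.signal.fftconvolve` (which zero-pads, unlike the replicate padding above) are illustrative assumptions, not the paper's Julia implementation.

```python
import numpy as np
from scipy.ndimage import rotate
from scipy.signal import fftconvolve

def forward_project(x, mu, psf, angles_deg, dy):
    """x, mu: (nx, ny, nz) activity / attenuation; psf: (px, pz, ny, nview);
    angles_deg: projection angles in degrees; dy: voxel size along y."""
    nx, ny, nz = x.shape
    views = np.zeros((nx, nz, len(angles_deg)))
    for l, theta in enumerate(angles_deg):
        xr = rotate(x, theta, axes=(0, 1), reshape=False, order=1)    # rotate activity
        mur = rotate(mu, theta, axes=(0, 1), reshape=False, order=1)  # rotate mu-map
        for j in range(ny):  # j = ny - 1 is the plane closest to the detector
            # accumulated attenuation from plane j to the detector, as in Eq. (1)
            acc = 0.5 * mur[:, j, :] + mur[:, j + 1:, :].sum(axis=1)
            slab = xr[:, j, :] * np.exp(-dy * acc)
            # depth-dependent blur, then accumulate along the second coordinate
            views[:, :, l] += fftconvolve(slab, psf[:, :, j, l], mode="same")
    return views
```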
All of these steps are linear, so hereafter, we use \(\mathbf{A}\) to denote the forward projector, though it is not stored explicitly as a matrix. As each step is linear, each step has an adjoint operation. So the backward projector \(\mathbf{A}^{\prime}\) is the adjoint of \(\mathbf{A}\) that satisfies
\[\langle\mathbf{A}\mathbf{x},\mathbf{y}\rangle=\langle\mathbf{x},\mathbf{A}^{\prime}\mathbf{y}\rangle, \quad\forall\mathbf{x},\mathbf{y}. \tag{2}\]
The exact adjoint of (discrete) image rotation is not simply a discrete rotation of the image by \(-\theta_{l}\). Instead, one should also consider the adjoint of linear interpolation. For the adjoint of convolution, we assume the point spread function is symmetric along coordinates \(i\) and \(k\) so that the adjoint convolution operator is just the forward convolution operator along with the adjoint of replicate padding. Algorithm 2 summarizes the SPECT backward projector.
```
Input: array of 2D projection views \(\mathbf{v}\in\mathbb{R}^{n_{x}\times n_{z}\times n_{\rm view}}\), 3D attenuation map \(\mathbf{\mu}\in\mathbb{R}^{n_{x}\times n_{y}\times n_{z}}\), 4D point spread function \(\mathbf{p}\in\mathbb{R}^{p_{x}\times p_{z}\times n_{y}\times n_{\rm view}}\), voxel size \(\Delta_{y}\).
Initialize: \(\mathbf{x}\in\mathbb{R}^{n_{x}\times n_{y}\times n_{z}}\) as all zeros.
for \(l=1,\ldots,n_{\rm view}\) do
  \(\tilde{\mathbf{\mu}}\leftarrow\) rotate \(\mathbf{\mu}\) by \(\theta_{l}\)
  for \(j=1,\ldots,n_{y}\) do
    \(\bar{\mu}\leftarrow\) calculate by (1) using \(\tilde{\mathbf{\mu}}\)
    \(\tilde{v}(i,k,l)\leftarrow\) adjoint of \(v(i,k,l)\circledast p(i,k;j,l)\)
    \(\tilde{x}(i,j,k)\leftarrow\tilde{v}(i,k,l)\cdot\bar{\mu}(i,j,k;l)\)
  end for
  \(\mathbf{x}\mathrel{+}=\) adjoint rotate of \(\tilde{\mathbf{x}}\) by \(\theta_{l}\)
end for
Output: \(\mathbf{x}\in\mathbb{R}^{n_{x}\times n_{y}\times n_{z}}\)
```
**Algorithm 2** SPECT backward projector
To accelerate the for-loop process, we used multi-threading to enable projecting or backprojecting multiple angles at the same time. To reduce memory use, we pre-allocated necessary arrays and used fully in-place operations inside the for-loop in forward and backward projection. To further accelerate auto-differentiation, we customized the chain rule to use the linear operator \(\mathbf{A}\) or \(\mathbf{A}^{\prime}\) as the Jacobian when calling \(\mathbf{A}\mathbf{x}\) or \(\mathbf{A}^{\prime}\mathbf{y}\) during backpropagation. We implemented and tested our projector in Julia v1.6; we also implemented a GPU version in Julia (using CUDA.jl) that runs efficiently on a GPU by eliminating explicit scalar indexing. For completeness, we also provide a PyTorch version, but without multi-threading support, in-place operations, or the exact adjoint of image rotation.
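The customized chain rule described above can be mimicked outside Julia; the hedged PyTorch sketch below registers the adjoint as the vector-Jacobian product of the forward projector, so backpropagation applies \(\mathbf{A}^{\prime}\) directly instead of differentiating through the rotation and convolution steps. The callables `fwd` and `adj` are assumed black-box projector functions.

```python
import torch

class LinearProjector(torch.autograd.Function):
    """Expose a matched linear pair (A, A') to autodiff so backprop applies A'
    directly instead of tracing through rotations and convolutions."""

    @staticmethod
    def forward(ctx, x, fwd, adj):
        ctx.adj = adj
        return fwd(x)

    @staticmethod
    def backward(ctx, grad_out):
        # For a linear operator, the vector-Jacobian product is simply A' * grad.
        return ctx.adj(grad_out), None, None

# usage sketch: views = LinearProjector.apply(x, A_forward, A_adjoint)
```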
### _Unrolled CNN-regularized EM algorithm_
Model-based image reconstruction algorithms seek to estimate image \(\mathbf{x}\in\mathbb{R}^{N}\) from noisy measurements \(\mathbf{y}\in\mathbb{R}^{M}\) with imaging model \(\mathbf{A}\in\mathbb{R}^{M\times N}\). In SPECT reconstruction, the measurements \(\mathbf{y}\) are often modeled by
\[\mathbf{y}\sim\text{Poisson}(\mathbf{A}\mathbf{x}+\bar{\mathbf{r}}), \tag{3}\]
where \(\bar{\mathbf{r}}\in\mathbb{R}^{M}\) denotes the vector of means of background events such as scatters. Combining regularization with the Poisson negative log-likelihood yields the following optimization problem:
\[\hat{\mathbf{x}}=\operatorname*{arg\,min}_{\mathbf{x}\geq\mathbf{0}}f(\mathbf{x})+R(\mathbf{x}),\qquad f(\mathbf{x})\triangleq\mathbf{1}^{\prime}(\mathbf{A}\mathbf{x}+\bar{\mathbf{r}})-\mathbf{y}^{\prime}\log(\mathbf{A}\mathbf{x}+\bar{\mathbf{r}}), \tag{4}\]
where \(f(\mathbf{x})\) is the data fidelity term and \(R(\mathbf{x})\) denotes the regularizer. For deep learning regularizers, we follow [23] and formulate \(R(\mathbf{x})\) as
\[R(\mathbf{x})\triangleq\frac{\beta}{2}\|\mathbf{x}-\mathbf{g}_{\mathbf{\theta}}(\mathbf{x})\|_{2}^ {2}, \tag{5}\]
where \(\beta\) denotes the regularization parameter; \(\mathbf{g}_{\mathbf{\theta}}\) denotes a neural network with parameter \(\mathbf{\theta}\) that is trained to learn to enhance the image quality.
Based on (4), a natural reconstruction approach is to apply variable splitting with \(\mathbf{u}=\mathbf{g}_{\mathbf{\theta}}(\mathbf{x})\) and then alternately update the images \(\mathbf{x}\) and \(\mathbf{u}\) as follows
\[\mathbf{u}_{k+1} =\mathbf{g}_{\mathbf{\theta}}(\mathbf{x}_{k}),\] \[\mathbf{x}_{k+1} =\operatorname*{arg\,min}_{\mathbf{x}\geq 0}f(\mathbf{x})+\frac{\beta}{2}\| \mathbf{x}-\mathbf{u}_{k+1}\|_{2}^{2}, \tag{6}\]
where subscript \(k\) denotes the iteration number. To minimize (6), we used the EM-surrogate from [30] as summarized in [23], leading to the following vector update:
\[\hat{\mathbf{x}}_{k} =\frac{1}{2\beta}\left(-\mathbf{d}(\beta)+\sqrt{\mathbf{d}(\beta)^{2}+4 \beta\mathbf{x}_{k}\odot\mathbf{e}(\mathbf{x}_{k})}\right), \tag{7}\] \[\mathbf{d}(\beta)\triangleq\mathbf{A}^{\prime}\mathbf{1}-\beta\mathbf{u}_{k}, \quad\mathbf{e}(\mathbf{x}_{k})\triangleq\mathbf{A}^{\prime}(\mathbf{y}\oslash(\mathbf{A}\mathbf{x}_{k}+ \bar{\mathbf{r}}))\,, \tag{8}\]
where \(\odot\) and \(\oslash\) denote element-wise multiplication and division, respectively. To compute \(\mathbf{x}_{k+1}\), one must substitute \(\hat{\mathbf{x}}_{k}\) back into \(\mathbf{e}(\cdot)\) in (8), and repeat. Hereafter, we refer to
(6) as one outer iteration and (7) as one inner EM iteration. Algorithm 3 summarizes the CNN-regularized EM algorithm.
```
Input: 3D projection measurements \(\mathbf{y}\), 3D background measurements \(\bar{\mathbf{r}}\), system model \(\mathbf{A}\), initial guess \(\mathbf{x}_{0}\), deep neural network \(\mathbf{g}_{\mathbf{\theta}}\), outer iterations \(K\).
for \(k=0,\ldots,K-1\) do
  \(\mathbf{u}_{k+1}=\mathbf{g}_{\mathbf{\theta}}(\mathbf{x}_{k})\)
  \(\mathbf{x}_{k+1}\leftarrow\) repeat (7) until the convergence tolerance or maximum number of inner iterations is reached
end for
Output: \(\mathbf{x}_{K}\)
```
**Algorithm 3** SPECT CNN-regularized EM algorithm
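A hedged PyTorch sketch of one pass through Algorithm 3 is given below; `A`/`At` denote the autodiff-aware projector pair, `cnns` holds one network per outer iteration (non-shared weights), and all names are assumptions rather than the paper's API.

```python
import torch

def em_update(x, u, A, At, y, rbar, beta, n_inner=1):
    d = At(torch.ones_like(y)) - beta * u                 # d(beta) in Eq. (8)
    for _ in range(n_inner):
        e = At(y / (A(x) + rbar))                         # e(x_k) in Eq. (8)
        x = (-d + torch.sqrt(d * d + 4 * beta * x * e)) / (2 * beta)  # Eq. (7)
    return x

def cnn_regularized_em(x0, y, rbar, A, At, cnns, beta=1.0):
    x = x0                                                # e.g. an OSEM warm start
    for g in cnns:                                        # K unrolled outer iterations
        u = x + g(x)                                      # residual-learning regularizer
        x = em_update(x, u, A, At, y, rbar, beta)
    return x
```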
To train \(\mathbf{g_{\theta}}\), the most direct way is to unroll Algorithm 3 and train end-to-end with an appropriate target; this supervised approach requires backpropagating through the SPECT system model, which is not trivial to implement with previous SPECT projection tools due to memory issues. Non-end-to-end training methods, e.g., sequential training [23], first train \(\mathbf{u}_{k}\) against the target and then plug it into (7) at each iteration. This method must use non-shared weights for the neural network at each iteration. Another method is gradient truncation [27], which ignores the gradient involving the system matrix \(\mathbf{A}\) and its adjoint \(\mathbf{A}^{\prime}\) during backpropagation. Both of these training methods, though reported to be effective, may be sub-optimal because they approximate the overall training loss gradients.
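The gradient-truncation variant can be sketched (again, an assumption-laden illustration, not the paper's code) by evaluating the projector terms under `torch.no_grad()`, so gradients involving \(\mathbf{A}\) and \(\mathbf{A}^{\prime}\) are ignored while the CNN output \(\mathbf{u}\) keeps its gradient:

```python
import torch

def em_update_truncated(x, u, A, At, y, rbar, beta):
    with torch.no_grad():                 # treat projector terms as constants
        a1 = At(torch.ones_like(y))
        e = At(y / (A(x) + rbar))
    d = a1 - beta * u                     # u = g_theta(x) keeps its gradient
    return (-d + torch.sqrt(d * d + 4 * beta * x * e)) / (2 * beta)
```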
### _Phantom Dataset and Simulation Setup_
We used simulated XCAT phantoms [31] and virtual patient phantoms for experiment results presented in Section III. Each XCAT phantom was simulated to approximately follow the activity distributions observed when imaging patients after \({}^{177}\mathrm{Lu}\) DOTATATE therapy. We set the image size to \(128\times 128\times 80\) with voxel size \(4.8\times 4.8\times 4.8\mathrm{mm}^{3}\). Tumors of various shapes and sizes (5-100mL) were located in the liver as is typical for patients undergoing this therapy.
For virtual patient phantoms, we consider two radionuclides: \({}^{177}\mathrm{Lu}\) and \({}^{90}\mathrm{Y}\). For \({}^{177}\mathrm{Lu}\) phantoms, the true images were from PET/CT scans of patients who underwent diagnostic \({}^{68}\)Ga DOTATATE PET/CT imaging (Siemens Biograph mCT) to determine eligibility for \({}^{177}\mathrm{Lu}\) DOTATATE therapy. The \({}^{68}\)Ga DOTATATE distribution in patients is expected to be similar to \({}^{177}\mathrm{Lu}\) and hence can provide a reasonable approximation to the activity distribution of \({}^{177}\mathrm{Lu}\) in patients for DL training purposes but at higher resolution. The PET images had size \(200\times 200\times 577\) and voxel size \(4.073\times 4.073\times 2\) mm\({}^{3}\) and were obtained from our Siemens mCT (resolution is 5-6 mm FWHM [32]) and reconstructed using the standard clinic protocol: 3D OSEM with three iterations, 21 subsets, including resolution recovery, time-of-flight, and a 5mm (FWHM) Gaussian post-reconstruction filter. The density maps were also generated using the experimentally derived CT-to-density calibration curve.
For \({}^{90}\mathrm{Y}\) phantoms, the true activity images were reconstructed (using a previously implemented 3D OSEM reconstruction with CNN-based scatter estimation [33]) from \({}^{90}\mathrm{Y}\) SPECT/CT scans of patients who underwent \({}^{90}\mathrm{Y}\) microsphere radioembolization in our clinic.
In total, we simulated 4 XCAT phantoms, 8 \({}^{177}\mathrm{Lu}\) and 8 \({}^{90}\mathrm{Y}\) virtual patient phantoms. We repeated all of our experiments 3 times with different noise realizations. All image data have University of Michigan Institutional Review Board (IRB) approval for retrospective analysis. For all simulated phantoms, we selected the center slices covering the lung, liver and kidney corresponding to SPECT axial FOV (39cm).
Then we ran the SIMIND Monte Carlo (MC) program [34] to generate the radial position of the SPECT camera for 128 view angles. The SIMIND model parameters for \({}^{177}\mathrm{Lu}\) were based on \({}^{177}\mathrm{Lu}\) DOTATATE patient imaging in our clinic (Siemens Intevo with medium energy collimators, a 5/8" crystal, a 20% photopeak window at 208 keV, and two adjacent 10% scatter windows) [35]. For \({}^{90}\mathrm{Y}\), a high-energy collimator, 5/8" crystal, and a 105 to 195 keV acquisition energy window was modeled as in our clinical protocol for \({}^{90}\mathrm{Y}\) bremsstrahlung imaging. Next we approximated the point spread functions for \({}^{177}\mathrm{Lu}\) and \({}^{90}\mathrm{Y}\) by simulating a point source at 6 different distances (20, 50, 100, 150, 200, 250mm) and then fitting a 2D Gaussian distribution at each distance. The camera orbit was assumed to be non-circular (auto-contouring mode in clinical systems) with the minimum distance between the phantom surface and detector set at 1 cm.
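A minimal sketch of the PSF fitting step follows, assuming a centered, isotropic 2D Gaussian and using `scipy.optimize.curve_fit`; the exact parameterization used in the paper is not specified here, so this is only illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(xy, amp, sigma):
    x, z = xy
    return amp * np.exp(-(x ** 2 + z ** 2) / (2 * sigma ** 2))

def fit_psf_width(img, pixel_mm):
    """Fit one simulated point-source image; returns the Gaussian width in mm."""
    n0, n1 = img.shape
    cx = (np.arange(n0) - n0 // 2) * pixel_mm
    cz = (np.arange(n1) - n1 // 2) * pixel_mm
    X, Z = np.meshgrid(cx, cz, indexing="ij")
    (amp, sigma), _ = curve_fit(gauss2d, (X.ravel(), Z.ravel()), img.ravel(),
                                p0=(img.max(), 5.0))
    return sigma  # repeat at 20/50/100/150/200/250 mm, then interpolate over depth
```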
## III Experiment Results
### _Comparison of projectors_
We used an XCAT phantom to evaluate the accuracy and memory-efficiency of our Julia projector.
#### III-A1 Accuracy
We first compared primary (no scatter events included) projection images and profiles generated by our Julia projector with those from MC simulation and the Matlab projector. For results of MC, we ran two SIMIND simulations for 1 billion histories using \({}^{177}\mathrm{Lu}\) and \({}^{90}\mathrm{Y}\) as radionuclide source, respectively. Each simulation took about 10 hours using a 3.2 GHz 16-Core Intel Xeon W CPU on MacOS. The Matlab projector was originally implemented and compiled in C99 and then wrapped by a Matlab MEX file as a part of the Michigan Image Reconstruction Toolbox (MIRT) [36]. The physics modeling of the Matlab projector was the same as our Julia projector except that it only implemented 3-pass 1D linear interpolation for image rotation. Unlike the memory-efficient Julia version, the Matlab version pre-rotates the patient attenuation map for all projection views. This strategy saves time during EM iterations for a single patient,
but uses considerable memory and scales poorly for DL training approaches involving multiple patient datasets.
Fig. 2 compares the primary projections generated by different methods without adding Poisson noise. Visualizations of image slices and line profiles illustrate that our Julia projector (with rotation based on 3-pass 1D interpolation) is almost identical to the Matlab projector, while both give a reasonably good approximation to the MC. Using MC as reference, the NRMSE of the Julia1D/Matlab/Julia2D projectors were 7.9%/7.9%/7.6% for \({}^{177}\mathrm{Lu}\), respectively, while the NRMSE were 8.2%/8.2%/7.9% for \({}^{90}\mathrm{Y}\). We also compared the OSEM reconstructed images using the Julia (2D) and Matlab projectors, where we did not observe a notable difference, as shown in Fig. 3. The overall NRMSD between the Matlab and Julia (2D) projectors for the whole 3D OSEM reconstructed image ranged from 2.5% to 2.8% across 3 noise realizations.
#### III-A2 Speed and memory use
We compared the memory use and compute times between our Julia projector (with 2D bilinear interpolation) and the Matlab projector using different numbers of threads when projecting a \(128\times 128\times 80\) image. Fig. 4 shows that our Julia projector has comparable computing time for a single projection with 128 view angles using different numbers of CPU threads, while using only a very small fraction of the memory (\(\sim\)5%) and pre-allocation time (\(\sim\)1%) of the Matlab projector.
#### III-A3 Adjoint of projector
We generated a set of random numbers to verify that the backprojector is an exact adjoint of the forward projector. Specifically, we generated the system matrix of size \((8\times 6\times 7)\times(8\times 8\times 6)\) using random (nonnegative) attenuation maps and random (symmetric) PSF. Fig. 5 compares the transpose of the forward projector to the backprojector. As shown in Fig. 5 (d), the Frobenius norm error of our backprojector agrees well with the regular transpose within an accuracy of \(10^{-6}\) across 100 different realizations, as expected for 32-bit floating point calculations. A more comprehensive comparison is available in the code tests at [https://github.com/JuliaImageRecon/SPECTrecon.jl](https://github.com/JuliaImageRecon/SPECTrecon.jl).
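The dot-product test underlying this check can be sketched as follows; `A` and `At` stand in for any matched projector pair, and the tolerance behavior mirrors the 32-bit accuracy reported in Fig. 5 (d).

```python
import numpy as np

def adjoint_test(A, At, x_shape, y_shape, seed=0):
    """Relative mismatch between <A x, y> and <x, A' y> for random x, y."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(x_shape).astype(np.float32)
    y = rng.standard_normal(y_shape).astype(np.float32)
    lhs = np.vdot(A(x), y)    # <A x, y>
    rhs = np.vdot(x, At(y))   # <x, A' y>
    return abs(lhs - rhs) / max(abs(lhs), np.finfo(np.float32).tiny)

# e.g. for an explicit matrix M of size (8*6*7) x (8*8*6) as in the text:
# M = np.random.rand(336, 384).astype(np.float32)
# adjoint_test(lambda v: M @ v, lambda w: M.T @ w, (384,), (336,))
```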
Fig. 2: Primary (scatter-free) projections generated by MC simulation, Matlab projector and our Julia projector with 3-pass 1D linear interpolation and 2D bilinear interpolation for image rotation, using \({}^{177}\)Lu and \({}^{90}\)Y radionuclides. Subfigure (i)-(l) show line profiles across tumors as shown in subfigure (a) and (e), respectively. MC projections were scaled to have the same total activities as the Matlab projector per field-of-view.
### _Comparison of CNN-regularized EM using different training methods_
This section compares end-to-end training with other training methods that have been used previously for SPECT image reconstruction, namely the gradient truncation and sequential training. The training targets were simulated activity maps on \({}^{177}\mathrm{Lu}\) XCAT phantoms and \({}^{177}\mathrm{Lu}\) & \({}^{90}\mathrm{Y}\) virtual patient phantoms. We implemented an unrolled CNN-regularized EM algorithm with 3 outer iterations, each of which had one inner iteration. Only 3 outer iterations were used (compared to previous works such as [27]) because we used the 16-iteration 4-subset OSEM reconstructed image as a warm start for all reconstruction algorithms. We set the regularization parameter (defined in (5)) as \(\beta=1\). The regularizer was a 3-layer 3D CNN, where each layer had a \(3\times 3\times 3\) convolutional filter followed by ReLU activation (except the last layer), and hence had 657 trainable parameters in total. We added the input image \(\mathbf{x}_{k}\) to the output of CNN following the common residual learning strategy [37]. End-to-end training and gradient truncation could also work with a shared weights CNN approach, but were not included here for fair comparison purpose, since the sequential training only works with non-shared weights CNN. All the neural networks were initialized with the same parameters (drawn from a Gaussian distribution) and trained on an Nvidia RTX 3090 GPU for 600 epochs by minimizing mean square error (loss) using AdamW optimizer [38] with a constant learning rate 0.002.
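A hedged PyTorch sketch of the regularizer follows; the hidden width of 4 channels is an assumption, chosen because it reproduces the stated 657 trainable parameters exactly.

```python
import torch.nn as nn

def make_regularizer():
    # parameter count: 4*(1*27+1) + 4*(4*27+1) + 1*(4*27+1) = 112 + 436 + 109 = 657
    return nn.Sequential(
        nn.Conv3d(1, 4, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv3d(4, 4, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv3d(4, 1, kernel_size=3, padding=1),  # no activation on the last layer
    )
```

Following the residual-learning strategy described above, the network output is added to its input, i.e. \(\mathbf{u}=\mathbf{x}_{k}+\mathbf{g}_{\mathbf{\theta}}(\mathbf{x}_{k})\).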
Besides line profiles for qualitative comparison, we also used mean activity error (MAE) and normalized root mean square error (NRMSE) as quantitative evaluation metrics, where MAE is defined as
\[\mathrm{MAE}\triangleq\left|1-\frac{\frac{1}{n_{p}}\sum_{j\in\mathrm{VOI}} \hat{\mathbf{x}}[j]}{\frac{1}{n_{p}}\sum_{j\in\mathrm{VOI}}\mathbf{x}_{\text{true}}[j] }\right|\times 100\%, \tag{9}\]
where \(n_{p}\) denotes number of voxels in the voxels of interest (VOI). \(\hat{\mathbf{x}}\) and \(\mathbf{x}_{\text{true}}\) denote the reconstructed image and the true activity map, respectively. The NRMSE is defined as
\[\mathrm{NRMSE}\triangleq\frac{\sqrt{\frac{1}{n_{p}}\sum_{j\in\mathrm{VOI}}\left(\hat{\mathbf{x}}[j]-\mathbf{x}_{\text{true}}[j]\right)^{2}}}{\sqrt{\frac{1}{n_{p}}\sum_{j\in\mathrm{VOI}}\left(\mathbf{x}_{\text{true}}[j]\right)^{2}}}\times 100\%. \tag{10}\]
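The two metrics translate directly into code; a short sketch (NumPy, with `voi` assumed to be a boolean mask over the image and the RMS of the true activity used as the normalization in Eq. (10)) is given below.

```python
import numpy as np

def mae_percent(xhat, xtrue, voi):
    """Mean activity error, Eq. (9); voi selects the voxels of interest."""
    return abs(1.0 - xhat[voi].mean() / xtrue[voi].mean()) * 100.0

def nrmse_percent(xhat, xtrue, voi):
    """Normalized root mean square error, Eq. (10)."""
    rmse = np.sqrt(np.mean((xhat[voi] - xtrue[voi]) ** 2))
    return rmse / np.sqrt(np.mean(xtrue[voi] ** 2)) * 100.0
```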
Fig. 4: Time and memory comparison between Matlab projector and our Julia projector for projecting 128 view angles of a \(128\times 128\times 80\) image. “time pre” denotes the time cost for pre-allocating necessary arrays before projection; “time proj” denotes the time cost for a single projection; “mem” denotes the memory usage. All methods were tested on MacOS with a 3.8 GHz 8-Core Intel Core i7 CPU.
Fig. 5: Accuracy of the backprojector. In subfigure (d), \(\mathbf{A}^{\prime}\) denotes regular transpose of \(\mathbf{A}\); \(\mathbf{A}_{b}\) denotes the backprojector.
Fig. 3: Comparison of one slice of the \(128\times 128\times 80\) OSEM reconstruction (16 iterations, 4 subsets) using Matlab and Julia (2D interpolation) projectors.
All activity images were scaled by a factor that normalized the whole activity to 1 MBq per field of view (FOV) before comparison. All quantitative results (Table I, Table II, Table III) were averaged across 3 different noise realizations.
#### III-B1 Loss function, computing time and memory use
We compared the training and validation loss using sequential training, gradient truncation and end-to-end training. We ran 1800 epochs for each method on \({}^{177}\mathrm{Lu}\) XCAT phantoms with the AdamW optimizer [38]. Fig. 6 shows that the end-to-end training achieved the lowest validation loss while it had comparable training loss with the gradient truncation (which became lower at around 1400 epochs). For visualization, we concatenated the first 600 epochs of each outer iteration for the sequential training method, as shown by the spikes in the sequential training curve. We ran 600 epochs for each algorithm for subsequent experiments because the validation losses had essentially converged by around 600 epochs.
We also compared the computing time of each training method. We found that for MLEM with 3 outer iterations and 1 inner iteration, where each outer iteration had a 3-layer convolutional neural network, sequential training took 48.6 seconds to complete a training epoch, while gradient truncation took 327.1 seconds and end-to-end training took 336.3 seconds. Under the same experiment settings, we found sequential training took less than 1GB of memory to backpropagate through one outer iteration, compared to approximately 6GB used in gradient truncation and end-to-end training, which backpropagated through three outer iterations.
#### III-B2 Results on \({}^{177}\mathrm{Lu}\) XCAT phantoms
We evaluated the CNN-regularized EM algorithm with three training methods on 4 \({}^{177}\mathrm{Lu}\) XCAT phantoms we simulated. We generated the primary projections by calling forward operation of our Julia projector and then added uniform scatters with 10% of the primary counts before adding Poisson noise. Of the 4 phantoms, we used 2 for training, 1 for validation and 1 for testing.
Fig. 7 shows that the end-to-end training yielded incrementally better reconstruction of the tumor in the liver center over OSEM, sequential training and gradient truncation. Fig. 7 (g) also illustrates this improvement by the line profile across the tumor. For the tumor at the top-right corner of the liver, all methods had comparable performance; this can be attributed to the small tumor size (5mL), for which partial volume (PV) effects associated with SPECT resolution are higher, and hence its recovery is even more challenging.
Table I demonstrates that the CNN-regularized EM algorithm with all training methods (sequential training, gradient truncation and end-to-end training) consistently had lower reconstruction error than the OSEM method. Among all training methods, the proposed end-to-end training had lower MAE over nearly all lesions and organs than other training methods. The relative reduction in MAE by the end-to-end training was up to 32% (for lesion 3) compared to sequential training. End-to-end training also had lower NRMSE for most lesions and organs, and was otherwise comparable to other training methods. The relative improvement compared to sequential training was up to 29% (for lesion 3).
#### III-B3 Results on \({}^{177}\mathrm{Lu}\) VP phantoms
Next we present test results on 8 \({}^{177}\mathrm{Lu}\) virtual patient phantoms. Out of 8 \({}^{177}\mathrm{Lu}\) phantoms, we used 4 for training, 1 for validation and 3 for testing.
Fig. 8 shows that the improvement of all learning-based methods was limited compared to OSEM, which was also evident from the line profiles. For example, in Fig. 8 (g), the line profile was drawn across a small tumor. We found that OSEM yielded a fairly accurate estimate already, and
Fig. 6: Training and validation loss of three backpropagation methods.
Fig. 7: Qualitative comparison of different training methods and OSEM tested on \({}^{177}\mathrm{Lu}\) XCAT phantoms. Subfigure (a)-(c): true activity map, attenuation map and OSEM reconstruction (16 iterations and 4 subsets); (d)-(f): regularized EM using sequential training, gradient truncation, end-to-end training, respectively; (g) and (h): line profiles in (a).
we did not observe as much improvement as we had seen on \({}^{177}\mathrm{Lu}\) XCAT phantoms for end-to-end training or even learning-based methods in general. Table II also demonstrates this observation. The OSEM method had substantially lower MAE and NRMSE compared to the errors shown for \({}^{177}\mathrm{Lu}\) XCAT data (cf. Table I). Moreover, the end-to-end training method had comparable accuracy with gradient truncation. For example, gradient truncation was the best on lesion, liver and lung in terms of MAE; end-to-end training had the lowest NRMSE on lesion, liver, lung, kidney and spleen. Perhaps this is due to the loss function used for training: because MSE loss was used in our experiments, end-to-end training might be expected to yield lower NRMSE. A more comprehensive study would be needed to verify this conjecture.
#### III-B4 Results on \({}^{90}\mathrm{Y}\) VP phantoms
We also tested with 8 \({}^{90}\mathrm{Y}\) virtual patient phantoms. Of the 8 phantoms, we used 4 for training, 1 for validation and 3 for testing.
Fig. 9 compares the reconstruction quality between OSEM and CNN-regularized EM algorithm using sequential training, gradient truncation and end-to-end training. Visually, the end-to-end training reconstruction yields the closest estimate to the true activity. This is also evident through the line profiles (subfigure (m) and (n)) across the tumor and the liver.
Table III reports the mean activity error (MAE) and NRMSE for lesions and organs across all testing phantoms. Similar to the qualitative assessment (Fig. 9), the end-to-end training also produced lower errors consistently across all testing lesions and organs. For instance, compared to sequential training/gradient truncation, the end-to-end training relatively reduced MAE on average by 8.7%/7.2%, 18.5%/11.0% and 24.7%/16.1% for lesion, healthy liver and lung, respectively. The NRMSE was also relatively reduced by 6.1%/3.8%, 7.2%/4.1% and 6.1%/3.0% for lesion, healthy liver and lung, respectively. All learning-based methods consistently had lower errors than the OSEM method.
### _Results at intermediate iterations_
One potential problem associated with end-to-end training (and gradient truncation) is that the results at intermediate iterations could be unfavorable, because they are not directly trained by the targets [39]. Here, we examined the images at intermediate iterations and did not observe such problems as illustrated in Fig. 10, where images at each iteration gave a
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline \multicolumn{5}{|c|}{MAE(\%)} \\ \hline Lesion/Organ & OSEM & Sequential & Truncation & End2end \\ \hline Lesion (6-152mL) & 11.1 \(\pm\) 2.5 & 9.4 \(\pm\) 3.2 & **6.7**\(\pm\) 2.4 & 7.3 \(\pm\) 2.8 \\ \hline Liver & 4.8 \(\pm\) 0.1 & 4.5 \(\pm\) 0.2 & **3.4**\(\pm\) 0.6 & 4.0 \(\pm\) 0.2 \\ \hline Healthy liver & 4.1 \(\pm\) 0.1 & 4.1 \(\pm\) 0.1 & **3.5**\(\pm\) 0.6 & 4.1 \(\pm\) 0.2 \\ \hline Lung & 3.4 \(\pm\) 0.1 & 3.0 \(\pm\) 0.2 & **2.4**\(\pm\) 0.7 & 3.0 \(\pm\) 0.5 \\ \hline Kidney & 5.2 \(\pm\) 0.3 & 4.3 \(\pm\) 0.1 & 2.6 \(\pm\) 0.1 & **2.3**\(\pm\) 0.2 \\ \hline Spleen & 0.8 \(\pm\) 0.2 & **0.6**\(\pm\) 0.1 & 1.3 \(\pm\) 0.6 & 1.2 \(\pm\) 0.4 \\ \hline \multicolumn{5}{|c|}{NRMSE(\%)} \\ \hline Lesion/Organ & OSEM & Sequential & Truncation & End2end \\ \hline Lesion (6-152mL) & 16.1 \(\pm\) 2.2 & 14.9 \(\pm\) 2.4 & 14.3 \(\pm\) 1.7 & **14.2**\(\pm\) 2.1 \\ \hline Liver & 15.9 \(\pm\) 0.2 & **15.3**\(\pm\) 0.1 & 15.5 \(\pm\) 0.6 & **15.3**\(\pm\) 0.1 \\ \hline Healthy liver & 16.8 \(\pm\) 0.1 & **16.6**\(\pm\) 0.1 & 17.3 \(\pm\) 0.5 & 17.1 \(\pm\) 0.3 \\ \hline Lung & 22.3 \(\pm\) 0.3 & 22.1 \(\pm\) 0.4 & 22.0 \(\pm\) 0.4 & **21.9**\(\pm\) 0.5 \\ \hline Kidney & 17.4 \(\pm\) 0.1 & 16.8 \(\pm\) 0.1 & 16.4 \(\pm\) 0.3 & **16.3**\(\pm\) 0.5 \\ \hline Spleen & 13.5 \(\pm\) 0.2 & 12.4 \(\pm\) 0.3 & **12.3**\(\pm\) 0.7 & **12.3**\(\pm\) 0.5 \\ \hline \end{tabular}
\end{table} TABLE II: The average(\(\pm\)standard deviation) MAE(%) and NRMSE(%) across 3 noise realizations of \({}^{177}\mathrm{Lu}\) VP phantoms.
Fig. 8: Qualitative comparison of different training methods and OSEM tested on \({}^{177}\mathrm{Lu}\) VP phantoms. Subfigure (g) and (h) correspond to line profiles marked in (a).
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline \multicolumn{5}{|c|}{MAE(\%)} \\ \hline Lesion/Organ & OSEM & Sequential & Truncation & End2end \\ \hline Lesion (1 67mL) & 12.5 \(\pm\) 0.6 & 6.7 \(\pm\) 1.8 & 2.8 \(\pm\) 0.9 & **2.1**\(\pm\) 1.1 \\ \hline Lesion 2 (10mL) & 20.2 \(\pm\) 0.9 & 11.5 \(\pm\) 4.1 & 10.8 \(\pm\) 0.9 & **9.7**\(\pm\) 1.1 \\ \hline Lesion 3 (9mL) & 25.6 \(\pm\) 0.6 & 18.8 \(\pm\) 0.4 & 15.2 \(\pm\) 0.9 & **12.8**\(\pm\) 1.0 \\ \hline Lesion 4 (5mL) & 43.0 \(\pm\) 0.6 & 40.0 \(\pm\) 1.2 & 38.8 \(\pm\) 0.8 & **38.7**\(\pm\) 0.7 \\ \hline Liver & 6.4 \(\pm\) 0.7 & 6.2 \(\pm\) 1.5 & 4.6 \(\pm\) 1.1 & **3.7**\(\pm\) 1.2 \\ \hline Lung & 24.4 \(\pm\) 0.7 & 2.2 \(\pm\) 0.4 & **0.7**\(\pm\) 0.6 & 0.9 \(\pm\) 0.5 \\ \hline Spleen & 14.2 \(\pm\) 0.9 & 12.6 \(\pm\) 2.4 & **14.8**\(\pm\) 0.7 & 9.3 \(\pm\) 1.5 \\ \hline Kidney & 15.9 \(\pm\) 1.0 & 15.1 \(\pm\) 1.2 & 14.4 \(\pm\) 1.4 & **13.6**\(\pm\) 1.6 \\ \hline \multicolumn{5}{|c|}{NRMSE(\%)} \\ \hline Lesion/Organ & OSEM & OSEM & Sequential & Truncation & End2end \\ \hline Lesion 1 (67mL) & 27.3 \(\pm\) 0.3 & 21.7 \(\pm\) 1.3 & 18.9 \(\pm\) 0.6 & **18.3**\(\pm\) 0.6 \\ \hline Lesion 2 (10mL) & 26.8 \(\pm\) 0.6 & 19.2 \(\pm\) 2.2 & 16.4 \(\pm\) 0.4 & **16.3**\(\pm\) 0.8 \\ \hline Lesion 3 (9mL) & 28.4 \(\pm\) 0.4 & 22.8 \(\pm\) 0.8 & 18.3 \(\pm\) 0.7 & **16.3**\(\pm\) 0.7 \\ \hline Lesion 4 (5mL) & 43.5 \(\pm\) 0.5 & 41.1 \(\pm\) 1.3 & **40.0**\(\pm\) 0.7 & 40.2 \(\pm\) 0.6 \\ \hline Liver & 28.5 \(\pm\) 0.1 & 25.0 \(\pm\) 0.8 & **24.3**\(\pm\) 0.3 & 24.5 \(\pm\) 0.3 \\ \hline Lung & 32.1 \(\pm\) 0.1 & 31.2 \(\pm\) 1.1 & **29.5**\(\pm\) 0.3 & 30.4 \(\pm\) 0.4 \\ \hline Spleen & 25.7 \(\pm\) 0.3 & 22.8 \(\pm\) 1.1 & 20.4 \(\pm\) 0.4 & **19.9**\(\pm\) 0.6 \\ \hline Kidney & 40.8 \(\pm\) 0.3 & 39.7 \(\pm\) 0.4 & 39.7 \(\pm\) 0.2 & **39.2**\(\pm\) 0.3 \\ \hline \end{tabular}
\end{table} TABLE I: The average(\(\pm\)standard deviation) MAE(%) and NRMSE(%) across 3 noise realizations of \({}^{177}\mathrm{Lu}\) XCAT phantoms.
fairly accurate estimate to the true activity. Perhaps under the shallow-network setting (e.g., 3 layers used here, with only 3 outer iterations), the network for each iteration was less likely to overfit the training data. Another reason could be due to the non-shared weights setting so that the network could learn suitable weights for each iteration.
## IV Discussion
Training end-to-end CNN-based iterative algorithms for SPECT image reconstruction requires memory-efficient forward-backward projectors so that backpropagation can be less computationally expensive. This work implemented a new SPECT projector using Julia, an open-source, high-performance, cross-platform language. With comparisons between Monte Carlo (MC) and a Matlab-based projector, we verified the accuracy, speed and memory-efficiency of our Julia projector. These favorable properties support efficient backpropagation when training end-to-end unrolled iterative reconstruction algorithms. Most modern DL algorithms process multiple data batches in parallel, so memory efficiency is of great importance for efficiently training and testing neural networks. To that extent, our Julia projector is much more suitable than the Matlab-based projector.
Fig. 9: Qualitative comparison of different training methods and OSEM tested on \({}^{90}\)Y VP phantoms. Subfigure (a)-(f) and (g)-(l) show two slices from two testing phantoms. Subfigure (m) and (n) correspond to line profiles in (a) and (g), respectively.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline \multicolumn{5}{|c|}{MAE(\%)} \\ \hline Lesion/Organ & OSEM & Sequential & Truncation & End2end \\ \hline Lesion (3-356mL) & 32.5 \(\pm\) 1.3 & 25.3 \(\pm\) 1.3 & 24.9 \(\pm\) 1.0 & **23.1** \(\pm\) 1.8 \\ \hline Liver & 25.0 \(\pm\) 0.1 & 18.7 \(\pm\) 0.1 & 17.8 \(\pm\) 1.3 & **15.6** \(\pm\) 3.6 \\ \hline Healthy liver & 25.1 \(\pm\) 0.2 & 23.8 \(\pm\) 0.5 & 21.8 \(\pm\) 1.2 & **19.4** \(\pm\) 3.1 \\ \hline Lung & 88.4 \(\pm\) 2.1 & 64.9 \(\pm\) 1.6 & 58.3 \(\pm\) 6.6 & **48.9** \(\pm\) 8.4 \\ \hline \multicolumn{5}{|c|}{NRMSE(\%)} \\ \hline Lesion/Organ & OSEM & Sequential & Truncation & End2end \\ \hline Lesion (3-356mL) & 35.3 \(\pm\) 1.5 & 29.6 \(\pm\) 1.4 & 28.9 \(\pm\) 1.1 & **27.8** \(\pm\) 1.2 \\ \hline Liver & 29.9 \(\pm\) 0.4 & 22.7 \(\pm\) 0.1 & 22.1 \(\pm\) 0.9 & **21.2** \(\pm\) 1.5 \\ \hline Healthy liver & 31.6 \(\pm\) 0.4 & 27.9 \(\pm\) 0.3 & 27.0 \(\pm\) 0.9 & **25.9** \(\pm\) 2.0 \\ \hline Lung & 62.4 \(\pm\) 1.3 & 59.2 \(\pm\) 1.1 & 57.3 \(\pm\) 3.0 & **55.6** \(\pm\) 4.6 \\ \hline \end{tabular}
\end{table} TABLE III: The average(\(\pm\)standard deviation) MAE(%) and NRMSE(%) across 3 noise realizations of \({}^{90}\)Y VP phantoms.
Fig. 10: Visualization of intermediate iteration results of different training methods. Subfigure (d)-(f): sequential training; (g)-(i): gradient truncation; (j)-(l): end-to-end training.
We used the CNN-regularized EM algorithm as an example to test end-to-end training and other training methods on different datasets including \({}^{177}\mathrm{Lu}\) XCAT phantoms, and \({}^{177}\mathrm{Lu}\) and \({}^{90}\mathrm{Y}\) virtual patient phantoms. Simulation results demonstrated that end-to-end training improved reconstruction quality on these datasets. For example, end-to-end training improved the MAE of lesion/liver in \({}^{90}\mathrm{Y}\) phantoms by 8.7%/16.6% and 7.2%/12.4% compared to sequential training and gradient truncation, respectively. This improvement can be attributed to the correct gradient being used in backpropagation. Although end-to-end training yielded the lowest reconstruction error on both \({}^{177}\mathrm{Lu}\) XCAT phantoms and \({}^{90}\mathrm{Y}\) VP phantoms, the reconstruction errors on \({}^{177}\mathrm{Lu}\) VP phantoms were comparable with gradient truncation. This could be due to the choice of loss functions and CNN architectures in the EM algorithm, which we will explore in the future. We also noticed that the recovery of the nonuniform activity in VP phantoms was generally higher than that for the XCAT phantoms (MAE reported in Table I and Table II) because the assigned "true" activities at the boundaries of organs did not drop sharply and instead were blurred out. Therefore, the OSEM algorithm was fairly competitive, as reported in Table II; in the \({}^{90}\mathrm{Y}\) VP results, OSEM performed worse than the learning-based methods, which could be attributed to the high downscatter associated with \({}^{90}\mathrm{Y}\) SPECT due to the continuous bremsstrahlung energy spectrum. We found that all learning methods did not work very well for small tumors (e.g., 5mL), potentially due to stronger PV effects. Reducing PV effects in SPECT images has been studied extensively [40, 41]. Recently, Xie _et al._[42] trained a deep neural network to learn the mapping between PV-corrected and non-corrected images. Incorporating their network into our reconstruction model using transfer learning is an interesting future direction.
Although promising results were shown in the previous sections, this work has several limitations. First, we did not test numerous hyperparameters and CNN architectures, nor a wide variety of phantoms and patients for different radionuclide therapies. Secondly, our experiments used OSEM images as a warm start for the CNN-regularized EM algorithm, where the OSEM itself was initialized with a uniform image. We did not investigate using other images, such as uniform images, as the start of the EM algorithm. Using a uniform image to initialize the network would likely require far more network iterations, which would be very expensive computationally and therefore impractical. Additionally, this paper used a fixed regularization parameter (\(\beta\) in (5)) rather than declaring \(\beta\) a trainable parameter. We compared different methods for backpropagation, which requires using the same cost function (4) for a fair comparison. If one sets \(\beta\) as a trainable parameter, then different methods could learn different \(\beta\) values, leading to different cost functions. However, the investigation of trainable \(\beta\) values is an interesting future direction. Another limitation is that we did not investigate more advanced parallel computing methods, such as distributed computing using multiple computers, to further accelerate our Julia implementation of the SPECT forward-backward projector. Such acceleration is feasible using existing Julia packages if needed. The compute times reported in Fig. 4 show that the method needs a few seconds per 128 projection views using 8 threads, which is already feasible for scientific investigation.
We also found there exists a trade-off between computational cost and reconstruction accuracy for different training methods. End-to-end training yielded reconstruction results with the lowest MAE and NRMSE because the correct gradient was used during backpropagation. Sequential training yielded worse results, but it was significantly faster and more memory efficient than the end-to-end training method. It is notably faster because it splits the whole training process and trains each of the neural networks separately, and its backpropagation does not involve terms associated with the MLEM algorithm, so sequential training is effectively equivalent to training each neural network alone without considering MLEM. Sequential training also used much less memory because the training was performed iteration by iteration, one network at a time, and hence the memory requirement did not depend on the number of unrolled iterations in the MLEM algorithm.
## V Conclusion
This paper presents a Julia implementation of a backpropagatable SPECT forward-backward projector that is accurate, fast and memory-efficient compared to Monte Carlo (MC) and a previously developed analytical Matlab-based projector. Simulation results based on \({}^{177}\mathrm{Lu}\) XCAT phantoms, and \({}^{90}\mathrm{Y}\) and \({}^{177}\mathrm{Lu}\) virtual patient (VP) phantoms demonstrate that: 1) End-to-end training yielded reconstructed images with the lowest MAE and NRMSE when tested on XCAT phantoms and \({}^{90}\mathrm{Y}\) VP phantoms, compared to other training methods (such as sequential training and gradient truncation) and OSEM. 2) For \({}^{177}\mathrm{Lu}\) VP phantoms, the end-to-end training method yielded better results than sequential training and OSEM, but was comparable with gradient truncation. We also found there exists a trade-off between computational cost and reconstruction accuracy among different training methods (e.g., end-to-end training and sequential training). These results indicate that end-to-end training, which is feasible with our developed Julia projector, is worth investigating for SPECT reconstruction.
## Acknowledgement
All authors declare that they have no known conflicts of interest in terms of competing financial interests or personal relationships that could have an influence or are relevant to the work reported in this paper.
|
2307.01968 | Multi-scale Graph Neural Network with Signed-attention for Social Bot
Detection: A Frequency Perspective | The presence of a large number of bots on social media has adverse effects.
The graph neural network (GNN) can effectively leverage the social
relationships between users and achieve excellent results in detecting bots.
Recently, more and more GNN-based methods have been proposed for bot detection.
However, the existing GNN-based bot detection methods only focus on
low-frequency information and seldom consider high-frequency information, which
limits the representation ability of the model. To address this issue, this
paper proposes a Multi-scale with Signed-attention Graph Filter for social bot
detection called MSGS. MSGS could effectively utilize both high and
low-frequency information in the social graph. Specifically, MSGS utilizes a
multi-scale structure to produce representation vectors at different scales.
These representations are then combined using a signed-attention mechanism.
Finally, multi-scale representations via MLP after polymerization to produce
the final result. We analyze the frequency response and demonstrate that MSGS
is a more flexible and expressive adaptive graph filter. MSGS can effectively
utilize high-frequency information to alleviate the over-smoothing problem of
deep GNNs. Experimental results on real-world datasets demonstrate that our
method achieves better performance compared with several state-of-the-art
social bot detection methods. | Shuhao Shi, Kai Qiao, Zhengyan Wang, Jie Yang, Baojie Song, Jian Chen, Bin Yan | 2023-07-05T00:40:19Z | http://arxiv.org/abs/2307.01968v1 | # Multi-scale Graph Neural Network with Signed-attention for Social Bot Detection: A Frequency Perspective
###### Abstract
The presence of a large number of bots on social media has adverse effects. The graph neural network (GNN) can effectively leverage the social relationships between users and achieve excellent results in detecting bots. Recently, more and more GNN-based methods have been proposed for bot detection. However, the existing GNN-based bot detection methods only focus on low-frequency information and seldom consider high-frequency information, which limits the representation ability of the model. To address this issue, this paper proposes a Multi-scale with Signed-attention Graph Filter for social bot detection called MSGS. MSGS could effectively utilize both high and low-frequency information in the social graph. Specifically, MSGS utilizes a multi-scale structure to produce representation vectors at different scales. These representations are then combined using a signed-attention mechanism. Finally, multi-scale representations via MLP after polymerization to produce the final result. We analyze the frequency response and demonstrate that MSGS is a more flexible and expressive adaptive graph filter. MSGS can effectively utilize high-frequency information to alleviate the over-smoothing problem of deep GNNs. Experimental results on real-world datasets demonstrate that our method achieves better performance compared with several state-of-the-art social bot detection methods.
Graph Neural Network, Graph filter, Multi-scale structure, Signed-attention mechanism, Social bot detection.
## I Introduction
Social media have become an indispensable part of people's daily lives. However, the existence of automated accounts, also known as social bots, has brought many problems to social media. These bots have been employed to disseminate false information, manipulate elections, and deceive users, resulting in negative societal consequences [1, 2, 3]. Effectively detecting bots on social media plays an essential role in protecting user interests and ensuring stable platform operation. Therefore, the accurate detection of bots on social media platforms is becoming increasingly crucial.
Graph neural networks (GNNs) have emerged as powerful tools for processing non-Euclidean data, where entities are represented as nodes and relationships as edges in a graph. Leveraging the inherent graph structure, GNNs enable convolutions on the graph data, facilitating effective utilization of the relationships between entities. GNNs have demonstrated impressive performance in the field of social account detection. Building upon GNN-based approaches [4, 5, 6], researchers have formulated the social bot detection task as a node classification problem. Alhosseini et al. [7] were pioneers in utilizing graph convolutional neural networks (GCNs) [8] to detect bots, effectively leveraging the graph structure and relationships among Twitter accounts. Subsequent investigations have focused on exploring multiple relationships within social graphs. For instance, Feng et al. [4] introduced the Relational Graph Convolutional Network (RGCN) [9] for Twitter social bot detection, enabling the integration of multiple social relationships between accounts. Additionally, Shi et al. [5] proposed a graph learning data augmentation technique to address the challenge of class imbalance in social bot detection.
Existing GNNs mainly apply fixed filters for the convolution operation; these models assume that nodes tend to share common features with their neighbors (low-frequency information) [10, 11, 12]. However, this assumption may be weakened in networks containing anomalies, since anomalies tend to have features that differ from their neighbors (high-frequency signals) [13, 14]. As shown in Fig. 1, using low-frequency information alone is insufficient in social bot detection. To address the shortcoming that GNNs cannot effectively utilize high-frequency information in the user network, we design a more flexible GNN structure that adaptively learns both low-frequency and high-frequency information.
Our proposed framework pioneers the exploration of high-frequency signals in social bot detection, harnessing the power of GNNs. We introduce a novel GNN framework called MSGS, which adeptly captures the varying significance of different frequency components for node representation learning. At the core of this framework lies a simple yet elegant trainable filter, constructed through a multi-scale architecture
Fig. 1: Left: An illustration of a graph in social bot detection. Accounts having different features from their neighbors indicate high-frequency information, while accounts sharing common features with their neighbors indicate low-frequency information. Right: The performance of GCN and our proposed MSGS on the MGTAB dataset.
and a signed-attention mechanism operating across multiple layers. By employing multi-scale features, we train a graph filter that intelligently exploits low-frequency and high-frequency information. Our extensive experimental results demonstrate the remarkable performance enhancement of GNNs on various benchmark datasets for social bot detection achieved by our proposed framework. The main contributions of our work are as follows:
* We are the first to analyze the high-frequency information in social bot detection and highlight the shortcomings of traditional GNNs in effectively utilizing it.
* Our proposed MSGS combines multi-scale architecture and signed-attention mechanism, enabling adaptive learning of the frequency response of the graph filter, thereby effectively leveraging both low-frequency and high-frequency information in social bot detection.
* Extensive experiments on real-world social bot detection datasets establish that MSGS outperforms other leading methods, including multi-scale GNNs and spectral GNNs.
## II Preliminaries
In this section, we define some notations used throughout this paper. Let \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) denote the user network graph, where \(\mathcal{V}=\{v_{1},\cdots,v_{N}\}\) is the set of vertices with \(|\mathcal{V}|=N\) and \(\mathcal{E}\) is the set of edges. The adjacency matrix is defined as \(\mathbf{A}\in\{0,1\}^{N\times N}\), and \(\mathbf{A}_{i,j}=1\) if and only if there is an edge between \(v_{i}\) and \(v_{j}\). \(\mathbf{D}\in\mathbb{R}^{N\times N}\) is the degree matrix of \(\mathbf{A}\), with \(\mathbf{D}=\operatorname{diag}\left\{d_{1},d_{2},\ldots,d_{N}\right\}\) and \(d_{i}=\sum_{j}\mathbf{A}_{ij}\). Let \(\mathcal{N}_{i}\) represent the neighborhood of node \(v_{i}\). The feature matrix is represented as \(\mathbf{X}\in\mathbb{R}^{N\times M}\), where each node \(v\) is associated with an \(M\)-dimensional feature vector \(\mathbf{X}_{v}\).
### _Graph Fourier Transform_
**Theorem 1** (Convolution theorem) The Fourier transform of the convolution of functions is the product of the Fourier transforms of functions. For functions \(f\) and \(g\), \(\mathcal{F}\{\cdot\}\) and \(\mathcal{F}^{-1}\{\cdot\}\) represent Fourier transform and Inverse Fourier transform respectively, then \(f*g=\mathcal{F}^{-1}\{\mathcal{F}\{f\}\cdot\mathcal{F}\{g\}\}\). The proof of **Theorem 1** is provided in Appendix.
The graph spectral analysis relies on the spectral decomposition of graph Laplacians. The ordinary form of the Laplacian matrix is defined as \(\mathbf{L}=\mathbf{D}-\mathbf{A}\), and the normalized form is defined as \(\mathbf{L}_{sym}=\mathbf{I}-\mathbf{D}^{-1/2}\mathbf{A}\mathbf{D}^{-1/2}\). The random-walk normalized form is defined as \(\mathbf{L}_{rw}=\mathbf{D}^{-1}\mathbf{L}=\mathbf{I}-\mathbf{D}^{-1}\mathbf{A}\). In this paper, we only analyze the normalized graph Laplacian matrix \(\mathbf{L}_{sym}\); the analysis results can be easily extended to other Laplacian matrices. The purpose of defining the Laplacian operator is to find the basis for Fourier transforms. The Fourier basis on the graph is made up of the eigenvectors of \(\mathbf{L}\), \(\mathbf{U}=[\mathbf{u}_{1}\ldots\mathbf{u}_{n}]\). The eigenvalue decomposition of the Laplace matrix can be expressed as \(\mathbf{L}=\mathbf{U}\mathbf{\Lambda}\mathbf{U}^{T}\), where \(\mathbf{\Lambda}=\operatorname{diag}\left(\left[\lambda_{1},\lambda_{2},\cdots,\lambda_{n}\right]\right)\) is a diagonal matrix of \(\mathbf{L}\)'s eigenvalues, with \(\lambda_{l}\in[0,2]\) for \(1\leq l\leq N\). Assuming \(\lambda_{1}\leq\lambda_{2}\leq\ldots\leq\lambda_{N}\), \(\lambda_{1}\) and \(\lambda_{N}\) correspond to the lowest and the highest frequency of the graph.
### _Graph Spectral Filtering_
Signal filtering is a crucial operation in signal processing. It extracts or enhances the required frequency components in the input signal and filters or attenuates unwanted frequency components. According to **Theorem 1**, the signal is first transformed into the frequency domain, multiplied element-by-element in the frequency domain, and finally transformed back into the time domain. Filtering a graph signal \(\mathbf{x}\) with a filter \(f\) over the eigenvalues can be defined as follows:
\[\mathbf{H}=f*\mathbf{x}=\mathbf{U}\left(\left(\mathbf{U}^{T}f\right)\odot \left(\mathbf{U}^{T}\mathbf{x}\right)\right), \tag{1}\]
where \(\hat{\mathbf{x}}=\mathbf{U}^{\top}\mathbf{x}\) denotes the graph Fourier transform, and \(\mathbf{x}=\mathbf{U}\hat{\mathbf{x}}\) denotes the inverse Fourier transform. \(\odot\) denotes element-wise multiplication. \(\mathbf{U}^{T}f=\left[g\left(\lambda_{1}\right),g\left(\lambda_{2}\right),\ldots,g\left(\lambda_{n}\right)\right]^{T}\) is called the convolution filter in the frequency domain. Define \(g_{\theta}(\Lambda)=\operatorname{diag}\left(\left[g\left(\lambda_{1}\right),g\left(\lambda_{2}\right),\ldots,g\left(\lambda_{n}\right)\right]\right)\), where \(\theta\) is the learnable convolution kernel parameter; then:
\[\mathbf{H}=f*\mathbf{x}=\mathbf{U}g_{\theta}\mathbf{U}^{T}\mathbf{x}. \tag{2}\]
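To make the spectral filtering in Equs. (1)-(2) concrete, the following minimal NumPy sketch builds \(\mathbf{L}_{sym}\) for a toy path graph, computes its graph Fourier basis, and applies a frequency response to a signal. The toy graph and the low-pass response \(g(\lambda)=1-\lambda/2\) are illustrative assumptions, not choices made in this paper.

```python
import numpy as np

# Toy undirected path graph on 4 nodes (illustrative assumption).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
d = A.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
L_sym = np.eye(4) - D_inv_sqrt @ A @ D_inv_sqrt  # normalized Laplacian

# Graph Fourier basis: eigenvectors of L_sym; eigenvalues lie in [0, 2].
lam, U = np.linalg.eigh(L_sym)

x = np.array([1.0, -1.0, 1.0, -1.0])  # a high-frequency graph signal
x_hat = U.T @ x                        # graph Fourier transform, Equ. (1)
g = 1.0 - lam / 2.0                    # illustrative low-pass response g(lambda)
x_filt = U @ (g * x_hat)               # U g(Lambda) U^T x, as in Equ. (2)

print(np.round(lam, 3))     # frequencies, ascending
print(np.round(x_filt, 3))  # the alternating pattern is attenuated
```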
The computational complexity of graph convolution is high because of the high cost of eigenvalue decomposition of the graph Laplacian. To overcome the disadvantage of having a large convolution kernel, ChebNet approximates the parameterized frequency response function with a \(K\)-order polynomial \(g_{\theta}=\sum_{i=0}^{K}\theta_{i}\mathbf{\Lambda}^{i}\), then:
\[\mathbf{x}*\mathbf{g}\approx\mathbf{U}\left(\sum_{i=0}^{K}\theta_{i}\mathbf{ \Lambda}^{i}\right)\mathbf{U}^{T}\mathbf{x}=\sum_{i=0}^{K}\theta_{i}\mathbf{L }_{n}^{i}\mathbf{x}. \tag{3}\]
Kipf et al. [8] proposed a simpler graph convolution which approximates first-order Chebyshev graph convolution. Specifically, let \(\theta_{0}=2\theta\), \(\theta_{1}=-\theta\), \(\theta_{k>1}=0\):
\[\mathbf{x}*\mathbf{g}\approx\theta\left(2\mathbf{I}-\mathbf{L}_{n}\right) \mathbf{x}=\theta(\mathbf{I}+\mathbf{D}^{-1/2}\mathbf{A}\mathbf{D}^{-1/2}) \mathbf{x}. \tag{4}\]
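A minimal sketch of the propagation step implied by Equ. (4); the toy adjacency, feature, and weight matrices below are illustrative assumptions.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN step, (I + D^{-1/2} A D^{-1/2}) H W, following Equ. (4)."""
    n = A.shape[0]
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    S = np.eye(n) + D_inv_sqrt @ A @ D_inv_sqrt  # fixed low-pass filter
    return S @ H @ W

A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
H = np.random.randn(3, 4)  # node features (illustrative)
W = np.random.randn(4, 2)  # weights, random stand-ins for learned parameters
print(gcn_layer(A, H, W).shape)  # (3, 2)
```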
**Theorem 2** (Over-smoothing) For any fixed low-pass graph filter defined over \(\mathbf{L}_{sym}\), given a graph signal \(\mathbf{x}\), suppose we convolve \(\mathbf{x}\) with the graph filter. If the number of layers in the GNN is large enough, the over-smoothing issue becomes inevitable. The proof of **Theorem 2** is provided in the Appendix.
## III The Proposed Method
The use of fixed low-pass filters in GCN and other GNNs largely limits the expressive power of GNNs, thereby affecting their performance. The novelty of our method lies in its multi-scale structure and signed attention. Through signed attention and the coefficients \(\mathbf{\Gamma}^{(0)},\mathbf{\Gamma}^{(1)},\ldots,\mathbf{\Gamma}^{(K)}\) of the different scale channels, we learn the filtering function. MSGS works well universally: by learning these frequency coefficients, it reshapes the frequency spectrum of the graph filter and thereby effectively utilizes both low-frequency and high-frequency information.
### _Muti-scale Architecture_
**Proposition 1.** Most existing GNN models, such as GCN, employ a fixed low-pass filter. As a result, after passing through a GNN, the node representations become similar. Assume that \((v_{i},v_{j})\) is a pair of connected nodes, \(\mathbf{x}_{i}\) and \(\mathbf{x}_{j}\) are the node features. \(\mathcal{D}_{i,j}\) represents the distance between nodes \(v_{i}\) and \(v_{j}\). The original distance of representations is \(\mathcal{D}_{i,j}=\left\|\mathbf{x}_{i}-\mathbf{x}_{j}\right\|_{2}\). The filter used in GCN is \(\mathbf{I}+\mathbf{D}^{-1/2}\mathbf{A}\mathbf{D}^{-1/2}\). Subject to \(d_{i}\approx d_{j}\approx d\), the distance of representations learned after neighborhood aggregation is:
\[\tilde{\mathcal{D}}_{i,j}\approx\left\|\left(\mathbf{x}_{i}+\frac{\mathbf{x}_{j}}{d_{j}}\right)-\left(\mathbf{x}_{j}+\frac{\mathbf{x}_{i}}{d_{i}}\right)\right\|_{2}\approx\left\|1-\frac{1}{d}\right\|_{2}\mathcal{D}_{i,j}<\mathcal{D}_{i,j} \tag{5}\]
After neighborhood aggregation by a GNN, the distance between node representations decreases. Although different GNN models use different \(f\) in Equ. (2), GCN and many subsequent models use a fixed low-pass filter for graph convolution, leading to similar node representations. According to **Theorem 2**, when the model is too deep, this leads to the over-smoothing issue in GNNs. When using many GNN layers for learning, task performance declines significantly. To improve the ability of GNN models to utilize information at different frequencies, we propose a multi-scale graph learning framework. Specifically, the feature embedding of the \(l\)-th layer of the GCN model is defined as follows:
\[\mathbf{H}^{(l)}=\sigma(\mathbf{\hat{A}}\mathbf{H}^{(l-1)}\mathbf{W}^{(l)}), \tag{6}\]
where \(\mathbf{W}^{(l)}\) is a learnable parameter matrix and \(l\geq 1\), \(\mathbf{H}^{(0)}=\mathbf{X}\mathbf{W}^{(0)}\). \(\sigma(\cdot)\) is the activation function. \(\mathbf{H}^{(l)}\) represents the feature embedding obtained after \(l\) layers of graph convolution. \(\mathbf{H}^{(0)},\mathbf{H}^{(1)},\ldots,\mathbf{H}^{(K)}\) are feature embeddings obtained at different scales. Let \(\tilde{\mathbf{H}}^{(l)}\) denote the feature embedding obtained after neighborhood aggregation, \(\tilde{\mathbf{H}}^{(l)}=\mathbf{\hat{A}}\mathbf{H}^{(l)}\). We retain both the embeddings before and after feature propagation:
\[\mathbf{Z}^{(l)}=(\alpha^{(l)}-\beta^{(l)})\mathbf{H}^{(l)}+\beta^{(l)} \tilde{\mathbf{H}}^{(l)}. \tag{7}\]
The calculation of \(\alpha^{(l)}\) and \(\beta^{(l)}\) is detailed in Section III-B. \(\mathbf{P}\) aggregates the adaptive filters at \(K+1\) different scales, as shown in Equ. (8). The coefficients \(\mathbf{\Gamma}^{(0)},\mathbf{\Gamma}^{(1)},\ldots,\mathbf{\Gamma}^{(K)}\) are calculated through a scale-level attention mechanism; see Section III-B for details.
\[\mathbf{P}=\sum_{k=0}^{K}\mathbf{\Gamma}^{(k)}\cdot\mathbf{Z}^{(k)}. \tag{8}\]
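The sketch below illustrates one plausible forward pass through Equs. (6)-(8). The coefficients `alphas`, `betas`, and `Gammas` come from the attention mechanisms of Section III-B and are taken as given (scalar \(\alpha^{(l)},\beta^{(l)}\) and uniform \(\mathbf{\Gamma}^{(k)}\) here for simplicity); all names and shapes are assumptions.

```python
import torch

def multiscale_forward(A_hat, X, Ws, alphas, betas, Gammas):
    """Sketch of Equs. (6)-(8). A_hat: normalized adjacency (N x N);
    Ws: K+1 weight matrices; alphas/betas: per-scale coefficients from
    the node-level attention (Sec. III-B), scalars here for simplicity;
    Gammas: per-node scale weights (N x 1) from the scale-level attention."""
    H = X @ Ws[0]  # H^(0) = X W^(0)
    P = Gammas[0] * ((alphas[0] - betas[0]) * H + betas[0] * (A_hat @ H))
    for l in range(1, len(Ws)):
        H = torch.relu(A_hat @ H @ Ws[l])                        # Equ. (6)
        Z = (alphas[l] - betas[l]) * H + betas[l] * (A_hat @ H)  # Equ. (7)
        P = P + Gammas[l] * Z                                    # Equ. (8)
    return P

N, F, d, K = 5, 8, 4, 2
A_hat = torch.eye(N)  # placeholder normalized adjacency
X = torch.randn(N, F)
Ws = [torch.randn(F, d)] + [torch.randn(d, d) for _ in range(K)]
alphas, betas = [0.9] * (K + 1), [0.5] * (K + 1)
Gammas = [torch.full((N, 1), 1.0 / (K + 1)) for _ in range(K + 1)]
print(multiscale_forward(A_hat, X, Ws, alphas, betas, Gammas).shape)  # (5, 4)
```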
### _Signed-attention Mechanism_
**Node-level attention mechanism** In Equ. (7), \(\alpha^{(l)}\in(0,1]\) and \(\beta^{(l)}\in(-1,1)\). \(\alpha^{(l)}-\beta^{(l)}\) controls the proportion of preserved original embedded features, while \(\beta^{(l)}\) is the coefficient of the aggregated neighborhood features.
**Proposition 2.** The graph filter \(g\): \(\mathbf{Z}^{(K)}=\left(\alpha^{(K)}-\beta^{(K)}\right)\mathbf{H}^{(K)}+\beta^ {(K)}\tilde{\mathbf{H}}^{(K)}\) is an adaptive filter that can be adjusted to a low-pass or high-pass filter depending on the changes of \(\alpha^{(K)}\) and \(\beta^{(K)}\). The filter used in \(g\) is \(\alpha^{(K)}\mathbf{I}+\beta^{(K)}\mathbf{D}^{-1/2}\mathbf{A}\mathbf{D}^{-1/2}\), then:
\[\tilde{\mathcal{D}}_{i,j}\approx\left\|\left(\alpha^{(K)}\mathbf{x}_{i}+\frac{\beta^{(K)}\mathbf{x}_{j}}{d_{j}}\right)-\left(\alpha^{(K)}\mathbf{x}_{j}+\frac{\beta^{(K)}\mathbf{x}_{i}}{d_{i}}\right)\right\|_{2}\approx\alpha^{(K)}\left\|1-\frac{\beta^{(K)}}{d}\right\|_{2}\mathcal{D}_{i,j}\quad(\text{s.t. }d_{i}\approx d_{j}\approx d). \tag{9}\]
When \(\alpha^{(K)}\left\|1-\frac{\beta^{(K)}}{d}\right\|_{2}<1\), \(\tilde{\mathcal{D}}_{i,j}<\mathcal{D}_{i,j}\) and \(g\) is a low-pass filter. When \(\alpha^{(K)}\left\|1-\frac{\beta^{(K)}}{d}\right\|_{2}>1\), \(\tilde{\mathcal{D}}_{i,j}>\mathcal{D}_{i,j}\) and \(g\) becomes a high-pass filter. High-pass filtering makes the representations become discriminative. The proper design of \(\alpha^{(K)}\) and \(\beta^{(K)}\) requires knowing whether the information in the graph is high frequency or low frequency. However, we usually do not know the frequency distribution of the graph signal. Therefore, we propose a shared adaptive mechanism to calculate node-specific frequency coefficients \(\alpha_{i}^{(K)}\) and \(\beta_{i,j}^{(K)}\):
\[\alpha_{i}^{(K)}=\sigma((\mathbf{g}_{\alpha}^{(K)})^{T}\left[\tilde{\mathbf{h}}_{i}^{(K)}-\mathbf{h}_{i}^{(K)}\right]), \tag{10}\]
\[\beta_{i,j}^{(K)}=\sigma((\mathbf{g}_{\beta}^{(K)})^{T}\left[\mathbf{h}_{i}^{ (K)}\|\mathbf{h}_{j}^{(K)}\right]), \tag{11}\]
where \(\mathbf{g}_{\alpha}^{(K)}\) and \(\mathbf{g}_{\beta}^{(K)}\) are shared attention vectors; the more similar \(\tilde{\mathbf{h}}_{i}^{(K)}\) and \(\mathbf{h}_{i}^{(K)}\) are, the smaller \(\alpha_{i}^{(K)}\) tends to be.
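A sketch of the node-level coefficients in Equs. (10)-(11). Choosing sigmoid for \(\alpha\) and tanh for \(\beta\) matches the stated ranges \(\alpha\in(0,1]\) and \(\beta\in(-1,1)\), but the exact form of \(\sigma\) is an assumption, as are the tensor shapes.

```python
import torch

def node_level_coeffs(H, H_agg, edge_index, g_alpha, g_beta):
    """Sketch of Equs. (10)-(11). H, H_agg: (N, d) embeddings before/after
    aggregation; edge_index: (2, E) index pairs (i, j); g_alpha: (d,) and
    g_beta: (2d,) shared attention vectors."""
    # Equ. (10): per-node alpha, small when h_i and its aggregate agree.
    alpha = torch.sigmoid((H_agg - H) @ g_alpha)
    # Equ. (11): per-edge beta from the concatenation [h_i || h_j].
    src, dst = edge_index
    beta = torch.tanh(torch.cat([H[src], H[dst]], dim=1) @ g_beta)
    return alpha, beta

H, H_agg = torch.randn(4, 8), torch.randn(4, 8)
edge_index = torch.tensor([[0, 1, 2], [1, 2, 3]])
alpha, beta = node_level_coeffs(H, H_agg, edge_index,
                                torch.randn(8), torch.randn(16))
print(alpha.shape, beta.shape)  # torch.Size([4]) torch.Size([3])
```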
**Scale-level attention mechanism** Calculate the attention coefficients \((\mathbf{\Gamma}^{(0)},\mathbf{\Gamma}^{(1)},\ldots,\mathbf{\Gamma}^{(K)})\) of the multi-scale feature embeddings through a signed-attention mechanism:
\[(\mathbf{\Gamma}^{(0)},\mathbf{\Gamma}^{(1)},\ldots,\mathbf{\Gamma}^{(K)})= \mathrm{att}(\mathbf{Z}^{(0)},\mathbf{Z}^{(1)},\ldots,\mathbf{Z}^{(K)}) \tag{12}\]
where \(\mathbf{\Gamma}^{(k)}\in\mathbb{R}^{N\times 1}\) represents the attention value vector of embedding \(\mathbf{Z}^{(k)}\) for the \(N\) nodes, \(0\leq k\leq K\). For node \(v_{i}\), its feature embedding at the \((k+1)\)-th scale is \(\mathbf{z}_{i}^{(k)}\), which represents the \(i\)-th row of \(\mathbf{Z}^{(k)}\), \((\mathbf{Z}^{(k)})^{T}=(\mathbf{z}_{1}^{(k)},\mathbf{z}_{2}^{(k)},\ldots,\mathbf{z}_{N}^{(k)})\).
Fig. 3: Architecture of our proposed MSGS.
Fig. 2: Architecture of common GNNs.
The feature embedding is nonlinearly transformed and then attention values are obtained through a shared attention vector \(\mathbf{q}\):
\[\gamma_{k,i}=\mathbf{q}^{T}\cdot\tanh(\mathbf{W}_{k}\cdot(\mathbf{z}_{i}^{(k)})^ {T}). \tag{13}\]
\(\mathbf{\Gamma}^{(k)}=[\gamma_{k,i}]\), \(0<i\leq N\). Once all the coefficients are computed, we can obtain the final embedding \(\mathbf{P}\) according to Equ. (8). Then, we use the output embedding for semi-supervised node classification with a linear transformation and a softmax function:
\[\hat{\mathbf{Y}}_{i}=\mathrm{softmax}\left(\mathbf{W}\cdot\mathbf{P}_{i}+ \mathbf{b}\right), \tag{14}\]
where \(\mathbf{W}\) and \(\mathbf{b}\) are learnable parameters, and softmax is a normalizer across all classes. Suppose the training set is \(\mathrm{V}_{L}\); for each \(v_{n}\in\mathrm{V}_{L}\), the real label is \(\mathbf{y}_{n}\) and the predicted label is \(\mathbf{\tilde{y}}_{n}\). In this paper, we employ the cross-entropy loss to measure the supervised loss between the real and predicted labels. The loss function is as follows:
\[\mathcal{L}=-\sum_{v_{n}\in\mathrm{V}_{L}}\mathrm{loss}\left(\mathbf{y}_{n}, \mathbf{\tilde{y}}_{n}\right). \tag{15}\]
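A sketch of the scale-level attention of Equs. (12)-(13) together with the classifier and loss of Equs. (14)-(15). Whether the \(\gamma\) values are further normalized across scales is not specified above, so they are used raw here; all names and shapes are assumptions.

```python
import torch
import torch.nn.functional as F

def scale_level_attention(Zs, Wks, q):
    """Equs. (12)-(13): per-node attention over the K+1 scale embeddings.
    Zs: list of (N, d) embeddings; Wks: list of (d', d) matrices; q: (d',)."""
    return [(torch.tanh(Z @ Wk.T) @ q).unsqueeze(1)  # Gamma^(k), shape (N, 1)
            for Z, Wk in zip(Zs, Wks)]

def predict_and_loss(P, W, b, y, train_mask):
    """Equs. (14)-(15): softmax classifier with cross-entropy on labeled nodes."""
    logits = P @ W + b  # the softmax of Equ. (14) is folded into cross_entropy
    return F.cross_entropy(logits[train_mask], y[train_mask])
```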
## IV Theoretical Analysis
### _Spectral Analysis for GCN_
According to Equ. (4), the graph propagation of GCN can be formulated as follows:
\[\mathbf{H}_{GCN}=(2\mathbf{I}-\mathbf{L})^{K}\mathbf{X}, \tag{16}\]
where \(K\in\mathbb{Z}^{+}\) denotes the number of graph convolution layers. The graph filter can be formulated as \(g_{GCN}(\lambda)=(2-\lambda)^{K}\), \(\lambda\in[0,2]\), where \(\lambda=0\) corresponds to low-frequency information and \(\lambda=2\) to high-frequency information. The GCN neighborhood aggregation is formulated as:
\[\tilde{\mathbf{h}}_{i}^{(l)}=\mathbf{h}_{i}^{(l)}+\sum_{j\in\mathcal{N}_{i}} \frac{1}{\sqrt{d_{i}d_{j}}}\mathbf{h}_{j}^{(l)} \tag{17}\]
where \(d_{i}\) and \(d_{j}\) represent the degrees of nodes \(v_{i}\) and \(v_{j}\), respectively. The frequency responses of the first to fourth order GCN filters are shown in Fig. 4 (a)-(d). GCN amplifies low-frequency signals and restrains high-frequency signals. Essentially, the GCN filter is a fixed low-pass filter with a greater tendency to aggregate low-frequency information. As the number of GCN layers increases, the order of the filter increases, and the suppression of high-frequency information is enhanced. Therefore, deep GCN models can lead to over-smoothing.
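The following snippet numerically evaluates the GCN response \(g_{GCN}(\lambda)=(2-\lambda)^{K}\) on \([0,2]\), mirroring what Fig. 4 plots.

```python
import numpy as np

lam = np.linspace(0.0, 2.0, 5)  # graph frequencies in [0, 2]
for K in range(1, 5):
    print(K, np.round((2.0 - lam) ** K, 2))  # g_GCN(lambda) = (2 - lambda)^K
# The response at lambda = 2 is 0 for every K, and mid/high frequencies
# are suppressed more sharply as K (the number of layers) grows.
```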
### _Spectral Analysis for FAGCN_
In order to extract low-frequency and high-frequency information separately, FAGCN incorporates two convolution kernels \(\mathcal{F}_{L}\) and \(\mathcal{F}_{H}\) to extract low-frequency and high-frequency information respectively:
\[\mathcal{F}_{L}=\varepsilon\mathbf{I}+\mathbf{D}^{-1/2}\mathbf{A}\mathbf{D}^{ -1/2}=(\varepsilon+1)\mathbf{I}-\mathbf{L}, \tag{18}\]
\[\mathcal{F}_{H}=\varepsilon\mathbf{I}-\mathbf{D}^{-1/2}\mathbf{A}\mathbf{D}^{ -1/2}=(\varepsilon-1)\mathbf{I}+\mathbf{L}. \tag{19}\]
For a \(K\)-layer FAGCN model, its spectral filter is the combination of \(g_{FAGCN_{L}}(\lambda)\) and \(g_{FAGCN_{H}}(\lambda)\):
\[g_{FAGCN_{L}}(\lambda)=(1-\lambda+\epsilon)^{K}, \tag{20}\]
\[g_{FAGCN_{H}}(\lambda)=(\lambda-1+\epsilon)^{K}, \tag{21}\]
where \(\epsilon\in[0,1]\). \(g_{FAGCN_{L}}(\lambda)\) and \(g_{FAGCN_{H}}(\lambda)\) denote low-frequency and high-frequency filters respectively. Fig. 5 shows the frequency response of FAGCN\({}_{L}\) and FAGCN\({}_{H}\). FAGCN
Fig. 4: Relations between eigenvalues and amplitudes in filter of GCN.
Fig. 5: Relations between eigenvalues and amplitudes in low-frequency and high-frequency filter of FAGCN.
uses the attention mechanism to learn the coefficients for low-frequency and high-frequency graph signals.
\[\begin{split}\tilde{\mathbf{h}}_{i}^{(l)}&=\alpha_{ij}^ {L}\left(\mathcal{F}_{L}\cdot\mathbf{H}^{(l)}\right)_{i}+\alpha_{ij}^{H}\left( \mathcal{F}_{H}\cdot\mathbf{H}^{(l)}\right)_{i}\\ &=\varepsilon\mathbf{h}_{i}^{(l)}+\sum_{j\in\mathcal{N}_{i}}\frac {\alpha_{ij}^{L}-\alpha_{ij}^{H}}{\sqrt{d_{i}d_{j}}}\mathbf{h}_{j}^{(l)}.\end{split} \tag{22}\]
Let \(\alpha_{ij}^{G}=\alpha_{ij}^{L}-\alpha_{ij}^{H}\). The coefficient \(\alpha_{ij}^{G}\) is normalized by the tanh function, which ranges from -1 to 1, so FAGCN can adaptively learn low-frequency and high-frequency information. The filters in FAGCN are essentially linear combinations of \((1-\lambda+\epsilon)^{K}\) and \((\lambda-1+\epsilon)^{K}\). \(\epsilon\) actually applies a translation transformation to the frequency response. Due to the limited range of values for \(\epsilon\), the space in which the filter can be adjusted is limited.
### _Spectral Analysis for RFA-GNN_
RFA-GNN (frequency-adaptive graph neural network) is designed with a frequency-adaptive filter that includes a self-gating mechanism for adaptively selecting signals with different frequencies. RFA-GNN has a multi-hop relation-based frequency-adaptive architecture that considers both the graph properties of the data and high-order information between nodes. The convolution kernel of RFA-GNN is:
\[\mathcal{F}=\alpha\mathbf{I}+\beta\mathbf{D}^{-1/2}\mathbf{A}\mathbf{D}^{-1/ 2}=(\alpha+\beta)\mathbf{I}-\beta\mathbf{L}. \tag{23}\]
Its graph filter can be formulated as:
\[g_{RFA\text{-}GNN}(\lambda)=(\alpha+\beta-\beta\lambda)^{K}, \tag{24}\]
where \(\alpha\in(0,1]\) and \(\beta\in(-1,1)\). For the key parameter \(\beta\) in Equ. (24), a shared adaptive mechanism is used to learn the frequency coefficient \(\{\beta_{i,j}\}_{i,j=1}^{N}\) for each node pair. The RFA-GNN neighborhood aggregation is formulated as:
\[\tilde{\mathbf{h}}_{i}^{(l)}=\alpha\mathbf{h}_{i}^{(l)}+\sum_{j\in\mathcal{N }_{i}}\frac{\beta_{i,j}^{(l)}}{\sqrt{d_{i}d_{j}}}\mathbf{h}_{j}^{(l)}, \tag{25}\]
and the frequency response of RFA-GNN of order \(K\) can be written as:
\[(\alpha+\beta-\beta\lambda)^{K}=\beta^{K}\left(\frac{\alpha+\beta}{\beta}- \lambda\right)^{K}. \tag{26}\]
The range of \(\frac{\alpha+\beta}{\beta}\) is \((-\infty,+\infty)\). Fig. 6 shows the frequency response of RFA-GNN with different values of \(\alpha\) and \(\beta\). Although RFA-GNN extends FAGCN to more generalized cases, the frequency response of RFA-GNN is still a shifted transformation of \((-\lambda)^{K}\).
### _Spectral Analysis for MSGS_
The \(K\)-th order graph filter of MSGS can be formulated as follows:
\[\begin{split} g_{MSGS}(\lambda)&=\sum_{k=0}^{K}\gamma_{k}(\alpha^{(k)}+\beta^{(k)}-\beta^{(k)}\lambda)^{k}\\ &=\sum_{k=0}^{K}\gamma_{k}(\beta^{(k)})^{k}(\frac{\alpha^{(k)}+\beta^{(k)}}{\beta^{(k)}}-\lambda)^{k},\end{split} \tag{27}\]
where \(\alpha^{(k)}\in(0,1]\) and \(\beta^{(k)}\in(-1,1)\). The parameters \(\alpha^{(k)}\) and \(\beta^{(k)}\) of MSGS can be adjusted to utilize different frequencies from the \(K\)-hop neighborhood.
\[\mathbf{p}_{i}=\sum_{k=0}^{K}\gamma_{k,i}\mathbf{z}_{i}^{(k)}=\sum_{k=0}^{K} \gamma_{k,i}\left[\alpha_{i}^{(k)}\mathbf{h}_{i}^{(k)}+\sum_{j\in\mathcal{N}_{ i}}\frac{\beta_{i,j}^{(k)}}{\sqrt{d_{i}d_{j}}}\mathbf{h}_{j}^{(k)}\right]. \tag{28}\]
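The polynomial response of Equ. (27) can be evaluated directly; the coefficients below are illustrative assumptions chosen to show a high-pass behavior.

```python
import numpy as np

def msgs_response(lam, alphas, betas, gammas):
    """Equ. (27): g(lambda) = sum_k gamma_k (alpha_k + beta_k - beta_k*lambda)^k."""
    return sum(g * (a + b - b * lam) ** k
               for k, (a, b, g) in enumerate(zip(alphas, betas, gammas)))

lam = np.linspace(0.0, 2.0, 5)
# Illustrative coefficients: negative beta_k makes the response grow
# toward lambda = 2, i.e., the filter passes high frequencies.
print(np.round(msgs_response(lam, alphas=[1.0, 0.8, 0.6],
                             betas=[0.0, -0.9, -0.7],
                             gammas=[0.1, 0.5, 0.4]), 3))
```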
As shown in Fig. 8, the frequency response of \(K\)-layer GCN, FAGCN, and RFA-GNN only considers the \(K\)-th power
Fig. 6: Relations between eigenvalues and spectral amplitude for RFA-GNN.
Fig. 7: Relations between eigenvalues and spectral amplitude for MSGS.
of \(\lambda\), and the frequency response of graph filters is relatively fixed. Compared to the aforementioned methods, MSGS expands the frequency response to a \(K\)-order polynomial, allowing for more flexible adaptation of low and high-frequency information.
**Proposition 3** For a single MSGS graph filter \(g\), \(C*g\) can represent any \(K\)-th order polynomial, where \(C\) is any real number. This proposition highlights that the frequency response of \(K\)-layer MSGS can represent any \(K\)-th order polynomial, which expands the space of graph filters. As a result, the model can be more flexible in preserving or filtering out low-frequency and high-frequency information.
\[C*g_{MSGS}(\lambda)=\sum_{k=0}^{K}C\gamma_{k}(\beta^{(k)})^{k}(\frac{\alpha^{(k)}+\beta^{(k)}}{\beta^{(k)}}-\lambda)^{k} \tag{29}\]
Let \(c_{1,k}=C\gamma_{k}(-\beta^{(k)})^{k}\) and \(c_{2,k}=\frac{\alpha^{(k)}+\beta^{(k)}}{\beta^{(k)}}\); then
\[C*g_{MSGS}(\lambda) =\sum_{k=0}^{K}C\gamma_{k}(-\beta^{(k)})^{k}\left(\lambda-\frac{\alpha^{(k)}+\beta^{(k)}}{\beta^{(k)}}\right)^{k} \tag{30}\] \[=\sum_{k=0}^{K}c_{1,k}(\lambda-c_{2,k})^{k},\]
where \(c_{1,k},c_{2,k}\in(-\infty,+\infty)\); therefore, \(C*g\) can represent any \(K\)-th order polynomial expression. The frequency response of MSGS under different parameters is shown in Fig. 7. Compared with previous GNNs, MSGS has a larger variation space and can learn a more accurate frequency response. MSGS can adaptively utilize the information of the \(K\)-hop neighborhood of the target node. By learning the weights of the edges during adaptive neighborhood aggregation, positive weights are assigned to edges with low-frequency information to enhance the information through addition. In contrast, negative weights are assigned to those with high-frequency information for enhancement through subtraction. This approach strengthens the low-frequency information and enhances the high-frequency information in the graph.
## V Experiment setup
### _Dataset_
We evaluated MSGS and other bot detection models on three datasets: Cresci-15 [3], Twibot-20 [15], and MGTAB [16]. These datasets provide information on the follower and friend relationships between users. Cresci-15 is a dataset of 5,301 users labeled genuine or automated accounts. Twibot-20 is a dataset of 229,580 users and 227,979 edges, of which 11,826 accounts have been labeled genuine or automated. MGTAB is a dataset containing more than 1.5 million users and 130 million tweets. It provides information on seven types of relationships between these users and labels 10,199 accounts as either genuine or bots. We constructed user social graphs by using all labeled users and follower and friend relationships between them. For MGTAB, we used the top 20 user attribute features with the highest information gain and 768-dimensional user tweet features extracted by BERT as user features. For Twibot-20, following [4], we used 16 user attribute features, user description features, and user tweet features extracted by BERT. For Cresci-15, as described in [5], we used 6 user attribute features, 768-dimensional user description features extracted by BERT, and user tweet features. Table I provides a summary of the dataset statistics. We randomly partitioned all datasets using a 1:1:8 ratio.
### _Baseline Methods_
To verify the effectiveness of our proposed MSGS, we compare it with various semi-supervised learning baselines. The details of these baselines are described as follows:
* **Node2Vec**[17] is a weighted random walk algorithm that facilitates the creation of node vectors that satisfy both homophily and structural similarity assumptions.
* **APPNP**[18] combines GCN with PageRank to better propagate information from neighboring nodes, utilizing a large, adjustable neighborhood.
* **GCN**[8] is a spectral graph convolution method that generates node embedding vectors by truncating the Chebyshev polynomial to the first-order neighborhoods.
* **SGC**[19] is a simplified version of GCN that reduces excessive complexity by iteratively removing non-linearities between GCN layers and collapsing the resulting function into a single linear transformation.
* **GAT**[20] is a semi-supervised homogeneous graph model that employs the attention mechanism to determine the weights of node neighborhoods, thereby improving the performance of graph neural networks.
* **Boosting-GNN**[21] trains a series of GNN base classifiers by serializing them, and sets higher weights for training samples that are not correctly classified by previous classifiers, thus obtaining higher classification accuracy and better reliability.
* **LA-GCN**[22] improves the expressiveness of GNN by learning the conditional distribution of neighbor features to generate features.
Fig. 8: Frequency responses of \(K\)-layer GCN, FAGCN, and RFA-GNN.
* **JK-Nets**[23] is a kind of GNN that employs jump knowledge to obtain a more effective structure-aware representation by flexibly utilizing the distinct neighborhood ranges of each node.
* **MSGCN**[24] incorporates multi-scale information into the GCN design and fuses it with a self-attention mechanism. This enhances the neural network's expression ability and alleviates the over-smoothing phenomenon of GCNs.
* **FAGCN**[10] explored, for the first time, the role of low-frequency and high-frequency signals in GNNs. They then designed a novel frequency-adaptive GCN that combines low-frequency and high-frequency signals in an adaptive manner.
* **RFA-GNN**[11] designs a frequency-adaptive filter with a self-gating mechanism that picks signals with different frequencies adaptively, without knowing the heterophily levels.
* **AdaGNN**[12] is an adaptive frequency response filter that can learn to control information flow for different feature channels. It adjusts the importance of different frequency components for each input feature channel, which creates a learnable filter when multiple layers are stacked together.
### _Parameter Settings and Hardware Configuration_
All baseline methods have been initialized using the recommended parameters from their official codes and have undergone meticulous fine-tuning. Additionally, we conducted training for 500 epochs and selected the model with the highest validation accuracy for testing. Our model was trained using the Adam optimizer for 500 epochs. We experimented with learning rates in {0.001, 0.005, 0.01}. The number of layers, \(K\), was set to 10 for all datasets. An L2 weight decay factor of 5e-4 was applied across all datasets. The dropout rate ranged from 0 to 0.5. The number of hidden units was chosen from {16, 32, 64, 128}. We fine-tuned the remaining parameters until achieving optimal classification performance.
We implemented MSGS using PyTorch 1.8.0 and Python 3.7.10, along with PyTorch Geometric [25] for efficient sparse matrix multiplication. All experiments were executed on a server equipped with 9 Titan RTX GPUs, an Intel Xeon Silver 4210 CPU running at 2.20GHz, and 512GB of RAM. The operating system employed was Linux bcm 3.10.0.
### _Evaluation Metrics_
We employ both accuracy and F1-score to assess the overall performance of the classifier.
\[\text{Accuracy}=\frac{TP+TN}{TP+FP+FN+TN}, \tag{31}\]
\[\text{Precision}=\frac{TP}{TP+FP}, \tag{32}\]
\[\text{Recall}=\frac{TP}{TP+FN}, \tag{33}\]
\[\text{F1}=\frac{2\times\text{Precision}\times\text{Recall}}{\text{Precision}+ \text{Recall}}, \tag{34}\]
where \(TP\) is True Positive, \(TN\) is True Negative, \(FP\) is False Positive, \(FN\) is False Negative.
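A small helper computing Equs. (31)-(34) from confusion-matrix counts; the counts in the usage line are illustrative.

```python
def classification_metrics(tp, tn, fp, fn):
    """Equs. (31)-(34) from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, f1

print(classification_metrics(tp=80, tn=90, fp=10, fn=20))  # illustrative counts
```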
## VI Experiment results
In this section, we perform experiments on real-world social bot detection benchmarks to evaluate MSGS. We aim to answer the following questions:
* **Q1:** How does MSGS perform compared to the state-of-the-art baselines in different scenarios? (Section VI-A).
* **Q2:** How does MSGS perform under different training set partitions? (Section VI-B).
* **Q3:** How does each individual module contribute to the performance of MSGS? (Section VI-C).
* **Q4:** Can MSGS alleviate the over-smoothing phenomenon prevalent in GNNs? (Section VI-D).
* **Q5:** Can MSGS effectively use high and low-frequency information? What are the differences in using high-frequency and low-frequency information across different datasets? (Section VI-E).
* **Q6:** What are the frequency responses learned by MSGS on different datasets? (Section VI-F).
### _Evaluation on the Real-World Dataset_
In this section, we perform experimental analysis on publicly available social bot detection datasets, aimed at assessing the efficacy of our proposed method. The data was partitioned randomly into training, validation, and test sets, maintaining a ratio of 1:1:8. To ensure reliability and minimize the impact of randomness, we performed five evaluations of each method using different seeds. Our results are reported in Table II, illustrating the average performance of the baselines, as well as our proposed method, MSGS, and its various adaptations. Notably, MSGS consistently outperforms both the baselines and the alternative variants across all scenarios.
MSGS demonstrates significantly superior performance compared to GCN across all datasets. Specifically, MSGS exhibits improvements of 4.13%, 14.57%, and 1.40% on the MGTAB, Twibot-20, and Cresci-15 datasets, respectively, when compared to the baseline model GCN. Notably, detecting bots on the Cresci-15 dataset proves to be relatively facile, as most detection methods achieve over 95% accuracy. Consequently, there is limited scope for enhancement on this dataset. Furthermore, in comparison to the best results among state-of-the-art methods, our approach enhances accuracy by 1.51%, 1.94%, and 0.19% on the MGTAB, Twibot-20, and Cresci-15 datasets, respectively. These outcomes effectively demonstrate the efficacy of MSGS.
Regarding the multi-scale GNN, JK-Net incorporates skip connections between different layers, enabling the collection and aggregation of feature representations from diverse hierarchical levels to form the final feature representation. This approach retains more information compared to GCN. MSGCN, on the other hand, leverages information from multi-order neighborhoods, leading to respective improvements of
2.59%, 8.03%, and 0.91% on the MGTAB, Twibot-20, and Cresci-15 datasets compared to GCN. Notably, previous multi-scale GNNs such as MixHop are linear combinations of different-order GCNs; such linear combinations of fixed \(K\) low-pass filters do not effectively exploit high-frequency information.
Recently proposed methods such as FAGCN, RFA-GNN, and AdaGNN effectively utilize high-frequency information within the graph, exhibiting superior detection performance compared to previous GNN approaches. Our proposed MSGS, however, surpasses FAGCN, RFA-GNN, and AdaGNN in detection performance by flexibly adjusting frequency responses based on different datasets, thereby achieving the best results.
### _Different Training Set Partition_
To further evaluate the performance enhancement of our approach, we conducted a comprehensive comparison between MSGS and other GNNs across various training sets. Specifically, we fixed the validation set ratio at 0.1 and the test set ratio at 0.5. Varying the training set ratio from 0.1 to 0.4, the results are presented in Table III. Notably, MSGS surpasses the baseline models by a significant margin across all social bot detection datasets, regardless of the training set size. On the MGTAB, Twibot-20, and Cresci-15 datasets, MSGS achieves an average accuracy improvement of 5.55%, 1.24%, and 0.78% over the best-performing baseline, respectively.
### _Ablation Analysis_
In this section, we conduct a comparative analysis between MSGS and its three variants to assess the effectiveness of the designed modules. The following is a detailed description of these variations:
* **MSGS w/o MS** removes the multi-scale structure and solely utilizes the output from the final layer of the GNN model.
* **MSGS w/o SAM (N)** eliminates the node-level signed-attention mechanism, setting \(\alpha=1\) and \(\beta=0\).
* **MSGS w/o SAM (S)** excludes the scale-level signed-attention mechanism.
* **MSGS** incorporates all modules within the multi-scale graph learning framework.
The second half of Table II presents the performance of various variants, highlighting the roles of different modules within our proposed MSGS. Among all the variants, MSGS w/o SAM (N) exhibits the worst performance. This is because, without the node-level signed-attention mechanism, MSGS degenerates into a fixed low-pass filter, unable to effectively utilize high-frequency information. On the other hand, MSGS w/o MS removes the multi-scale structure, resulting in a significant decline in performance as it cannot leverage multi-scale representations. Conversely, MSGS w/o SAM (S), which excludes the scale-level signed-attention mechanism, demonstrates improved performance compared to MSGS w/o MS when able to utilize multi-scale features. MSGS w/o SAM (S), which averages the multi-scale features, is not as flexible as attention-based weighting. As a result, its performance is still inferior to MSGS.
### _Alleviating Over-Smoothing Problem_
To verify the ability of MSGS to alleviate the over-smoothing problem, we compared the performance of MSGS with the GCN, FAGCN, and RFA-GNN models at different depths. We varied the number of layers in the models over {2, 4, 6, 8, 10, 16, 32, 64}, and the results are shown in Fig. 9. GCN achieved the best performance at two layers, but its performance gradually decreased as the number of layers increased, demonstrating that a too-deep structure can cause severe over-smoothing in GCN models. FAGCN, RFA-GNN, and our proposed MSGS all achieved significantly higher
accuracy than GCN, especially when the models had a deeper layer configuration.
GAT added an attention mechanism to the neighborhood aggregation process of GCN, and performed better than GCN at different layer configurations; the over-smoothing problem can be slightly alleviated by the attention mechanism. FAGCN significantly outperformed GCN at different layer configurations, indicating that utilizing high-frequency information can alleviate the negative impact of over-smoothing on the model. Compared to FAGCN, the RFA-GNN model increases the range of graph filter adjustment and consistently outperformed FAGCN. Although both FAGCN and RFA-GNN can utilize high-frequency information to alleviate the over-smoothing problem, their detection accuracy slightly decreases when the model's depth is continuously increased. Our proposed MSGS, on the other hand, not only avoids over-smoothing as the number of layers increases but also improves classification performance.
### _Visualization of Edge Coefficients_
We visualize the coefficient \(\beta^{(k)}\), extracted from the last layer of MSGS to verify whether MSGS can learn different edge coefficients for different datasets. We categorize the edges in the social network graph into intra-class and inter-class based on the labels of the connected nodes. In terms of the spatial domain, low-frequency information in the graph originates from intra-class edges, while high-frequency information originates from inter-class edges.
In GCN, all edges are assigned positive weights, assuming that nodes share similar features with their normal neighbors. However, high-frequency information also plays an essential role in bot detection, and anomalous nodes may connect with normal nodes, forming inter-class edges. Aggregating the neighborhood through intra-class edges can enhance the original features of the nodes, while aggregation through inter-class edges may destroy them. Our proposed MSGS allows for adaptive learning of edge weights. As shown in Fig. 10, most inter-class edges have negative weights, while most intra-class edges have positive weights, which effectively utilizes high-frequency information. This allows MSGS to prioritize and leverage the important high-frequency components in the graph, enhancing its ability to capture fine-grained details and subtle patterns in the data. By incorporating this signed-attention mechanism, MSGS can effectively utilize both low-frequency and high-frequency information for social bot detection.
Fig. 9: The accuracy on MGTAB (a) and Twibot-20 (b) datasets with different layers.
### _Visualization of Graph Filters_
We have generated an approximate filter for MSGS on various datasets to gain a more profound understanding of our model. Fig. 11 illustrates that our approach can effectively learn appropriate filtering patterns from the data. In the cases of MGTAB and Twibot-20, MSGS pays attention to both low-frequency and high-frequency information. However, Twibot-20 exhibits more high-frequency information than MGTAB, resulting in stronger responses of the obtained graph filters in the high-frequency domain. Conversely, for Cresci-15, MSGS primarily focuses on utilizing low-frequency information for classification. Therefore, on Cresci-15, MSGS behaves similarly to previous low-pass-filtered GNNs. This explains why MSGS did not improve significantly on the Cresci-15 dataset.
## VII Related work
### _Social Bot Detection_
Social bot detection methods can be broadly categorized into feature-based and graph-based approaches. Feature-based methods [26, 27, 3, 28] rely on feature engineering to design or extract effective detection features and then employ machine learning classifiers for classification. Early research [27, 3, 29] utilized features such as the number of followers and friends and the number of tweets for detection. Subsequent work incorporated account posting content features to improve detection effectiveness further [28, 30, 31]. However, feature-based methods fail to leverage the interaction relationships between users.
Graph neural networks have recently been applied to social bot detection with promising results. Compared to feature-based methods, graph neural networks effectively utilize user interaction features, such as follow and friend relationships [16]. Graph neural network-based account detection methods [4, 5, 7] first construct a social relationship graph and then transform the problem of detecting bot accounts into a node classification problem. Feng et al. [4] constructed a social relationship graph using friend and follower relationships, extracted tweet features, description features, and identity field features of the accounts, and then performed node classification using RGCN. OS3-GNN [5] is a graph neural network framework that addresses the issue of class imbalance in social bot detection by generating minority class nodes in the feature space, thereby alleviating the imbalance between human and bot accounts. Shi et al. [6] proposed a graph ensemble learning method that combines random forest [32] with GNN for social bot detection.
### _Graph Neural Networks_
Graph Neural Networks are neural networks designed for processing graph data. Unlike traditional methods, GNNs enable information exchange and aggregation among nodes by defining message passing on nodes and edges. Compared to traditional graph embedding methods such as DeepWalk [33] and node2vec [17], GNNs have the capability to learn richer and more advanced node representations through multi-layer stacking and information propagation mechanisms. GNNs effectively capture relationships and global structures among
Fig. 11: MSGS’s equivalent graph filters on MGTAB (a), Twibot-20 (b) and Cresci-15 (c) datasets.
Fig. 10: Visualization of the mean frequency coefficients on MGTAB (a), Twibot-20 (b) and Cresci-15 (c) datasets.
nodes in graphs, making them suitable for various domains such as social network analysis, recommendation systems, and molecular graph analysis [8].
Inspired by graph spectral theory, a learnable graph convolution operation was introduced in the Fourier domain [34]. GCN [8] simplified the convolution operation using a linear filter, becoming the most prevalent approach. GAT [20] introduced an attention mechanism to weigh the feature sum of neighboring nodes based on GCN. APPNP [18] utilizes Personalized PageRank [35], constructing a low-pass filter with distinct concentration properties compared to GCN. Several algorithms [21, 22, 23, 24] have contributed to the improvement of GCN and enhanced the performance of GNNs.
Existing spectral GNNs primarily employ fixed filters for the convolution operation, which can lead to over-smoothing issues due to the lack of learnability [12]. Recently, the spectral analysis of GNNs has garnered significant interest for its valuable insights into the interpretability and expressive power of GNNs [12, 13]. FAGCN [10] attempted to demonstrate that most GNNs are restricted to low-pass filters and argued for the necessity of high-pass and band-pass filters. RFA-GNN [11] further extends the adjustment scope of FAGCN [10], enabling better utilization of high-frequency information. These models enhance the expressive capacity of GNNs and enable adaptive adjustments of the frequency response of graph filters. However, their adjustment space remains limited. In this regard, we propose MSGS, which further expands the frequency-domain adjustment space.
## VIII Conclusion
This paper introduces a novel social bot detection method called Multi-scale Graph Neural Network with Signed-Attention (MSGS). By incorporating multi-scale architecture and the signed attention mechanism, we construct an adaptive graph filter that can adjust the frequency response of the detection model based on different data, effectively utilizing both low-frequency and high-frequency information. Through the theoretical analysis from the frequency domain perspective, we have proved that MSGS expands the frequency domain adjustment space compared to existing graph filters. Moreover, MSGS addresses the over-smoothing problem commonly observed in existing GNN models. It exhibits exceptional performance, even in deep structures. Extensive experiments demonstrate that MSGS consistently outperforms state-of-the-art GNN baselines on social bot detection benchmark datasets.
## Acknowledgment
This work was supported by the National Key Research and Development Project of China (Grant No. 2020YFC1522002).
## Proofs of Theorems
**Proof of Theorem 1.** The Fourier transform of \(f\) can be expressed as: \(\mathcal{F}\{f\}(v)=\int_{\mathbb{R}}f(x)e^{-2\pi ix\cdot v}dx\). The inverse transformation can be expressed as: \(\mathcal{F}^{-1}\{f\}(x)=\int_{\mathbb{R}}f(v)e^{2\pi ix\cdot v}dv\). We define \(h\) to be the convolution of \(f\) and \(g\), then \(h(z)=\int_{\mathbb{R}}f(x)g(z-x)dx\). Taking the Fourier transform of \(h\), we get:
\[\begin{split}\mathcal{F}\{f*g\}(v)&=\mathcal{F}\{h\}(v)\\ &=\int_{\mathbb{R}}h(z)e^{-2\pi iz\cdot v}dz\\ &=\int_{\mathbb{R}}\int_{\mathbb{R}}f(x)g(z-x)e^{-2\pi iz\cdot v}dxdz\\ &=\int_{\mathbb{R}}f(x)\left(\int_{\mathbb{R}}g(z-x)e^{-2\pi iz\cdot v}dz\right)dx.\end{split} \tag{35}\]
We substitute \(y=z-x\) and \(dy=dz\) into Equ. (35):
\[\begin{split}\mathcal{F}\{f*g\}(v)&=\int_{\mathbb{R}}f(x)\left(\int_{\mathbb{R}}g(y)e^{-2\pi i(y+x)\cdot v}dy\right)dx\\ &=\int_{\mathbb{R}}f(x)e^{-2\pi ix\cdot v}\left(\int_{\mathbb{R}}g(y)e^{-2\pi iy\cdot v}dy\right)dx\\ &=\int_{\mathbb{R}}f(x)e^{-2\pi ix\cdot v}dx\int_{\mathbb{R}}g(y)e^{-2\pi iy\cdot v}dy\\ &=\mathcal{F}\{f\}(v)\cdot\mathcal{F}\{g\}(v)\end{split} \tag{36}\]
Taking the inverse Fourier transform of both sides of Equ. (36), we get: \(f*g=\mathcal{F}^{-1}\{\mathcal{F}\{f\}\cdot\mathcal{F}\{g\}\}\).
**Proof of Theorem 2.** For GCN, the symmetric Laplacian matrix is:
\[\mathbf{L}_{sym}=\mathbf{I}_{N}-\mathbf{D}^{-\frac{1}{2}}\mathbf{A}\mathbf{D}^ {-\frac{1}{2}}=\mathbf{U}\mathbf{\Lambda}\mathbf{U}^{T}=\sum_{i=1}^{N}\lambda _{i}\mathbf{u}_{i}\mathbf{u}_{i}^{T}, \tag{37}\]
where \(\lambda_{i}\) represents the \(i\)-th eigenvalue, \(1\leq i\leq N\) and \(0=\lambda_{1}\leq\lambda_{2}\leq...\leq\lambda_{N}\).
\[\mathbf{D}^{\frac{1}{2}}\mathbf{L}_{sym}\mathbf{D}^{\frac{1}{2}}\mathbf{1}=( \mathbf{D}-\mathbf{A})\mathbf{1}=\mathbf{0}, \tag{38}\]
where \(\mathbf{1}\) is the vector with all 1 elements, and multiply both sides by the inverse of \(\mathbf{D}^{\frac{1}{2}}\) to get \(\mathbf{L}_{sym}\mathbf{D}^{\frac{1}{2}}\mathbf{1}=\mathbf{0}\).
So \(\mathbf{L}_{sym}\) has an eigenvalue of 0 and the corresponding eigenvector \(\mathbf{D}^{\frac{1}{2}}\mathbf{1}\), and the largest eigenvalue of \(\mathbf{L}_{sym}\) is the upper bound of the Rayleigh quotient:
\[\lambda_{N}=\sup_{\mathbf{g}}\frac{\mathbf{g}^{T}\mathbf{L}_{sym}\mathbf{g}}{ \mathbf{g}^{T}\mathbf{g}}, \tag{39}\]
where \(\mathbf{g}\) is a nonzero vector. Let \(\mathbf{f}=\mathbf{D}^{-\frac{1}{2}}\mathbf{g}\); then we have
\[\begin{split}\frac{\mathbf{f}^{T}\mathbf{L}\mathbf{f}}{\left(\mathbf{D}^{\frac{1}{2}}\mathbf{f}\right)^{T}\left(\mathbf{D}^{\frac{1}{2}}\mathbf{f}\right)}&=\frac{\sum_{(u,v)\in E}\left(\mathbf{f}_{u}-\mathbf{f}_{v}\right)^{2}}{\sum_{v\in V}f_{v}^{2}d_{v}}\\ &\leq\frac{\sum_{(u,v)\in E}\left(2f_{u}^{2}+2f_{v}^{2}\right)}{\sum_{v\in V}f_{v}^{2}d_{v}}=2.\end{split} \tag{40}\]
The equality holds when the graph is bipartite. In practice, as long as the graph is not too small, it is almost never bipartite, so we do not discuss the bipartite case. Therefore, under the assumption that the graph is not bipartite, the maximum eigenvalue is less than 2. Since \(\mathbf{L}_{sym}\) and \(\tilde{\mathbf{L}}_{sym}\) are both symmetric normalized Laplacian matrices of a graph, differing only in that the graph corresponding to the latter has self-loops added, the eigenvalues of \(\tilde{\mathbf{L}}_{sym}\) are also in the range \([0,2)\).
Ignoring the activation function in Equ. (6), we get \(\mathbf{H}^{(l)}=\hat{\mathbf{A}}\mathbf{H}^{(l-1)}\mathbf{W}^{(l)}\). Since \(\hat{\mathbf{A}}=\mathbf{I}_{N}-\tilde{\mathbf{L}}_{sym}\),
\[\begin{split}\hat{\mathbf{A}}^{K}&=\left(\mathbf{I}_{N}-\tilde{\mathbf{L}}_{sym}\right)^{K}=\left(\mathbf{I}_{N}-\mathbf{U}\mathbf{\Lambda}\mathbf{U}^{T}\right)^{K}\\ &=\mathbf{U}\left(\mathbf{I}_{N}-\mathbf{\Lambda}\right)^{K}\mathbf{U}^{T}=\sum_{i=1}^{N}\left(1-\lambda_{i}\right)^{K}\mathbf{u}_{i}\mathbf{u}_{i}^{T}.\end{split} \tag{41}\]
According to the range of eigenvalues proved above, the convergence state of \(\hat{\mathbf{A}}^{K}\) can be obtained:
\[\lim_{K\rightarrow+\infty}\hat{\mathbf{A}}^{K}=\mathbf{u}_{1}\mathbf{u}_{1}^{ T},\quad\mathbf{u}_{1}=\frac{\mathbf{D}^{\frac{1}{2}}\mathbf{1}}{\sqrt{M+N}}, \tag{42}\]
where \(M\) and \(N\) represent the number of edges and nodes, respectively,
\[\lim_{K\rightarrow\infty}\hat{\mathbf{A}}^{K}\mathbf{x}=C\times\left[\begin{array}{c}\sqrt{d_{1}+1}\\ \sqrt{d_{2}+1}\\ \vdots\\ \sqrt{d_{N}+1}\end{array}\right] \tag{43}\]
where \(C\) is a constant, \(C=\frac{1}{M+N}\sum_{j=1}^{N}(\sqrt{d_{j}+1}x_{j})\). Therefore, when the number of layers \(K\) is large, the input graph signal is completely smoothed out: the only remaining information is the node degree, and the resulting signal is difficult to separate linearly in Euclidean space. This leads to over-smoothing. Since the filters of conventional GCN variants are mainly defined over \(\tilde{\mathbf{L}}_{sym}\) and satisfy the above condition at extremely deep layers, they often suffer from the over-smoothing problem.
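The convergence in Equs. (41)-(43) can be checked numerically: repeatedly applying \(\hat{\mathbf{A}}\) to a random signal collapses it onto a vector proportional to \(\sqrt{d_{i}+1}\). The toy graph below is an illustrative assumption.

```python
import numpy as np

# Toy graph with a triangle (non-bipartite); self-loops are added as in
# A_tilde = A + I, so A_hat = D~^{-1/2} A~ D~^{-1/2} = I - L~_sym.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
A_tilde = A + np.eye(4)
d_tilde = A_tilde.sum(axis=1)                 # d_i + 1
D_inv_sqrt = np.diag(1.0 / np.sqrt(d_tilde))
A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt

x = np.random.randn(4)
for _ in range(200):         # A_hat^K x for large K, as in Equ. (43)
    x = A_hat @ x
print(x / np.sqrt(d_tilde))  # ~constant entries: only degree information survives
```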
|
2302.11396 | KGTrust: Evaluating Trustworthiness of SIoT via Knowledge Enhanced Graph
Neural Networks | Social Internet of Things (SIoT) is a promising and emerging paradigm that
injects the notion of social networking into smart objects (i.e., things),
paving the way for the next generation of Internet of Things. However, due to
the risks and uncertainty, a crucial and urgent problem to be settled is
establishing reliable relationships within SIoT, that is, trust evaluation.
Graph neural networks for trust evaluation typically adopt a straightforward
way such as one-hot or node2vec to comprehend node characteristics, which
ignores the valuable semantic knowledge attached to nodes. Moreover, the
underlying structure of SIoT is usually complex, including both the
heterogeneous graph structure and pairwise trust relationships, which makes it
hard to preserve the properties of SIoT trust during information propagation.
To address these aforementioned problems, we propose a novel knowledge-enhanced
graph neural network (KGTrust) for better trust evaluation in SIoT.
Specifically, we first extract useful knowledge from users' comment behaviors
and external structured triples related to object descriptions, in order to
gain a deeper insight into the semantics of users and objects. Furthermore, we
introduce a discriminative convolutional layer that utilizes heterogeneous
graph structure, node semantics, and augmented trust relationships to learn
node embeddings from the perspective of a user as a trustor or a trustee,
effectively capturing multi-aspect properties of SIoT trust during information
propagation. Finally, a trust prediction layer is developed to estimate the
trust relationships between pairwise nodes. Extensive experiments on three
public datasets illustrate the superior performance of KGTrust over
state-of-the-art methods. | Zhizhi Yu, Di Jin, Cuiying Huo, Zhiqiang Wang, Xiulong Liu, Heng Qi, Jia Wu, Lingfei Wu | 2023-02-22T14:24:45Z | http://arxiv.org/abs/2302.11396v1 | # KGTrust: Evaluating Trustworthiness of SIoT via Knowledge Enhanced Graph Neural Networks
###### Abstract.
Social Internet of Things (SIoT) is a promising and emerging paradigm that injects the notion of social networking into smart objects (i.e., things), paving the way for the next generation of Internet of Things. However, due to the risks and uncertainty, a crucial and urgent problem to be settled is establishing reliable relationships within SIoT, that is, trust evaluation. Graph neural networks for trust evaluation typically adopt a straightforward way such as one-hot or node2vec to comprehend node characteristics, which ignores the valuable semantic knowledge attached to nodes. Moreover, the underlying structure of SIoT is usually complex, including both the heterogeneous graph structure and pairwise trust relationships, which makes it hard to preserve the properties of SIoT trust during information propagation. To address these aforementioned problems, we propose a novel knowledge-enhanced graph neural network (KGTrust) for better trust evaluation in SIoT. Specifically, we first extract useful knowledge from users' comment behaviors and external structured triples related to object descriptions, in order to gain a deeper insight into the semantics of users and objects. Furthermore, we introduce a discriminative convolutional layer that utilizes heterogeneous graph structure, node semantics, and augmented trust relationships to learn node embeddings from the perspective of a user as a trustor or a trustee, effectively capturing multi-aspect properties of SIoT trust during information propagation. Finally, a trust prediction layer is developed to estimate the trust relationships between pairwise nodes. Extensive experiments on three public datasets illustrate the superior performance of KGTrust over state-of-the-art methods.
Graph Neural Networks, Trust Evaluation, Social Internet of Things
Existing GNN-based trust evaluation methods typically comprehend node characteristics in straightforward ways such as one-hot or node2vec. More importantly, the observed trust relationships in the real world are often very sparse (Koren et al., 2017), which makes it difficult for these methods to fully model the multi-aspect properties of SIoT trust during information propagation, thus negatively influencing the prediction of trust relationships.
So an interesting yet important question is how to effectively design a GNN-based method for more accurate trust evaluation within SIoT. Particularly, two challenges need to be addressed. First, the rich node semantics within SIoT should be taken into consideration. As nodes in SIoT are generally associated with textual information such as comments or descriptions, it is crucial to deeply mine and encode these useful data to essentially embody the inherent characteristics of nodes. Second, multi-aspect properties of SIoT trust should be effectively preserved. The underlying structure of SIoT is usually complex, containing not only the heterogeneous graph structure but also the pairwise trust relationships. Therefore, a reliable GNN-based method ought to allow for flexibly preserving the properties of SIoT trust (including its asymmetric, propagative, and composable nature) as information propagates along the heterogeneous graph structure, especially when the observed trust connections are very sparse.
In light of the aforementioned challenges, we propose KGTrust, a knowledge enhanced graph neural network model for trust evaluation in SIoT. Specifically, we first design an embedding layer to fully model the semantics of users and objects by extracting useful and relevant knowledge from users' comment behaviors and external structured triples, respectively. We then introduce a personalized PageRank-based neighbor sampling strategy to augment the trust structure, alleviating the sparsity of user-specific trust relationships. After that, we employ a discriminative convolutional mechanism to learn node embeddings from the perspective of a user as a trustor or a trustee, and adaptively integrate them with a learnable gating mechanism. In this way, multi-aspect properties of SIoT trust, including asymmetric, propagative and composable nature, can be effectively preserved. Finally, the learned embeddings for pairwise users are concatenated and fed into a prediction layer to estimate their trust relationships.
We summarize our main contributions as follows:
* To the best of our knowledge, we are the first to gain a deeper insight into the trust evaluation within SIoT via jointly considering three key ingredients, that is, heterogeneous graph structure, node semantics and associated trust relationships.
* We present a novel knowledge enhanced graph neural network, named KGTrust, which innovatively mines and models the intrinsic characteristics of users and objects with the guidance of external knowledge and multi-aspect trust properties, for assessing trustworthiness in SIoT.
* Extensive experiments across three public datasets demonstrate the superior performance of the new approach KGTrust over state-of-the-art baselines.
## 2. Preliminaries
We first give the notations and problem definition, then introduce properties of SIoT trust, and finally discuss graph neural networks as the base of our proposed KGTrust.
### Notations and Problem Definition
**Definition 1. Social Trust Internet of Things.** A Social Trust Internet of Things, defined as \(G=(V,E,\mathcal{A},\mathcal{R},\psi,\varphi)\), is a form of heterogeneous directed network, where \(V=\{v_{1},\ldots,v_{n}\}\) and \(E=\{e_{ij}\}\subseteq V\times V\) represent the sets of nodes and edges, respectively. It is also associated with a node type mapping function \(\psi:V\rightarrow\mathcal{A}\) and an edge type mapping function \(\varphi:E\rightarrow\mathcal{R}\), where \(\mathcal{A}\in\{\text{user, object}\}\) and \(\mathcal{R}\in\{\langle\text{user, user}\rangle,\langle\text{user, object}\rangle,\langle\text{object, user}\rangle,\langle\text{object, object}\rangle\}\) denote the sets of node and edge types. All edges formulate an original adjacency matrix \(\mathbf{A}=(a_{ij})_{n\times n}\), where \(a_{ij}\) denotes the relation between nodes \(v_{i}\) and \(v_{j}\). Notice that the edges representing the trust relationships between user nodes are asymmetric, while the others are symmetric.
**Definition 2. Trust Evaluation.** Given a Social Trust Internet of Things \(G\), let \(T=\{((v_{i},v_{j}),t_{ij})\,|\,e_{ij}\in E\}\) be the set of observed trust relationships between user nodes, where nodes \(v_{i}\) and \(v_{j}\) denote the trustor and trustee, respectively, and \(t_{ij}\) measures the trustworthiness from node \(v_{i}\) to node \(v_{j}\), which is typically application specific. For example, in the SIGCOMM-2009 dataset1, trustworthiness is simply divided into two types, that is, trust or distrust. Trust evaluation is to design a mapping \(\mathcal{F}\) that estimates the unobserved/missing trustworthiness \(\tilde{t}_{ij}\) of a trustor-trustee pair, where \(v_{i},v_{j}\in V\), \(v_{i}\neq v_{j}\), and \(e_{ij}\notin E\). Frequently used notations are summarized in Appendix A.
Footnote 1: [https://crawdad.org/thlab/sigcomm2009](https://crawdad.org/thlab/sigcomm2009)
### Properties of SIoT Trust
To establish the trustworthiness between pairwise user nodes within SIoT, multi-aspect trust properties, including the asymmetric, propagative and composable nature, should be considered.
**Asymmetric Nature.** The trust relationship between nodes is unequal, that is, node \(v_{i}\) trusting node \(v_{j}\) does not mean that node \(v_{j}\) trusts node \(v_{i}\), as shown in Figure 1(a). Formally, let \(t_{ij}\) be the trustworthiness of the trustor-trustee pair \(\langle v_{i},v_{j}\rangle\); the asymmetric nature of trust is expressed as:
\[t_{ij}\neq t_{ji}. \tag{1}\]
**Propagative Nature.** It indicates that trust may be propagated from one node to another, creating a trust chain for two nodes that are not explicitly connected. As shown in Figure 1(b), assuming that node \(v_{i}\) trusts node \(v_{j}\) and node \(v_{j}\) trusts node \(v_{k}\), it can then
Figure 1. The illustration of properties of SIoT trust.
be inferred that node \(v_{i}\) may trust node \(v_{k}\) to a certain extent. The propagative nature of trust is defined as:
\[t_{ij}\wedge t_{jk}\Rightarrow t_{ik}. \tag{2}\]
**Composable Nature.** It refers to the fact that the trustworthiness propagated to a non-neighbor node along different trust chains can be aggregated. For example, in Figure 1(c), there are two trust chains from node \(v_{i}\) to node \(v_{j}\), that is, \(v_{i}\to v_{k}\to v_{j}\) and \(v_{i}\to v_{m}\to v_{n}\to v_{j}\). As a result, it is necessary to aggregate the trustworthiness from these two trust chains to determine the trust relationship from node \(v_{i}\) to node \(v_{j}\).
### Graph Neural Networks
Graph neural networks (GNNs) are a kind of neural networks that directly operate on graph-structured data (Garf et al., 2017; He et al., 2017). They typically follow the message passing framework, which learns embedding of each node through iteratively propagating and aggregating feature information from its topological neighbors. Mathematically, let \(\mathbf{h}_{i}^{(l)}\) be the latent embedding of node \(v_{i}\) at the \(l\)-th layer, the message passing process is defined as:
\[\begin{split}\mathbf{m}_{i}^{(l)}&=\text{AGG}^{(l)}(\{\mathbf{h}_{j}^{(l-1)}:v_{j}\in\mathcal{N}(v_{i})\}),\\ \mathbf{h}_{i}^{(l)}&=\text{UPD}^{(l)}(\mathbf{h}_{i}^{(l-1)},\mathbf{m}_{i}^{(l)}),\end{split} \tag{3}\]
where AGG and UPD denote the functions to aggregate and update the message, \(\mathbf{h}_{i}^{(0)}\) represents the node's attributes, and \(\mathcal{N}(v_{i})\) denotes the set of neighbors of node \(v_{i}\).
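To make the abstract AGG/UPD notation concrete, the following is a minimal NumPy sketch of one message-passing layer, assuming mean aggregation and a ReLU update; the function and weight names are illustrative rather than taken from any specific library.

```python
import numpy as np

def message_passing_layer(H, neighbors, W_self, W_msg):
    """One step of Eq. (3): AGG = mean over neighbor embeddings,
    UPD = linear map of the node's own state plus the message, then ReLU."""
    out = np.zeros((H.shape[0], W_self.shape[1]))
    for i, nbrs in enumerate(neighbors):
        # AGG: average the previous-layer embeddings of the neighbors
        m_i = H[nbrs].mean(axis=0) if len(nbrs) > 0 else np.zeros(H.shape[1])
        # UPD: combine own state and message, apply nonlinearity
        out[i] = np.maximum(0.0, H[i] @ W_self + m_i @ W_msg)
    return out

# toy usage: 4 nodes with 8-dimensional embeddings
rng = np.random.default_rng(0)
H0 = rng.normal(size=(4, 8))
neighbors = [[1, 2], [0], [0, 3], [2]]
H1 = message_passing_layer(H0, neighbors,
                           rng.normal(size=(8, 8)), rng.normal(size=(8, 8)))
```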
## 3. Methodology
We first give a brief overview of the proposed method, and then introduce three key components in detail.
### Overview
To effectively assess the potential trust relationships within SIoT, we propose a novel knowledge enhanced graph neural network that can fully mine the inherent characteristics of nodes and model the multi-aspect properties of SIoT trust, namely KGTrust. The whole structure of KGTrust is illustrated in Figure 2, and it consists of three main components, that is, the embedding layer, the heterogeneous convolutional layer, and the predictor layer. Specifically, for the embedding layer, we initialize the embeddings of users and objects by extracting useful and related knowledge from users' comment behaviors and external structured triples, so as to fully explore the intrinsic characteristics of users and objects. For the heterogeneous convolutional layer, we utilize the propagative nature of SIoT trust to augment the trust structure, and then introduce a discriminative convolutional mechanism to learn user and object embeddings by considering the role of a user as a trustor or a trustee, respectively. After that, we leverage a learnable gating mechanism to adaptively integrate these two types of user embeddings, capturing the asymmetric nature of SIoT trust. For the predictor layer, a multilayer perceptron is introduced to predict the trust relationships between user pairs based on their embeddings.
Figure 2. The architecture of KGTrust, which consists of three key components: 1) Embedding Layer: comprehensive user and object modeling by integrating user comments and external knowledge triples; 2) Heterogeneous Convolutional Layer: a knowledge enhanced graph neural network to further mine and learn node latent embeddings; and 3) Prediction Layer: measuring the trust relationships between user pairs.
### Embedding Layer
To gain a deeper insight into the inherent characteristics of users and objects within SIoT, we initialize user and object embeddings, which are the cornerstone of GNNs, by taking account of users' comment behaviors and external knowledge related to object descriptions, respectively.
**User Embedding.** In SIoT, users typically deliver their opinions on objects by providing comments in the form of text, which reflects the characteristics of users to a certain extent. Based on this, for each user, we employ Doc2vec (Deng et al., 2019), an unsupervised algorithm that learns fixed-length embeddings for texts of variable length, to initialize its embedding. Specifically, for a user node \(v_{i}\), let \(d_{i}\) be the set containing all the comments delivered by \(v_{i}\); the user embedding \(\mathbf{h}_{i}\) can then be calculated as:
\[\mathbf{h}_{i}=\text{Doc2vec}(d_{i}). \tag{4}\]
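As an illustration, a Doc2vec-based user embedding along the lines of Eq. (4) could be computed with the gensim library roughly as follows; the `user_comments` dictionary and the hyperparameter values are hypothetical placeholders, not settings reported by the paper.

```python
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# hypothetical input: all comments written by each user, keyed by user id
user_comments = {"u1": "great camera easy to set up",
                 "u2": "sensor drops connection too often"}

# one TaggedDocument per user, tagged by the user id
corpus = [TaggedDocument(words=text.split(), tags=[uid])
          for uid, text in user_comments.items()]
model = Doc2Vec(corpus, vector_size=64, min_count=1, epochs=40)

# Eq. (4): one fixed-length embedding h_i per user
h = {uid: model.dv[uid] for uid in user_comments}
```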
**Object Embedding.** For object nodes, we capture their characteristics by integrating structured knowledge associated with object descriptions (i.e., head-predicate-tail triplets) from the knowledge graph. Here we employ TransE (Deng et al., 2019), a simple and effective approach, to parameterize triplets to learn object embeddings. It encodes the head (or tail) node as a low-dimensional embedding and the relation as algebraic operations between head and tail embeddings. Given a triplet \((h,r,t)\), let \(\mathbf{r}\) be the embedding of relation \(r\), \(\mathbf{h}\) and \(\mathbf{t}\) be the embeddings of objects \(h\) and \(t\), respectively. TransE aims to embed each object and relation by optimizing the translation principle \(\mathbf{h}+\mathbf{r}\approx\mathbf{t}\), if \((h,r,t)\) holds. The score function is formulated as:
\[f(h,r,t)=-||\mathbf{h}+\mathbf{r}-\mathbf{t}||_{2}^{2}, \tag{5}\]
where \(\mathbf{h}\) and \(\mathbf{t}\) are subject to the normalization constraint that the magnitude of each vector is 1. Intuitively, a large score \(f(h,r,t)\) indicates that the triplet is more likely to be a true fact in the real world, and vice versa. Note that we only consider triplets where object nodes within the SIoT appear as head rather than tail. In this way, the object embeddings can be effectively enriched at a semantic level.
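For concreteness, the TransE score of Eq. (5) can be sketched as below, together with the margin-based ranking loss commonly used to train TransE; the loss is the standard objective from the TransE literature and is an assumption here, since the paper only states the score function.

```python
import numpy as np

def transe_score(h, r, t):
    """Eq. (5): f(h, r, t) = -||h + r - t||_2^2; higher means more plausible."""
    return -np.sum((h + r - t) ** 2)

def margin_ranking_loss(pos, neg, margin=1.0):
    """Push an observed triplet's score above that of a corrupted
    (negative) triplet by at least `margin`."""
    return max(0.0, margin - transe_score(*pos) + transe_score(*neg))

# entity vectors are kept on the unit sphere, per the normalization constraint
h, r, t = np.random.randn(3, 50)
h, t = h / np.linalg.norm(h), t / np.linalg.norm(t)
print(transe_score(h, r, t))
```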
**Embedding Transformation.** Considering that the generated user and object embeddings may have unequal dimensions, or even lie in different embedding spaces, we need to project these two types of embeddings into the same embedding space. For a node \(v_{i}\) with type \(\psi_{i}\), we project its embedding into the shared latent space using a type-specific linear transformation \(\mathbf{W}_{\psi_{i}}\):
\[\mathbf{h}_{i}^{\prime}=\mathbf{W}_{\psi_{i}}\cdot\mathbf{h}_{i}, \tag{6}\]
where \(\mathbf{h}_{i}\) and \(\mathbf{h}_{i}^{\prime}\) are the original and projected embedding of node \(v_{i}\), respectively.
### Heterogeneous Convolutional Layer
After initializing the user and object embeddings, we further design a heterogeneous convolutional layer, which takes the multi-aspect trust properties into consideration, so as to better assess trustworthiness among users in SIoT. It mainly consists of three modules: personalized PageRank (PPR)-based neighbor sampling, information propagation, and information fusion.
**PPR-Based Neighbor Sampling.** Generally, the available user-specified trust relationships within SIoT are often very sparse, that is, a limited number of user pairs with trust relationships are buried in a large proportion of user pairs without trust relationships, making trust evaluation an arduous task (Kang et al., 2019). To this end, we employ personalized PageRank, which has shown effectiveness in graph neural networks (Kipf and Welling, 2017), to augment the trust structure.
Personalized PageRank (Papnik et al., 2017) adopts a random walk with restart strategy that uses the propagative nature of SIoT trust to calculate the correlation between nodes. It takes the graph structure as input and computes a ranking score \(p_{ij}\) from source node \(v_{i}\) to target node \(v_{j}\), where the larger \(p_{ij}\), the more similar the two nodes. Formally, given a SIoT \(G=(V,E)\), let \(\tilde{\mathbf{A}}=\hat{\mathbf{D}}^{-\frac{1}{2}}\hat{\mathbf{A}}\hat{\mathbf{D}}^{-\frac{1}{2}}\) be the normalized adjacency matrix, where \(\hat{\mathbf{A}}=\mathbf{A}+\mathbf{I}\) stands for the adjacency matrix with self-loops and \(\hat{\mathbf{D}}\) is its degree matrix; the PPR matrix \(\mathbf{P}\) is calculated as:
\[\mathbf{P}=(1-\lambda)\tilde{\mathbf{A}}\mathbf{P}+\lambda\mathbf{I}, \tag{7}\]
where \(\lambda\) is the reset probability. It is worth noting that we use a push iteration method to compute PPR scores according to the existing work (Beng et al., 2019), which can be approximated effectively even for very large networks (Kipf and Welling, 2017).
Then, the augmented trust relationships of each user node \(v_{i}\) can be constructed by choosing its top \(k\) PPR neighbors:
\[N_{i}=\operatorname*{arg\,max}_{V^{*}\subseteq V_{U},\,|V^{*}|=k}\;\sum_{v_{j}\in V^{*}}p_{ij}, \tag{8}\]
where \(V_{U}\) represents the set of user nodes in SIoT. In this way, several long-range but informative trust relationships can be captured, which further promotes the modeling of user nodes.
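A minimal sketch of this sampling step is given below, assuming a dense adjacency matrix and plain power iteration in place of the push-based approximation cited in the paper; the function names are illustrative.

```python
import numpy as np

def ppr_row(A, i, lam=0.15, iters=100):
    """Power iteration for the i-th row of the PPR matrix in Eq. (7):
    p <- (1 - lam) * A_tilde @ p + lam * e_i, with A_tilde the
    symmetrically normalized adjacency including self-loops."""
    n = A.shape[0]
    deg = A.sum(axis=1) + 1.0                      # degrees with self-loops
    A_tilde = (A + np.eye(n)) / np.sqrt(np.outer(deg, deg))
    e_i = np.eye(n)[i]
    p = e_i.copy()
    for _ in range(iters):
        p = (1 - lam) * A_tilde @ p + lam * e_i
    return p

def sample_trust_neighbors(A, i, user_ids, k=10):
    """Eq. (8): keep the k user nodes with the largest PPR score w.r.t. node i."""
    p = ppr_row(A, i)
    candidates = [j for j in user_ids if j != i]
    return sorted(candidates, key=lambda j: -p[j])[:k]
```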
**Information Propagation.** Due to the asymmetry of trust relationships in SIoT, each user may play dual roles as a trustor and a trustee. For this reason, we propagate node embeddings under both the trustor role and the trustee role, so as to extract a role-specific embedding for each.
Specifically, from the perspective of trustor, we learn the embeddings of users and objects through users' augmented outgoing trust relationships, objects' connections, and interactions between user-object pairs. Mathematically, given a target node \(v_{i}\), as different types of neighbor nodes (user or object) may have different impacts on it, we employ type-level attention (Kipf and Welling, 2017) to learn the importance of different types of neighbor nodes. Let \(\hat{\mathbf{A}}_{O}=[\hat{a}_{ij}]\) be the normalized adjacency matrix which is related to the trustor role, \(\mathbf{h}_{\psi}\) be the embedding of type \(\psi\), which is defined as the sum of the neighbor node embedding \(\mathbf{h}_{j}^{\prime}\) with node \(v_{j}\in\mathcal{N}_{i}\) under type \(\psi\), that is:
\[\mathbf{h}_{\psi}=\sum_{\psi_{j}}\hat{a}_{ij}\mathbf{h}_{j}^{\prime}. \tag{9}\]
Based on the target node embedding \(\mathbf{h}_{i}^{\prime}\) and its corresponding type embedding \(\mathbf{h}_{\psi}\), the type-level attention weights can then be calculated as:
\[\alpha_{\psi}=\text{softmax}_{\psi}(\sigma(\eta_{\psi}^{T}[\mathbf{h}_{i}^{\prime},\mathbf{h}_{\psi}])), \tag{10}\]
where \(\eta_{\psi}\) is the attention vector for the type \(\psi\), and softmax is adopted to normalize across all the types.
In addition, considering that different neighbor nodes of the same type could also have different importance, we further apply node-level attention (Kipf and Welling, 2017) to learn the weights between nodes of the same type. Formally, given a target node \(v_{i}\) with type \(\psi\), let \(v_{j}\) be its neighbor node with type \(\psi^{\prime}\), the node-level attention weights
can then be computed as:
\[\beta_{ij}=\text{softmax}_{v_{j}}(\sigma(\boldsymbol{\gamma}^{T}\cdot\alpha_{\psi^{\prime}}[\mathbf{h}_{i}^{\prime},\mathbf{h}_{j}^{\prime}])), \tag{11}\]
where \(\boldsymbol{\gamma}\) is the attention vector, and softmax is applied to normalize across all the neighbor nodes of the target node \(v_{i}\).
By integrating the above process, the matrix form of the layer-wise propagation rule can be defined as follows:
\[\mathbf{H}^{(l)}=\sigma\Big(\sum_{\psi\in\mathcal{A}}\alpha_{\psi}\,\mathbf{B}_{\psi}\cdot\mathbf{H}_{\psi}^{(l-1)}\cdot\mathbf{W}_{\psi}^{(l-1)}\Big), \tag{12}\]
where \(\mathcal{A}\) is the set of node types in SIoT, and \(\mathbf{B}_{\psi}=(\beta_{ij})_{m\times n}\) represents the attention matrix. In this way, the specific information about the trustor role can be obtained.
As for the trustee role, we learn the node embeddings via users' augmented incoming trust relationships, objects' connections, and interactions between user-object pairs, which can be calculated in the same way as in trustor role. Therefore, the specific information about trustee role can be captured by generating the node embeddings \(\mathbf{\overline{H}}\).
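Putting the pieces together, the matrix-form propagation of Eq. (12) reduces to a type-wise weighted sum. The sketch below assumes the type-level weights of Eq. (10) and the node-level attention matrices of Eq. (11) have already been computed, and uses ReLU for \(\sigma\).

```python
import numpy as np

def hetero_layer(H_by_type, B_by_type, W_by_type, alpha_by_type):
    """Matrix form of Eq. (12): sum attention-weighted messages over the
    node types in A = {user, object}, then apply the nonlinearity sigma."""
    out = sum(alpha_by_type[t] * B_by_type[t] @ H_by_type[t] @ W_by_type[t]
              for t in H_by_type)
    return np.maximum(0.0, out)  # sigma chosen as ReLU in this sketch
```

Running the same layer once over the outgoing (trustor) adjacency and once over the incoming (trustee) adjacency yields the two role-specific embedding matrices that the gating mechanism below fuses.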
**Information Fusion.** In order to achieve the optimal combination of embeddings of a user in different roles (trustee or trustor) for downstream trust evaluation, we introduce a learnable gating mechanism (Wang et al., 2017) to determine how much the joint embedding depends upon the role of trustor or trustee. Given a user node \(v_{i}\), let \(\mathbf{h}_{i}\) and \(\mathbf{\overline{h}}_{i}\) represent its embeddings as trustor role and trustee role, respectively, the joint representation \(\mathbf{z}_{i}\) can be calculated as:
\[\mathbf{z}_{i}=\mathbf{g}_{e}\odot\mathbf{h}_{i}+(1-\mathbf{g}_{e})\odot \mathbf{\overline{h}}_{i}, \tag{13}\]
where \(\mathbf{g}_{e}\) is a gating vector with elements in \([0,1]\) to balance embeddings, and \(\odot\) represents element-wise multiplication. Obviously, the joint embedding with gate closer to \(0\) tends to use the embedding of a user as a trustee; whereas the joint embedding with gate closer to \(1\) utilizes the embedding of a user as a trustor. More importantly, to constrain the value of each element in \([0,1]\), we apply sigmoid function to calculate the gate \(\mathbf{g}_{e}\) as:
\[\mathbf{g}_{e}=\text{sigmoid}(\mathbf{\tilde{g}}_{e}), \tag{14}\]
where \(\mathbf{\tilde{g}}_{e}\) is a real-valued vector that is learned during training.
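A direct transcription of Eqs. (13)-(14) in NumPy might look as follows; `g_raw` plays the role of the learnable vector \(\mathbf{\tilde{g}}_{e}\).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fuse_roles(h_trustor, h_trustee, g_raw):
    """Eqs. (13)-(14): element-wise gate between the two role embeddings;
    gate values near 1 favor the trustor role, near 0 the trustee role."""
    g = sigmoid(g_raw)
    return g * h_trustor + (1.0 - g) * h_trustee
```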
### Predictor Layer
To convert the learned user embeddings into the latent factor of a trust relationship in SIoT, for a given user pair \(\langle v_{i},v_{j}\rangle\), we first concatenate the embeddings of nodes \(v_{i}\) and \(v_{j}\), and then feed them to a multilayer perceptron (MLP) followed by a softmax function as:
\[\tilde{y}_{ij}=\text{softmax}(\text{MLP}(\mathbf{z}_{i}\parallel\mathbf{z}_ {j})), \tag{15}\]
where \(\parallel\) is the concatenation operator, and \(\tilde{y}_{ij}\) is the predicted probability that the user pair \(\langle v_{i},v_{j}\rangle\) forms a trusted rather than a distrusted pair.
Finally, we define the trust evaluation loss function by using cross entropy as:
\[\mathcal{L}=-\sum_{\langle v_{i},v_{j}\rangle\in T}y_{ij}\ln\tilde{y}_{ij}, \tag{16}\]
where \(y_{ij}\) denotes the ground truth of trust relationship of user pair \(\langle v_{i},v_{j}\rangle\). In particular, we employ the back propagation algorithm and Adam optimizer to train the model.
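A minimal PyTorch sketch of the predictor and loss of Eqs. (15)-(16) is shown below; the single linear layer and the batch shapes are illustrative assumptions, and `cross_entropy` folds the softmax of Eq. (15) into the loss computation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TrustPredictor(nn.Module):
    """Eq. (15): concatenate the pair embeddings z_i || z_j and map them
    to two logits (trust / distrust)."""
    def __init__(self, dim):
        super().__init__()
        self.mlp = nn.Linear(2 * dim, 2)

    def forward(self, z_i, z_j):
        return self.mlp(torch.cat([z_i, z_j], dim=-1))

predictor = TrustPredictor(dim=64)
logits = predictor(torch.randn(8, 64), torch.randn(8, 64))
labels = torch.randint(0, 2, (8,))
loss = F.cross_entropy(logits, labels)  # softmax + Eq. (16) in one call
loss.backward()
```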
## 4. Experiments
We first introduce the experimental setup, and then compare the new approach KGTrust with state-of-the-art methods in terms of effectiveness and robustness. We finally present an in-depth analysis of the different components of KGTrust and give the parameter analysis.
### Experimental Setup
**Datasets.** We conduct experiments on three widely used SIoT datasets, namely FilmTrust2, Ciao3 and Epinions3, whose basic information is summarized in Table 1. More details of the datasets are provided in Appendix B.1.
Footnote 2: [http://www.librec.net/datasets.html](http://www.librec.net/datasets.html)
Footnote 3: [http://www.cse.msu.edu/~tanggill/trust.html](http://www.cse.msu.edu/~tanggill/trust.html)
**Baselines.** We compare KGTrust with eight state-of-the-art methods. They include: 1) the network embedding methods GAT (Kip
specific, in terms of accuracy, the improvement of KGTrust over different baselines ranges from 2.08% to 26.77%, 0.39% to 22.39%, and 0.57% to 23.01% on FilmTrust, Ciao, and Epinions, respectively. In terms of F1-Score, the improvement of KGTrust over different baselines ranges from 1.14% to 16.29%, 0.80% to 7.78%, and 0.73% to 17.04% on these three datasets. These results not only demonstrate the superiority of enriching node semantics with node-related knowledge, but also validate the effectiveness of flexibly preserving the multi-aspect properties of SIoT trust during information propagation. Particularly, the performance of KGTrust is much better
\begin{table}
\begin{tabular}{c|c|c c c c c c c c|c} \hline Datasets & Metrics & GAT & SGC & SLF & STNE & SNEA & DeepTrust & AtNE-Trust & Guardian & KGTrust \\ \hline \multirow{4}{*}{FilmTrust} & Accuracy & 68.29 & 75.61 & 65.55 & 72.87 & 63.91 & 53.05 & 63.11 & 77.74 & **79.82** \\ \cline{2-13} & F1-Score & 71.74 & 77.14 & 65.65 & 73.27 & 66.67 & 64.63 & 65.13 & 79.78 & **80.92** \\ \hline \multirow{4}{*}{Ciao} & Accuracy & 64.28 & 69.93 & 72.17 & 71.33 & 68.97 & 50.17 & 68.23 & 72.17 & **72.56** \\ \cline{2-13} & F1-Score & 71.36 & 70.34 & 73.39 & 71.38 & 70.83 & 66.52 & 71.50 & 73.50 & **74.30** \\ \hline \multirow{4}{*}{ Epinions} & Accuracy & 72.05 & 78.62 & 80.83 & 79.51 & 74.63 & 58.38 & 74.35 & 80.82 & **81.39** \\ \cline{2-13} & F1-Score & 75.57 & 78.76 & 80.95 & 78.57 & 74.92 & 64.80 & 74.88 & 81.11 & **81.84** \\ \hline \end{tabular}
\end{table}
Table 2. Performance comparisons on three SIoT datasets in terms of Accuracy (%) and F1-Score (%). (bold: best)
\begin{table}
\begin{tabular}{c|c|c|c c c c c c c c|c} \hline Datasets & Metrics & Training & GAT & SGC & SLF & STNE & SNEA & DeepTrust & AtNE-Trust & Guardian & KGTrust \\ \hline \multirow{4}{*}{FilmTrust} & \multirow{4}{*}{Accuracy} & 50\% & 60.36 & 71.26 & 54.96 & 69.42 & 60.14 & 49.51 & 60.17 & 74.14 & **74.94** \\ & & 60\% & 62.79 & 72.21 & 55.51 & 69.98 & 61.01 & 50.08 & 60.72 & 74.81 & **76.11** \\ & & 70\% & 64.39 & 73.16 & 61.22 & 72.14 & 62.90 & 50.20 & 62.14 & 75.51 & **78.16** \\ & & 80\% & 67.28 & 74.01 & 63.61 & 72.78 & 63.30 & 51.68 & 63.00 & 76.45 & **79.66** \\ & & 90\% & 68.29 & 75.61 & 65.55 & 72.87 & 63.91 & 53.05 & 63.11 & 77.74 & **79.82** \\ \cline{2-13} & \multirow{4}{*}{F1-Score (\%)} & 50\% & 62.35 & 71.52 & 56.48 & 69.44 & 62.45 & 60.11 & 60.27 & 75.52 & **75.98** \\ & & 60\% & 63.51 & 72.40 & 57.37 & 70.17 & 62.68 & 60.38 & 61.69 & 76.52 & **76.73** \\ & & 70\% & 66.80 & 73.83 & 62.23 & 72.17 & 64.83 & 62.20 & 63.42 & 78.56 & **78.94** \\ & & 80\% & 68.34 & 74.40 & 65.00 & 72.33 & 65.12 & 63.41 & 63.88 & 79.08 & **80.47** \\ & & 90\% & 71.74 & 77.14 & 65.65 & 73.27 & 66.67 & 64.63 & 65.13 & 79.78 & **80.92** \\ \hline \multirow{4}{*}{Ciao} & \multirow{4}{*}{Accuracy} & 50\% & 59.76 & 67.40 & 71.32 & 70.69 & 66.88 & 49.80 & 62.24 & 71.27 & **71.72** \\ & & 60\% & 61.03 & 68.29 & 71.66 & 70.87 & 67.82 & 50.01 & 62.66 & 71.62 & **72.11** \\ \cline{1-1} & & 70\% & 62.17 & 68.39 & 71.89 & 70.92 & 68.15 & 50.03 & 63.52 & 71.90 & **72.34** \\ \cline{1-1} & & 80\% & 63.01 & 68.81 & 72.08 & 71.05 & 68.53 & 50.07 & 66.58 & 71.94 & **72.36** \\ \cline{1-1} & & 90\% & 64.28 & 69.93 & 72.17 & 71.33 & 68.97 & 50.17 & 68.23 & 72.17 & **72.56** \\ \cline{1-1} \cline{2-13} & \multirow{4}{*}{F1-Score (\%)} & 50\% & 66.47 & 67.53 & 71.87 & 70.83 & 67.68 & 61.30 & 62.76 & 71.84 & **72.85** \\ \cline{1-1} & & 60\% & 68.08 & 68.58 & 72.68 & 70.85 & 68.87 & 61.38 & 63.03 & 72.28 & **73.11** \\ \cline{1-1} & & 70\% & 70.61 & 68.78 & 72.88 & 71.07 & 69.45 & 61.77 & 65.37 & 72.67 & **73.23** \\ \cline{1-1} & & 80\% & 70.85 & 69.76 & 73.00 & 71.32 & 70.15 & 63.63 & 69.92 & 73.32 & **74.06** \\ \cline{1-1} & & 90\% & 71.36 & 70.34 & 73.39 & 71.38 & 70.83 & 66.52 & 71.50 & 73.50 & **74.30** \\ \hline \multirow{4}{*}{ Epinions} & \multirow{4}{*}{Accuracy} & 50\% & 61.70 & 77.22 & 79.99 & 79.04 & 73.84 & 55.53 & 71.90 & 80.15 & **80.59** \\ & & 60\% & 61.92 & 77.57 & 80.05 & 79.13 & 74.12 & 56.25 & 73.01 & 80.22 & **80.65** \\ \cline{1-1} & & 70\% & 64.76 & 77.82 & 80.44 & 79.32 & 74.36 & 56.71 & 73.40 & 80.31 & **80.96** \\ \cline{1-1} & & 80\% & 70.79 & 78.17 & 80.60 & 79.45 & 74.59 & 58.23 & 73.59 & 80.55 & **81.14** \\ \cline{1-1} & & 90\% & 72.05 & 78.62 & 80.83 & 79.51 & 74.63 & 58.38 & 74.35 & 80.82 & **81.39** \\ \cline{1-1} \cline{2-13} & \multirow{4}{*}{F1-Score (\%)} & 50\% & 65.60 & 77.63 & 80.08 & 78.18 & 73.28 & 61.27 & 72.87 & 80.41 & **81.05** \\ \cline{1-1} & & 60\% & 66.64 & 77.92 & 80.15 & 78.22 & 73.73 & 63.93 & 73.74 & 80.51 & **81.11** \\ \cline
than that of vanilla GAT (i.e., 11.53%, 8.28%, 9.34% relative improvements in accuracy, and 9.18%, 2.94%, 6.27% relative improvements in F1-Score), which further shows the significance of jointly considering three key ingredients within SIoT, namely heterogeneous graph structure, node semantics and associated trust relationships. Neither DeepTrust nor AtNE-Trust is particularly competitive here, mainly because they fail to exploit information propagation over the graph structure, which seriously limits their performance for trust evaluation.
**Robustness.** To further measure the robustness of KGTrust and the baselines, we conduct experiments across different training and testing set ratios. The ratio of the training set is set as \(x\%\) and the remaining \((100-x)\%\) is used as the testing set, where \(x\in\{50,60,70,80,90\}\), over the three datasets. We run each method 10 times and report the average performance in terms of accuracy and F1-Score.
The results are shown in Table 3. As shown, the proposed method KGTrust always performs the best across different training ratios and datasets. Specifically, when fewer observed trust relationships are provided, the performance of the baselines drops markedly, especially for classical GNN-based methods such as GAT, while our model still achieves relatively high performance. This demonstrates that our method can better assess trust relationships by effectively alleviating data sparsity with personalized PageRank-based neighbor sampling. Moreover, as the ratio of observed trust relationships increases, KGTrust consistently maintains superior performance and achieves its best result when the training set ratio is 90%, which validates the effectiveness and robustness of the proposed approach. Also of note, KGTrust outperforms Guardian, which also uses GNNs for trust evaluation, in all cases, further indicating the rationality of fully mining the intrinsic characteristics of nodes with the guidance of node-related knowledge.
### Ablation Study
Similar to most deep learning models, our proposed KGTrust contains several important components that may have a significant impact on performance. To test the effectiveness of each component, we conduct experiments comparing KGTrust with five variants: 1) KGTrust using random vectors instead of structured triples to initialize object embeddings, named w/o Triples; 2) KGTrust without PPR-based neighbor sampling, named w/o PPR; 3) KGTrust removing the trustee role of a node and aggregating information only from its trustor role, named w/o Trustee; 4) KGTrust removing the trustor role of a node and aggregating information only from its trustee role, named w/o Trustor; and 5) KGTrust employing a concatenation operator instead of the gating mechanism to fuse the two role embeddings (trustor or trustee) of a user node, named KGTrust (Con).
From the results in Table 4, we can draw the following conclusions: 1) The results of KGTrust are consistently better than those of its five variants, indicating the effectiveness and necessity of jointly taking into account heterogeneous graph structure, node semantics and associated trust relationships within SIoT. 2) Removing the structured triples or the PPR-based neighbor sampling strategy leads to a slight performance drop, which demonstrates the usefulness of employing external knowledge to deeply mine object semantics and of using the propagative nature of trust to enrich the trust relationships. 3) Neither KGTrust w/o Trustee nor KGTrust w/o Trustor is particularly competitive here, underlining the importance of considering the asymmetric nature of trust during information propagation. 4) Compared to KGTrust (Con), the improvement brought by KGTrust is more significant, which illustrates the rationality of adaptively fusing the embeddings of the dual roles of a user.
### Parameter Analysis
We investigate the sensitivity of two main parameters, including the top \(k\) for PPR-based neighbor sampling and the dimension of final embeddings, on Ciao and Epinions datasets. Results on FilmTrust dataset can be found in Appendix C.
**Analysis of \(k\).** The parameter \(k\) determines the number of trust relationships augmented by each user node with PPR-based neighbor sampling. We vary its value from 10 to 50 and the corresponding results are shown in Figure 3. With the increase of augmented trust relationships, the performance shows a trend of first rising and then descending. This is probably because a small number of augmented trust relationships are not enough to obtain informative node embeddings, whereas too many augmented trust relationships may introduce noise and thus weaken the information propagation.
**Analysis of Final Embedding Dimension.** We test the effect of the dimension of the final embedding, and vary it from 16 to 256. The result is shown in Figure 4. With the increase of the dimension of the final embeddings, the values of the metrics, including accuracy and F1-Score, increase first and then start to decrease. This is reasonable since KGTrust needs a suitable dimension to encode the rich information
\begin{table}
\begin{tabular}{l|c c|c c|c c} \hline \hline \multirow{2}{*}{Datasets} & \multicolumn{2}{c|}{FilmTrust} & \multicolumn{2}{c|}{Ciao} & \multicolumn{2}{c}{Epinions} \\ \cline{2-7} & Accuracy & F1-Score & Accuracy & F1-Score & Accuracy & F1-Score \\ \hline KGTrust & **79.82** & **80.92** & **72.56** & **74.30** & **81.39** & **81.84** \\ - w/o Triples & - & - & 71.10 & 72.48 & 80.51 & 80.86 \\ - w/o PPR & 78.29 & 78.74 & 72.12 & 72.88 & 80.71 & 81.19 \\ - w/o Trustee & 78.13 & 79.18 & 59.07 & 64.60 & 70.73 & 72.10 \\ - w/o Trustor & 77.22 & 78.37 & 60.58 & 65.67 & 70.62 & 72.01 \\ KGTrust (Con) & 76.76 & 77.51 & 59.28 & 64.74 & 70.75 & 73.10 \\ \hline \hline \end{tabular}
\end{table}
Table 4. Comparisons of our KGTrust and its five variants on three SIoT datasets in terms of Accuracy (%) and F1-Score (%). For FilmTrust, we do not introduce structured triples as no object descriptions are provided, while such information is provided by Ciao and Epinions.
Figure 3. The performance with different numbers of top \(k\) for PPR-based neighbor sampling.
within SIoT, including heterogeneous graph structure, node semantics and associated trust relationships, while larger dimensions may introduce additional redundancies, affecting the performance of assessing trustworthiness.
## 5. Related Work
We briefly review the literature related to our work, namely trust evaluation and knowledge graph neural networks.
### Trust Evaluation
Trust plays a crucial role in assisting users to gather reliable information, and trust evaluation, i.e., predicting the unobserved pairwise trust relationships among users, draws considerable research interest. Existing trust evaluation methods can be roughly divided into three categories: random walk-based methods, matrix factorization-based methods and deep learning-based methods.
**Random Walk-Based Methods.** In the past decade, much attention has been paid to exploiting trust propagation along the path from the trustor to the trustee to assess trustworthiness. For example, ModelTrust (Zhou et al., 2018) conducts motivating experiments to analyze the difference in accuracy and coverage between local and global trust measures, and then introduces a local trust metric to predict the trustworthiness of unknown users. OptimalTrust (Zhou et al., 2018) designs a new concept, namely quality of trust, to search for the optimal trust path and generate the most trustworthy evaluation. After that, AssessTrust (Zhou et al., 2018) introduces three-valued subjective logic (3VSL), which effectively considers the information in trust propagation and aggregation to assess multi-hop interpersonal trust. OpinionWalk (Zhou et al., 2018) computes the trustworthiness between any two users based on 3VSL and a breadth-first search strategy.
**Matrix Factorization-Based Methods.** Several recent studies have extended matrix factorization to trust evaluation (Zhou et al., 2018), where the basic idea is to learn the low-rank representations of users and their correlations by incorporating prior knowledge and node-related data. For instance, hTrust (Zhou et al., 2018) presents an unsupervised method that exploits low-rank matrix factorization and homophily effect for trust evaluation. mTrust (Zhou et al., 2018) argues that the trust relationships among users are usually multiple and heterogeneous, and accordingly designs a fine-grained representation to incorporate multi-faceted trust relationships. sTrust (Zhou et al., 2018) proposes a trust prediction model by considering social status theory that reflects users' position or level in the social community.
**Neural Network-Based Methods.** As neural networks have become the most eye-catching tools for tackling graphs, several efforts have been devoted to utilizing them to boost the performance of trust evaluation. For example, NeuralWalk (Zhou et al., 2018) designs a neural network architecture that models single-hop trust propagation and trust combination for assessing trust relationships. DeepTrust (Zhou et al., 2018) presents a deep trust evaluation model that effectively mines user characteristics by introducing associated user attributes. AtNE-Trust (Zhou et al., 2018) improves the performance of trust relationship prediction by jointly capturing the properties of the trust network and multi-view user attributes. C-DeepTrust (Zhou et al., 2018) points out that user preference may change due to drifts of their interests, and accordingly integrates both static and dynamic user preference to tackle the trust evaluation problem. Guardian (Guardian, 2018) estimates the trustworthiness between any two users by simultaneously capturing both social connections and trust relationships. GATrust (Zhou et al., 2018) presents a GNN-driven approach that integrates multiple node attributes and the observed trust interactions for trust evaluation.
Although various trust evaluation algorithms and models have been developed, they still suffer from an inability to comprehensively mine and encode the node semantics within SIoT and the multi-aspect properties of SIoT trust.
### Knowledge Graph Neural Networks
To enable more effective learning on graph-structured data, researchers have dedicated efforts to employing external knowledge (Zhou et al., 2018; Zhou et al., 2018), such as the large-scale knowledge bases Wikipedia and Freebase, to enrich node representations and apply them to downstream tasks. For example, KGAT (Kog et al., 2017) presents a knowledge-aware recommendation approach by explicitly modeling the high-order connectivity with semantic relations in a collaborative knowledge graph. COMPGCN (Zhou et al., 2018) learns both node and relation embeddings in a multi-relational graph by using entity-relation composition operations from knowledge graph embedding. Caps-GNN (Kang et al., 2018) designs a novel personalized review generation approach with structural knowledge graph data and capsule graph neural networks. Later on, RECON (Zhou et al., 2018) points out that knowledge graphs can provide valuable additional signals for short sentences, and develops a sentence relation extraction approach integrating external knowledge. KCGN (Kang et al., 2018) proposes an end-to-end model that jointly injects knowledge-aware user- and item-wise dependent structures for social recommendation. However, how to utilize the guidance of external knowledge to facilitate the understanding of node semantics within SIoT, and further assess trustworthiness among users, remains an area that urgently needs to be explored.
## 6. Conclusion
In this paper, we present a novel knowledge enhanced graph neural network, namely KGTrust, for trust evaluation in the Social Internet of Things. Specifically, we comprehensively incorporate the rich node semantics within SIoT by deeply mining and encoding node-related information. Considering that the observed trust relationships are often relatively sparse, we use a personalized PageRank-based neighbor sampling strategy to enrich the trust structure. To further maintain the multi-aspect properties of SIoT trust, we learn effective node embeddings by employing a discriminative convolutional mechanism that accounts for the propagative and composable nature of trust from the perspective of a user as a trustor or a trustee, respectively.
Figure 4. The performance with different dimensions of final embedding.
After that, a learnable gating mechanism is introduced to adaptively integrate the information from the dual roles of a user. Finally, the learned embeddings for pairwise users are concatenated and fed into a trust relationship predictor. Extensive experimental results demonstrate the superior performance of the proposed approach over state-of-the-art methods across three benchmark datasets.
|
2301.09923 | Lee-Yang theory of quantum phase transitions with neural network quantum
states | Predicting the phase diagram of interacting quantum many-body systems is a
central problem in condensed matter physics and related fields. A variety of
quantum many-body systems, ranging from unconventional superconductors to spin
liquids, exhibit complex competing phases whose theoretical description has
been the focus of intense efforts. Here, we show that neural network quantum
states can be combined with a Lee-Yang theory of quantum phase transitions to
predict the critical points of strongly-correlated spin lattices. Specifically,
we implement our approach for quantum phase transitions in the transverse-field
Ising model on different lattice geometries in one, two, and three dimensions.
We show that the Lee-Yang theory combined with neural network quantum states
yields predictions of the critical field, which are consistent with large-scale
quantum many-body methods. As such, our results provide a starting point for
determining the phase diagram of more complex quantum many-body systems,
including frustrated Heisenberg and Hubbard models. | Pascal M. Vecsei, Christian Flindt, Jose L. Lado | 2023-01-24T11:10:37Z | http://arxiv.org/abs/2301.09923v2 | # Lee-Yang theory of quantum phase transitions with neural network quantum states
###### Abstract
Predicting the phase diagram of interacting quantum many-body systems is a central problem in condensed matter physics and related fields. A variety of quantum many-body systems, ranging from unconventional superconductors to spin liquids, exhibit complex competing phases whose theoretical description has been the focus of intense efforts. Here, we show that neural network quantum states can be combined with a Lee-Yang theory of quantum phase transitions to predict the critical points of strongly-correlated spin lattices. Specifically, we implement our approach for quantum phase transitions in the transverse-field Ising model on different lattice geometries in one, two, and three dimensions. We show that the Lee-Yang theory combined with neural network quantum states yields predictions of the critical field, which are consistent with large-scale quantum many-body methods. As such, our results provide a starting point for determining the phase diagram of more complex quantum many-body systems, including frustrated Heisenberg and Hubbard models.
## I Introduction
Solving a generic family of quantum many-body problems and ultimately predicting their phase diagram is a challenging task [1; 2]. The exponential growth of the Hilbert space with the system size, especially for high dimensional systems, makes most realistic models intractable in practice. Some problems, such as the transverse-field Ising model in one dimension, can be solved analytically [3]. However, more generally, obtaining the phase diagram of an interacting quantum many-body system is a critical open problem. To this end, several numerical tools have been developed, including Monte Carlo simulations [4] and tensor-network algorithms [5]. Nevertheless, despite considerable progress, the phase diagrams of many quantum systems in two and three dimensions remain unknown [6; 7].
Neural network quantum states are a recently developed class of variational states [8] that have shown great potential for parametrizing and finding the ground state of interacting quantum many-body systems [9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26]. Neural network quantum states represent the wave function of a quantum many-body system as a neural network. Specifically, the neural network is a parametrized function that takes the configuration of a many-body system as the input and outputs the corresponding amplitude and phase of the wave function. By optimizing the parameters of the neural network, so that the energy is minimized, an accurate approximation of the ground state can be found. Neural network quantum states exploit the fact that neural networks can faithfully represent many complex functions [27], including a variety of quantum many-body wave functions. They have already been applied to find the wave functions of several spin models [9; 10; 11; 12; 13; 14; 28], including the \(J_{1}-J_{2}\) Heisenberg model [15; 16; 17; 18; 19; 20; 21]. Moreover, their use has been extended to fermionic [22; 29] and bosonic [30; 31; 32] systems, as well as to molecules [23; 22] and nuclei [24; 25; 26].
In the context of critical behavior, a rigorous foundation of phase transitions was established by Lee and Yang, who considered the zeros of the partition function in the complex plane of the control parameters, for example an external magnetic field or the inverse temperature [33; 34; 35; 36]. This approach relies on the fact that for systems of finite size, the partition function zeros are all complex. However, if a system exhibits a phase transition, the zeros will approach the critical value on the real
Figure 1: Neural network approach to quantum phase transitions. (a) Cubic Ising lattice of interacting spins in a transverse magnetic field, here a system of size \(3\times 3\times 3\). (b) A neural network takes a configuration of the spins, encoded in the vector \(\vec{\sigma}=(\sigma_{1},...,\sigma_{N})\), and outputs the corresponding value of the wave function, \(\psi_{\vec{\theta}}(\vec{\sigma})=\langle\vec{\sigma}|\psi_{\vec{\theta}}\rangle\), which depends on the variational parameters in \(\vec{\theta}\). (c) From the fluctuations of the magnetization, we extract the zeros of the moment generating function of the magnetization and investigate their motion in the complex plane as we increase the system size. (d) Above the critical field, \(h>h_{c}\), the zeros remain complex in the thermodynamic limit, and the system is in the paramagnetic phase (PM). At \(h=h_{c}\), the zeros reach the real-axis, signaling a quantum phase transition. For \(h<h_{c}\), the system is in the ferromagnetic phase (FM) with finite magnetization.
axis in the thermodynamic limit of large system sizes, giving rise to a non-analytic behavior of the free energy density [37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48]. Lee-Yang zeros are not just a theoretical concept, but they can also be determined experimentally [49; 50; 51; 52; 53]. In recent years, applications of Lee-Yang theory have been expanded to dynamical quantum phase transitions in quantum many-body systems after a quench [54; 55; 56] and to quantum phase transitions in systems at zero temperature [57; 58].
Here, we combine neural network quantum states with a Lee-Yang theory of quantum phase transitions to predict the critical behavior of interacting spin lattices in one, two, and three dimensions. As illustrated in Fig. 1(a), we consider the transverse-field Ising model in different dimensions and lattice geometries. We then find the ground state of the system as well as the fluctuations of the magnetization using neural network quantum states, Fig. 1(b). From these fluctuations, we determine the complex zeros of the moment generating function of the magnetization and follow their motion as the system size is increased. As illustrated in Fig. 1(c), the zeros remain complex in the thermodynamic limit in case there is no phase transition. On the other hand, if the magnetic field is tuned to its critical value, the zeros of the moment generating function will reach the real axis, signaling a phase transition. Thus, by investigating the positions of the zeros for different magnetic fields, we can map out the phase diagram of the system, Fig. 1(d).
Our manuscript is organized as follows: In Sec. II, we describe the methods that we use throughout this work. In particular, we introduce the transverse-field Ising model, we discuss our calculations of the magnetization cumulants in the ground state using neural network quantum states, and we provide the details of the Lee-Yang theory that we use to predict the critical magnetic field for a given lattice geometry. In Sec. III, we present the results of our calculations. As examples, we first discuss our procedure for the transverse-field Ising model on a one-dimensional chain, a two-dimensional square lattice, and a cubic lattice in three dimensions. We then provide predictions of the critical fields for several other lattice geometries. In Sec. IV, we discuss our results and the role of the coordination number and dimensionality of a given lattice. We also compare our predictions with mean-field theory, which becomes increasingly accurate in higher dimensions. Finally, in Sec. V, we summarize our conclusions. Technical details of our neural network calculations are provided in Appendix A.
## II Methods
### Transverse-field Ising model
We consider the transverse-field Ising model on a lattice of spin-\(1/2\) sites as described by the Hamiltonian
\[\hat{\mathcal{H}}=-J\sum_{\{i,j\}}\hat{\sigma}_{i}^{z}\hat{\sigma}_{j}^{z}-h \sum_{i}\hat{\sigma}_{i}^{x}. \tag{1}\]
Here, the first sum runs over all nearest neighbors, denoted by \(\{i,j\}\), the coupling between them is \(J\), and \(h\) is the transverse magnetic field. The one-dimensional version of this model can be solved analytically and it is known to exhibit a continuous phase transition at the critical field \(h_{c}=J\)[3]. Above the critical field, the system is in a paramagnetic phase with vanishing magnetization. Below it, the system exhibits spontaneous symmetry-breaking and enters a ferromagnetic phase with a non-vanishing magnetization. In the following we will investigate the model in different dimensions and geometries. The two-dimensional systems we consider are square, honeycomb, Kagome, and triangular lattices. In three dimensions, we consider cubic, face-centred cubic, body-centred cubic, and diamond lattices. In all of these cases, we impose periodic boundary conditions, and we compare our predictions with earlier results based on large-scale quantum Monte Carlo simulations [59].
### Neural network quantum states
To find the ground state of the system together with the moments and cumulants of the magnetization, we use neural network quantum states. The neural network quantum states are variational states of the form
\[\psi_{\vec{\theta}}(\vec{\sigma})=\langle\vec{\sigma}|\psi_{\vec{\theta}}\rangle, \tag{2}\]
where the vector \(\vec{\theta}\) contains the variational parameters that we need to determine to minimize the energy and thereby find the ground state. The neural network provides a compressed algorithmic representation of the coefficients of the wavefunction, and it takes a spin configuration in the computational basis as the input, and outputs the wave function in response. The energy is minimized using stochastic reconfiguration, which is an approximate imaginary time-evolution within the variational space of the neural network. Neural network state methodologies have been extended to the time-evolution of quantum systems [60; 8; 61], quantum state tomography [62; 63; 64], as well as finite-temperature equilibrium physics [65; 66; 67]. Importantly, while many other approaches are not able to exploit the computational power of massive parallel computing, neural network quantum states can be implemented with modern graphics processing units.
The energy is evaluated by sampling over the wave function as
\[\langle\hat{\mathcal{H}}\rangle=\frac{\sum_{\vec{\sigma}\vec{\sigma}^{\prime}} \psi^{*}(\vec{\sigma})\langle\vec{\sigma}|\hat{\mathcal{H}}|\vec{\sigma}^{ \prime}\rangle\psi(\vec{\sigma}^{\prime})}{\sum_{\vec{\sigma}^{\prime}}|\psi( \vec{\sigma}^{\prime})|^{2}}=\sum_{\vec{\sigma}}P_{\psi}(\vec{\sigma}) \mathcal{H}_{\text{loc}}(\vec{\sigma}), \tag{3}\]
where we have defined the probability
\[P_{\psi}(\vec{\sigma})=\frac{|\psi(\vec{\sigma})|^{2}}{\sum_{\vec{\sigma}^{ \prime}}|\psi(\vec{\sigma}^{\prime})|^{2}} \tag{4}\]
and the local spin Hamiltonian
\[\mathcal{H}_{\rm loc}(\vec{\sigma})=\sum_{\vec{\sigma}^{\prime}}\langle\vec{\sigma}|\hat{\mathcal{H}}|\vec{\sigma}^{\prime}\rangle\frac{\psi(\vec{\sigma}^{\prime})}{\psi(\vec{\sigma})}. \tag{5}\]
Since Eq. (3) is just an average with respect to a normalized probability distribution, Markov-chain Monte Carlo can be used for evaluating the energy and the gradients [68]. It is worth noting that the spin Hamiltonian in Eq. (5) is given by only a few terms in the sum, since only nearest neighbors are coupled. We will also need the expectation value of the total magnetization and its moments, which we express as
\[\langle\hat{M}_{z}^{n}\rangle=\sum_{\vec{\sigma}}P_{\psi}(\vec{\sigma})M_{z}^{n}(\vec{\sigma}), \tag{6}\]
since \(\hat{M}_{z}\) is diagonal in the computational basis, such that \(M_{z}^{n}(\vec{\sigma})=(\langle\vec{\sigma}|\hat{M}_{z}|\vec{\sigma}\rangle)^ {n}=\langle\vec{\sigma}|\hat{M}_{z}^{n}|\vec{\sigma}\rangle\). Additional details of these calculations are provided in Appendix A.
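As a rough sketch, the sampled estimators of Eqs. (3)-(6) for the transverse-field Ising model could be implemented as follows, assuming `psi` is a callable wave-function amplitude and `samples` are spin configurations (NumPy arrays with entries ±1) drawn from \(P_{\psi}\) by Markov-chain Monte Carlo; the helper names are illustrative.

```python
import numpy as np

def local_energy(sigma, psi, J, h, bonds):
    """H_loc of Eq. (5) for the transverse-field Ising model: the diagonal
    ZZ coupling plus one off-diagonal amplitude ratio per single spin flip."""
    e = -J * sum(sigma[i] * sigma[j] for i, j in bonds)   # sigma^z sigma^z part
    for i in range(len(sigma)):
        flipped = sigma.copy()
        flipped[i] *= -1
        e += -h * psi(flipped) / psi(sigma)               # sigma^x part
    return e

def sampled_observables(samples, psi, J, h, bonds, n_max=10):
    """Monte Carlo estimates of <H> (Eq. (3)) and the magnetization
    moments <M_z^n> (Eq. (6)) from configurations drawn from P_psi."""
    E = np.mean([local_energy(s, psi, J, h, bonds) for s in samples])
    Mz = np.array([s.sum() for s in samples])
    return E, [np.mean(Mz ** n) for n in range(1, n_max + 1)]
```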
### Lee-Yang theory
The classical Lee-Yang theory of phase transitions considers the zeros of the partition function in the complex plane of the control parameter, for instance magnetic field or inverse temperature [33, 34, 35, 36]. For finite systems, the partition function zeros are situated away from the real axis. However, in case of a phase transition, they will approach the critical value on the real axis in the thermodynamic limit. One may thereby predict the occurrence of a phase transition by investigating the position of the zeros as the system size is increased. The Lee-Yang theory of phase transitions has found applications in condensed matter physics [37, 40, 41, 44, 45, 46], atomic physics [38] and particle physics [47, 48, 49, 42, 43, 69, 70, 71, 72]. Recently, it has been extended to the zeros of the moment generating function that describes the fluctuations of the order parameter [57, 58] and thereby allows for the detection of quantum phase transitions. Following this approach, we define the moment generating function
\[\chi(s)=\langle e^{s\hat{M}_{z}}\rangle=\frac{1}{g}\sum_{k=1}^{g}\langle\psi_{ k}^{(0)}|e^{s\hat{M}_{z}}|\psi_{k}^{(0)}\rangle, \tag{7}\]
where \(\hat{M}_{z}\) is the total magnetization, and \(s\) is referred to as the counting field. Here, we have included the possibility that the system may have \(g\) degenerate and normalized ground states that we denote by \(|\psi_{k}^{(0)}\rangle\), \(k=1,\ldots,g\). Within this framework, the moment generating function plays the role of the partition function in the classical Lee-Yang theory, and the cumulant generating function, \(\Theta(s)=\ln\chi(s)\), becomes the corresponding free energy. The moments and cumulants of the magnetization are given by derivatives with respect to the counting field as
\[\langle\hat{M}_{z}^{n}\rangle=\partial_{s}^{n}\chi(s)|_{s=0} \tag{8}\]
and
\[\langle\!\langle\hat{M}_{z}^{n}\rangle\!\rangle=\partial_{s}^{n}\Theta(s)|_{s =0}. \tag{9}\]
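In practice, the cumulants of Eq. (9) can be obtained from the sampled moments of Eq. (8) by the standard moment-to-cumulant recursion, sketched below; the recursion itself is textbook material rather than specific to this work.

```python
from math import comb

def cumulants_from_moments(m):
    """Raw moments m[n-1] = <M_z^n> -> cumulants <<M_z^n>> via the recursion
    kappa_n = m_n - sum_{k<n} C(n-1, k-1) * kappa_k * m_{n-k}."""
    kappa = []
    for n in range(1, len(m) + 1):
        correction = sum(comb(n - 1, k - 1) * kappa[k - 1] * m[n - k - 1]
                         for k in range(1, n))
        kappa.append(m[n - 1] - correction)
    return kappa

# sanity check on a standard Gaussian: kappa_4 = m_4 - 3 m_2^2 = 0
print(cumulants_from_moments([0.0, 1.0, 0.0, 3.0]))  # [0.0, 1.0, 0.0, 0.0]
```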
Importantly, away from a phase transition, the cumulants are expected to grow linearly with the system size, such that the normalized cumulants \(\langle\!\langle\hat{M}_{z}^{n}\rangle\!\rangle/N\) converge to finite values as the number of spins \(N\) approaches infinity. By contrast, at a phase transition, a different scaling behavior is expected due to a non-analytic behavior of the cumulant generating function at \(s=0\)[40, 73]. This non-analytic behavior emerges in the thermodynamic limit,
Figure 2: Extraction of zeros from the cumulants of the magnetization. (a) Extracted zeros for a linear Ising chain in different magnetic fields, \(h=0.6,0.7,0.8,0.9,0.95,1.0,1.05,1.1,1.15,1.2J\) (starting from the lower curve), as a function of the inverse system size, \(1/L\). The solid lines are the finite-size scaling ansatz in Eq. (11), which allows us to determine the value in the thermodynamic limit, where \(1/L\) approaches zero. (b) Similar results for a two-dimensional square lattice with the following values of the magnetic field, \(h=0.5,1.0,1.5,2.0,2.5,2.9,3.0,3.1,3.2,3.3,3.4,3.5J\) (starting from the lower curve). (c) Results for a cubic lattice in three dimensions with \(h=0.0,1.0,2.0,4.0,5.0,5.2,5.4,5.6,5.8,6.0J\) (starting from the lower curve).
if the complex zeros of the moment generating function approach \(s=0\).
To determine the position of the zeros that are closest to \(s=0\), we use the cumulant method that was developed in Refs. [52; 53; 57; 58; 40]. In this approach, the zeros of the moment generating function can be determined from the high cumulants of the order parameter. By doing so for different system sizes, we can then find the convergence points in the thermodynamic limit using finite-size scaling [57; 58; 40; 52]. The cumulant method allows us to express the zeros in terms of the high cumulants of the magnetization. Moreover, for the transverse-field Ising model, the symmetry, \(\hat{U}^{\dagger}\hat{H}\hat{U}=\hat{H}\), with respect to the unitary operator \(\hat{U}=\prod_{i}\hat{\sigma}_{i}^{x}\) that flips all spins, implies that all odd cumulants vanish, and in this model the zeros are purely imaginary [57; 58]. In that case, the zeros that are closest to \(s=0\) can be approximated as [58]
\[\mathrm{Im}(s_{0})\simeq\sqrt{2n(2n+1)\left|\langle\!\langle\hat{M}_{z}^{2n}\rangle\!\rangle/\langle\!\langle\hat{M}_{z}^{2n+2}\rangle\!\rangle\right|} \tag{10}\]
for large enough cumulant orders, \(n\gg 1\). Thus, in the following, we find the zeros from the high magnetization cumulants, which we calculate using neural network quantum states, and we ensure that the results from Eq. (10) are unchanged if we increase the cumulant order. We then use the scaling ansatz [57; 58]
\[\mathrm{Im}(s_{0})\simeq\mathrm{Im}(s_{0,c})+\alpha L^{-\gamma} \tag{11}\]
to predict the convergence point, \(\mathrm{Im}(s_{0,c})\), in the thermodynamic limit, where \(L\to\infty\) is the linear system size. We carry out this procedure for different magnetic fields to find the critical field, where the zeros reach \(s=0\), and the system exhibits a phase transition.
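The two steps, extracting zeros from cumulant ratios via Eq. (10) and extrapolating with the ansatz of Eq. (11), can be sketched as follows; the system sizes and the Im(s0) values below are synthetic placeholders standing in for the values extracted from the neural-network cumulants.

```python
import numpy as np
from scipy.optimize import curve_fit

def zero_from_cumulants(kappa, n):
    """Eq. (10) with 0-indexed kappa[i] = <<M_z^{i+1}>>; odd cumulants
    vanish by the spin-flip symmetry, so only even orders enter."""
    return np.sqrt(2 * n * (2 * n + 1)
                   * abs(kappa[2 * n - 1] / kappa[2 * n + 1]))

def ansatz(L, s_c, alpha, gamma):
    """Finite-size scaling ansatz of Eq. (11)."""
    return s_c + alpha * L ** (-gamma)

# illustrative fit on synthetic data for several linear sizes L
L_vals = np.array([20.0, 40.0, 60.0, 80.0, 100.0])
im_s0 = ansatz(L_vals, 0.05, 1.2, 1.0)
(s_c, alpha, gamma), _ = curve_fit(ansatz, L_vals, im_s0, p0=[0.0, 1.0, 1.0])
print(s_c)  # extrapolated convergence point Im(s_{0,c})
```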
## III Results
### Extracted zeros
Figure 2 shows zeros obtained for the transverse-field Ising model in one (chain), two (square), and three (cube) dimensions. In each case, we have determined the zeros from Eq. (10) using magnetization cumulants of up to order \(n=10\) for a fixed magnetic field and a given system size. We then obtain the imaginary part of the zeros, and using the finite-size scaling ansatz from Eq. (11), we find the convergence point in the thermodynamic limit as illustrated in the figure. As an example, we see in Fig. 2a how the zeros eventually reach \(s=0\) as we decrease the magnetic field from above to \(h\simeq J\), where the system exhibits a quantum phase transition. In Figs. 2b and 2c, we show similar results for the two-dimensional square lattice and for the three-dimensional cubic lattice. For increased dimensionality, we observe that the quantum phase transitions occurs at higher magnetic fields, as expected for an increasing number of nearest neighbors. In one dimension, we use chains of up to a length of \(L=100\). For the two-dimensional square lattices, we consider systems of sizes up to \(L\times L=10\times 10\), while in three dimensions, the biggest lattice is of size \(L\times L\times L=4\times 4\times 4\). The figure includes small error bars that represent sampling errors in the neural network quantum states. We note that additional errors could potentially arise from small inaccuracies in the variational ground state.
The results for the three different geometries are combined in Fig. 3, where we show the extracted convergence points as a function of the transverse magnetic field. The extrapolation is performed by a constrained minimization of \(\mathrm{Im}(s_{0,c})\), imposing that the imaginary part is not negative. At large magnetic fields, the systems are in the paramagnetic phase with the spins mostly pointing
Figure 3: Convergence points of the zeros in the thermodynamic limit. (a) Convergence points for a linear Ising chain as a function of the magnetic field. A quantum phase transition occurs at \(h_{c}=1.00J\), where the curve exhibits a kink, and the zeros reach the real-axis. Above the critical field, the system is in the paramagnetic phase, while it is in the ferromagnetic phase below it. (b,c) Similar results for the two-dimensional square lattice (b) and the cubic lattice in three dimensions (c).
along the direction of the field. In that case, the zeros of the moment generating function do not converge to \(s=0\) in the thermodynamic limit. By contrast, as the magnetic field is lowered, the zeros eventually reach \(s=0\), signaling a quantum phase transition. Based on our calculations, we estimate the critical fields to be \(h_{c}=1.00J\) for the one-dimensional chain, \(h_{c}=3.05J\) for the two-dimensional square lattice, and \(h_{c}=5.16J\) for the three-dimensional cubic lattice. These values all differ by less than \(1\%\) from other numerical results [59]. Below the critical field, the zeros also reach \(s=0\), since the system is in the ferromagnetic phase with spontaneous magnetization. In that case, the ground state is two-fold degenerate, and the system will exhibit an abrupt change if a small magnetic field is applied in the \(z\)-direction.
### Critical magnetic fields
We have considered other geometries in two and three dimensions as illustrated in Fig. 4, where we show results for a honeycomb lattice, a Kagome lattice, and a diamond lattice. The honeycomb lattice has two sites per unit cell, and we restrict ourselves to a linear dimension of \(L=8\), which corresponds to \(2\times L^{2}=128\) sites. Similarly, for the Kagome lattice, we go up to \(L=6\), while for the diamond lattice, we consider systems of linear size up to \(L=4\), which corresponds to \(2\times L^{3}=128\) sites. The results in Fig. 4 are qualitatively similar to those in Fig. 3, but with different critical fields. In particular, we find \(h_{c}=2.14J\) for the honeycomb lattice, \(h_{c}=2.95J\) for the Kagome lattice, and \(h_{c}=3.20J\) for the diamond lattice.
The predictions of the critical fields are summarized in Table 1, where we also show results for triangular lattices in two dimensions and face-centred cubic (FCC) and body-centred cubic (BCC) lattices in three dimensions. The results are ordered according to the dimension \(D\) as well as the number of nearest neighbors, the coordination number \(C\). In addition, we indicate the maximum linear dimension that we have used, \(L_{\rm max}\), and the number of sites in a unit cell, \(N_{\rm cell}\). Those parameters control the maximum number of spins in the lattice that we have considered, \(N_{\rm max}\). The last column contains the critical magnetic fields that we predict with the combination of Lee-Yang theory and neural network quantum states. We note that our methodology provides accurate predictions even with a rather low number of lattice sites.
## IV Discussion
### Dimensionality and lattice geometry
The importance of the lattice geometry and the dimension of the system can be understood from the results in Table 1. The chain and the honeycomb lattice, which have the lowest coordination numbers, also have the lowest critical fields. The coordination numbers are larger for the Kagome and the square lattices, where each spin has four nearest neighbors, as well as for the triangular lattice with six nearest neighbors, and we see that the critical fields increase accordingly. For the lattices in three dimensions, the coordination numbers and the critical fields are even larger. Despite this general behavior, we also see that lattices with the same dimension and coordination number (the square and Kagome lattices) still have different critical fields, which are directly related to their specific lattice geometries.
Figure 4: Convergence points of the zeros in the thermodynamic limit. (a) Convergence points for the honeycomb lattice as a function of the magnetic field. A quantum phase transition occurs at \(h_{c}\approx 2.14J\), where the curve exhibits a kink and the zeros reach the real axis. Above the critical field, the system is in the paramagnetic phase, while it is in the ferromagnetic phase below it. (b,c) Similar results for the Kagome lattice (b) and the diamond lattice (c).
### Mean-field approximation
To better understand the role of the coordination number, we show in Fig. 5 the critical fields as a function of the coordination number. In Fig. 5a, we see the clear trend that the critical fields increase with the coordination number. Indeed, within a simple mean-field approximation, we would expect that the critical field is directly related to the coordination number as \(h_{c}^{\rm MF}=CJ\) [74]. We show this mean-field approximation with a dashed line in the figure and find good qualitative agreement with our predictions. We also see that our results come closer to the mean-field approximation as the dimension of the system is increased. In particular, it is clear that the critical field for the one-dimensional chain is furthest away from the mean-field approximation, while the results for the three-dimensional lattices are much closer.
To further support these observations, we show in Fig. 5b the ratio of the critical fields over the mean-field approximation. This ratio allows us to characterize how the relative deviations from the mean-field prediction decrease for larger coordination numbers. Still, we see that the critical fields are all smaller than the mean-field approximation, which ignores quantum fluctuations. The results for the critical fields in three dimensions are closer to the mean-field approximation as compared with one and two dimensions. This observation is in line with the expectation that mean-field theory becomes more accurate in higher dimensions.
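The ratios plotted in Fig. 5b follow directly from the critical fields in Table 1; a few lines of Python suffice to reproduce them (lattice names and values are taken from the table).

```python
# Ratios h_c / h_c^MF = h_c / (C J) computed from the values in Table 1
lattices = {  # name: (coordination number C, h_c / J)
    "Chain": (2, 1.00),  "Honeycomb": (3, 2.14),  "Kagome": (4, 2.95),
    "Square": (4, 3.05), "Triangular": (6, 4.78), "Diamond": (4, 3.20),
    "Cubic": (6, 5.16),  "BCC": (8, 7.10),        "FCC": (12, 10.8),
}
for name, (C, hc) in lattices.items():
    print(f"{name:10s}  C = {C:2d}  h_c/(C J) = {hc / C:.2f}")
```

Running this shows the ratio climbing from 0.50 for the chain to 0.90 for the FCC lattice, in line with the trend in Fig. 5b.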
## V Conclusions
We have combined a Lee-Yang theory of quantum phase transitions with neural network quantum states to predict the critical field of the transverse-field Ising model in different dimensions and lattice geometries. Specifically, we have used neural network quantum states to find the ground state of the interacting spin system, which further makes it possible to extract the cumulants of the magnetization. From these cumulants, we determine the complex zeros of the moment-generating function, which reach the real axis in the thermodynamic limit if the system exhibits a phase transition. Our method works with rather small systems, which in turn allows us to treat lattices in two and three dimensions. Our predictions agree well with results that were obtained using large-scale quantum many-body methods. We have also analyzed the differences between our predictions and a simple mean-field approximation, which becomes increasingly accurate for higher coordination numbers and dimensions. Thanks to the flexibility of neural network quantum states, the method can potentially treat
Figure 5: Comparison with mean-field theory. (a) The critical fields are shown as functions of the coordination number, \(C\). The dashed line is a simple mean-field approximation that directly links the critical field to the coordination number as \(h_{c}^{\rm MF}=CJ\). (b) The ratio of the critical fields over the mean-field approximation as functions of the coordination number, \(C\). For large coordination numbers and dimensions, the critical fields approach the mean-field approximation indicated with a dashed line.
| Lattice | \(D\) | \(C\) | \(L_{\rm max}\) | \(N_{\rm cell}\) | \(N_{\rm max}\) | \(h_{c}/J\) |
| :--- | :---: | :---: | :---: | :---: | :---: | :---: |
| Chain | 1 | 2 | 60 | 1 | 60 | 1.00 |
| Honeycomb | 2 | 3 | 8 | 2 | 128 | 2.14 |
| Kagome | 2 | 4 | 6 | 3 | 108 | 2.95 |
| Square | 2 | 4 | 10 | 1 | 100 | 3.05 |
| Triangular | 2 | 6 | 10 | 1 | 100 | 4.78 |
| Diamond | 3 | 4 | 4 | 2 | 128 | 3.20 |
| Cubic | 3 | 6 | 4 | 1 | 64 | 5.16 |
| BCC | 3 | 8 | 4 | 1 | 64 | 7.10 |
| FCC | 3 | 12 | 4 | 1 | 64 | 10.8 |
Table 1: Summary of critical fields. For each lattice, we indicate the dimension, \(D\), and the coordination number, \(C\). We also show the maximum linear dimension, \(L_{\rm max}\), and the number of sites per unit cell, \(N_{\rm cell}\), which give the maximum number of sites that we have used as \(N_{\rm max}=N_{\rm cell}\times L_{\rm max}^{D}\). The last column contains our predictions of the critical field.
frustrated problems, in stark contrast to quantum Monte Carlo approaches that suffer from sign problems. Our results show that the combination of Lee-Yang theories of phase transitions with neural network quantum states provides a viable way forward to predict the phase behavior of complex quantum many-body systems such as Heisenberg models and fermionic Hubbard models.
###### Acknowledgements.
We acknowledge the computational resources provided by the Aalto Science-IT project and the support from the Finnish National Agency for Education (Opetushallitus), the Academy of Finland through Grants No. 331342 and No. 336243 and the Finnish Centre of Excellence in Quantum Technology (Projects No. 312057 and No. 312299), and from the Jane and Aatos Erkko Foundation.
## Appendix A Details of calculations
All of our calculations were implemented in NetKet 3.3 [68, 75]. In one dimension, we found that a restricted Boltzmann machine works well, while in two dimensions, a group convolutional neural network performs better. In three dimensions, we used a simple and shallow symmetric architecture with real weights, which is sufficient since the transverse-field Ising model is stoquastic.
In one dimension, we used a simple real restricted Boltzmann machine with \(\alpha=20\) hidden units per visible unit. For each training iteration, 8192 samples were used, taken from 128 parallel chains. The network was trained for 3000 iterations with a learning rate of 0.02, and then for a further 1000 iterations with a learning rate of 0.01. Stochastic reconfiguration with a diagonal shift of 0.01 was used.
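For concreteness, a minimal NetKet sketch of this one-dimensional setup is given below. The keyword names and NetKet's Ising sign convention (where \(J<0\) is ferromagnetic) are assumptions that may need adjusting for the installed release.

```python
# A hedged sketch of the 1D variational ground-state search with NetKet.
import netket as nk

g = nk.graph.Chain(length=60, pbc=True)
hi = nk.hilbert.Spin(s=1 / 2, N=g.n_nodes)
# Transverse-field Ising Hamiltonian; J = -1 assumed ferromagnetic here.
H = nk.operator.Ising(hilbert=hi, graph=g, h=1.0, J=-1.0)

model = nk.models.RBM(alpha=20)  # real RBM, 20 hidden units per spin
sampler = nk.sampler.MetropolisLocal(hi, n_chains=128)
vstate = nk.vqs.MCState(sampler, model, n_samples=8192)

gs = nk.driver.VMC(H, nk.optimizer.Sgd(learning_rate=0.02),
                   variational_state=vstate,
                   preconditioner=nk.optimizer.SR(diag_shift=0.01))
gs.run(n_iter=3000)  # followed by 1000 further iterations at rate 0.01
```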
In two dimensions, we used a group convolutional neural network [76, 15] defined over the group of all translations with four layers of feature dimension 8 each and complex parameters. We used 32 parallel Markov chains constructed using a Metropolis algorithm with local updates, and we took 1024 samples per iteration step. Stochastic reconfiguration with a diagonal shift of 0.01 was used, and the network was trained with a learning rate of 0.01 for 2000 iterations. If necessary, we trained the network multiple times and chose the network with the lowest variance of the energy.
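The two-dimensional setup can be sketched analogously with NetKet's group convolutional network over all translations; the `param_dtype` keyword (named `dtype` in some older releases) and the sign convention are again assumptions.

```python
# A hedged sketch of the 2D setup with a group convolutional network.
import netket as nk

g = nk.graph.Hypercube(length=10, n_dim=2, pbc=True)  # square lattice
hi = nk.hilbert.Spin(s=1 / 2, N=g.n_nodes)
H = nk.operator.Ising(hilbert=hi, graph=g, h=3.0, J=-1.0)

model = nk.models.GCNN(symmetries=g.translation_group(),
                       layers=4, features=8, param_dtype=complex)
sampler = nk.sampler.MetropolisLocal(hi, n_chains=32)
vstate = nk.vqs.MCState(sampler, model, n_samples=1024)

gs = nk.driver.VMC(H, nk.optimizer.Sgd(learning_rate=0.01),
                   variational_state=vstate,
                   preconditioner=nk.optimizer.SR(diag_shift=0.01))
gs.run(n_iter=2000)
```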
In three dimensions, we applied a dense symmetric layer with real weights and 40 features to the input, activated it with the ReLU function, and summed the result to obtain the wave function. We used a Markov chain with local Metropolis updates, 128 parallel chains, and 8192 samples per training step. A learning rate of 0.002 and stochastic reconfiguration with a diagonal shift of 0.01 were applied. We then trained the network for 2000 iterations. If necessary, we ran this training multiple times for the same configuration (system size and magnetic field) and chose the network parameters that resulted in the lowest variance of the ground-state energy, so that the network was as close as possible to an eigenstate of the Hamiltonian.
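A hedged sketch of such a shallow symmetric ansatz is shown below; the `DenseSymm` input conventions vary across NetKet versions, so the explicit reshape is an assumption rather than part of the described method.

```python
# A hedged sketch of the 3D ansatz: one symmetric dense layer with real
# weights and 40 features, ReLU activation, and a final sum giving the
# (real) log-amplitude of a positive, stoquastic-friendly wave function.
import flax.linen as nn
import jax.numpy as jnp
import netket as nk

class ShallowSymm(nn.Module):
    symmetries: object  # e.g. nk.graph.Hypercube(4, n_dim=3).translation_group()

    @nn.compact
    def __call__(self, x):
        x = x.reshape(x.shape[0], 1, x.shape[-1])  # (batch, features, sites)
        x = nk.nn.DenseSymm(symmetries=self.symmetries,
                            features=40, param_dtype=float)(x)
        x = nn.relu(x)
        return jnp.sum(x, axis=(-1, -2))  # sum over features and group elements
```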
We evaluated the moments of the magnetization using regular sampling with an unbiased Markov chain, since
\[\langle\hat{M}_{z}^{n}\rangle=\sum_{\bar{\sigma}}P_{\psi}(\bar{\sigma})\,M_{z}^{n}(\bar{\sigma})=\sum_{\bar{\sigma}}P_{\psi}(\bar{\sigma})\left[\sum_{i}\sigma_{i}\right]^{n}, \tag{A1}\]
where \(P_{\psi}(\bar{\sigma})=|\langle\bar{\sigma}|\psi\rangle|^{2}\) is the probability of sampling the spin configuration \(\bar{\sigma}\) from the variational state.
For the two- and three-dimensional lattices with up to \(N_{\text{max}}=128\) sites, we took \(100\times 1024\times 128\simeq 13\) million samples. For the one-dimensional lattice, we took up to \(1000\times 1024\times 128\simeq 131\) million samples. For the sampling, we used 128 parallel chains and discarded the first 64 samples of each chain. From the moments, we then obtained the cumulants using the standard recursion relation between moments and cumulants, as sketched below.
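The two steps can be sketched compactly in Python. The sampling helper assumes a trained NetKet state `vs` and Hilbert space `hi` as in the sketches above; the recursion \(\kappa_n=m_n-\sum_{k=1}^{n-1}\binom{n-1}{k-1}\kappa_k\,m_{n-k}\) is the standard moments-to-cumulants relation.

```python
# A minimal sketch: estimate magnetization moments by sampling a trained
# variational state, then convert moments to cumulants via the standard
# recursion. `vs` and `hi` are assumed from the training sketches above.
from math import comb
import numpy as np

def magnetization_moments(vs, hi, n_max=10):
    sigma = np.asarray(vs.samples).reshape(-1, hi.size)  # spin configs, +/-1
    m_z = sigma.sum(axis=1)                              # M_z for each sample
    return np.array([np.mean(m_z**n) for n in range(n_max + 1)])

def moments_to_cumulants(m):
    # kappa_n = m_n - sum_{k=1}^{n-1} C(n-1, k-1) kappa_k m_{n-k}
    kappa = np.zeros(len(m))
    for n in range(1, len(m)):
        kappa[n] = m[n] - sum(comb(n - 1, k - 1) * kappa[k] * m[n - k]
                              for k in range(1, n))
    return kappa  # kappa[0] is unused
```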
|